Computation, Volume 14, Issue 4 (April 2026) – 21 articles

Cover Story: This study presents a comprehensive comparative analysis of supervised and unsupervised learning paradigms for network intrusion detection using real-world institutional logs. As cyber threats become increasingly sophisticated, the ability to distinguish rare anomalies from normal traffic is paramount. Our research evaluates a diverse range of machine learning and deep learning architectures, examining the trade-offs between labeled data dependency and autonomous anomaly reconstruction. The findings reveal that supervised approaches significantly outperform unsupervised methods in complex network environments, particularly in minimizing false negatives. By providing a robust framework for model selection, this study offers critical insights for developing resilient, AI-driven security systems capable of protecting modern digital infrastructures.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view papers in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open them.
22 pages, 6124 KB  
Article
SOC-Dependent Soft Current Limiting for Second-Life Lithium-Ion Batteries in Off-Grid Photovoltaic Battery Energy Storage Systems
by Hongyan Wang, Pathomthat Chiradeja, Atthapol Ngaopitakkul and Suntiti Yoomak
Computation 2026, 14(4), 95; https://doi.org/10.3390/computation14040095 - 19 Apr 2026
Viewed by 403
Abstract
The increasing deployment of off-grid photovoltaic–battery energy storage systems (PV–BESSs) has intensified operational demands on battery energy storage, particularly when second-life lithium-ion batteries are employed. Due to aging-induced increases in internal resistance and reduced thermal margins, second-life batteries are more vulnerable to high-current operation at a low state-of-charge (SOC), which aggravates heat generation and accelerates degradation. In this study, an SOC-dependent soft current limiting strategy is proposed that reshapes the discharge current reference under low-SOC conditions while maintaining fixed SOC limits, thereby targeting current-domain protection rather than SOC-boundary adaptation for reliable off-grid operation. The proposed method introduces two SOC thresholds to gradually derate the allowable discharge current, preventing abrupt current changes near the lower SOC bound. A unified MATLAB/Simulink-based framework is developed for a 24 h representative off-grid PV–BESS scenario using a second-order equivalent circuit model coupled with a lumped thermal model. Simulation results show that the proposed current shaping reduces low-SOC current stress and associated Joule heating, leading to moderated temperature rise, while only slightly affecting the unmet load under the tested conditions. These findings indicate that SOC-dependent current shaping can provide a control-oriented means to reduce low-SOC electro-thermal stress in second-life batteries within the studied off-grid PV–BESS framework. Full article
(This article belongs to the Section Computational Engineering)
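The two-threshold derating described in the abstract can be sketched as a simple piecewise-linear current limit. The threshold values, floor fraction, and linear ramp below are illustrative assumptions, not the paper's calibrated parameters:

```python
def soft_current_limit(soc, i_max, soc_hi=0.30, soc_lo=0.10, i_floor=0.2):
    """Derate the allowable discharge current between two SOC thresholds.

    Above soc_hi the full current i_max is allowed; below soc_lo only
    i_floor * i_max remains; in between the limit ramps down linearly,
    avoiding abrupt current steps near the lower SOC bound.
    """
    if soc >= soc_hi:
        return i_max
    if soc <= soc_lo:
        return i_floor * i_max
    frac = (soc - soc_lo) / (soc_hi - soc_lo)
    return (i_floor + (1.0 - i_floor) * frac) * i_max
```

Because the limit is continuous at both thresholds, the battery management loop can apply it every control step without introducing current discontinuities.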

19 pages, 2605 KB  
Article
Sequential H2 Adsorption on the Aromatic Li6 Superatom: Field-Activated Physisorption and Thermodynamic Limits
by Karen Ochoa Lara, Jancarlo Gomez-Vega, Rafael Pacheco-Contreras and Octavio Juárez-Sánchez
Computation 2026, 14(4), 94; https://doi.org/10.3390/computation14040094 - 17 Apr 2026
Viewed by 246
Abstract
Understanding the intrinsic Li–H2 interaction, decoupled from substrate effects, is essential to rationalize the performance of lithium-decorated hydrogen storage materials. To address the current lack of a clean theoretical baseline, we characterized the sequential H2 adsorption on the gas-phase Li6 superatomic cluster using high-level density functional theory (DFT), complemented by Energy Decomposition Analysis (EDA), QTAIM, and NICS(0) calculations. Li6 acts as a structurally rigid platform (RMSD < 0.032 Å) where ligand-induced polarization progressively strengthens its σ-aromaticity (NICS(0) from −2.917 to −13.98 ppm) and increases the HOMO–LUMO gap up to 5.05 eV. EDA identifies the binding as field-activated physisorption, electrostatically dominated (65–67%) and mechanistically distinct from Kubas coordination, as confirmed by QTAIM closed-shell interaction parameters. Negative cooperativity governs an effective loading capacity of n = 2 molecules under cryogenic conditions (Teq = 143.76 and 114.64 K), while an entropic bottleneck renders higher loading non-spontaneous at all temperatures. These results establish Li6(H2)n as a foundational gas-phase reference, providing a systematic, contamination-free descriptor set for the intrinsic Li–H2 interaction. This framework is essential for isolating the electronic role of the lithium superatom and unambiguously identifying substrate-induced modulations in supported hydrogen storage materials. Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
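The spontaneity limits quoted in the abstract follow from ΔG = ΔH − TΔS: when both ΔH and ΔS of an adsorption step are negative, the step is spontaneous only below Teq = ΔH/ΔS, and when ΔH ≥ 0 with ΔS < 0 no temperature makes it spontaneous (the entropic bottleneck). A minimal sketch, with hypothetical enthalpy/entropy values in the usage, not the paper's computed ones:

```python
def t_equilibrium(delta_h, delta_s):
    """Temperature (K) at which Delta G = Delta H - T * Delta S crosses zero,
    for a step with Delta H < 0 and Delta S < 0 (typical physisorption).

    Returns None when Delta H >= 0 and Delta S <= 0: Delta G is then
    positive at every temperature, so the step is never spontaneous.
    """
    if delta_h >= 0 and delta_s <= 0:
        return None
    return delta_h / delta_s
```

For example, `t_equilibrium(-10000.0, -70.0)` (J/mol and J/(mol·K)) gives roughly 143 K, i.e. the step is spontaneous only under cryogenic conditions.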

22 pages, 1998 KB  
Article
Attention-Based Transformer Framework with Predictive Uncertainty Quantification for Multi-Crop Yield Forecasting
by Bharat Lal, Abhinav Shukla, Ayush Kumar Agrawal, R Kanesaraj Ramasamy and Parul Dubey
Computation 2026, 14(4), 93; https://doi.org/10.3390/computation14040093 - 15 Apr 2026
Viewed by 421
Abstract
Accurate crop yield forecasting is essential for ensuring food security, optimizing agricultural resource allocation, and supporting climate-resilient farming systems. Recent advances in deep learning have improved yield prediction accuracy; however, most existing models provide deterministic estimates without quantifying predictive uncertainty. This limitation restricts their reliability under climatic variability, missing data, and real-world decision-making scenarios where risk awareness is critical. This study utilizes two publicly available multi-crop datasets comprising historical yield records integrated with weather and soil attributes across multiple growing seasons. An attention-based Transformer framework is proposed, augmented with uncertainty quantification through Monte Carlo Dropout, Quantile Regression, and Bayesian Attention mechanisms. The proposed approach represents an integrated uncertainty-aware Transformer framework that combines temporal self-attention with complementary uncertainty estimation strategies. The contribution of this work lies in the systematic integration and comparative evaluation of multiple uncertainty quantification mechanisms within a unified deep learning framework for multi-crop yield forecasting. Experimental results demonstrate improved predictive accuracy and calibration compared to deterministic baselines. However, these findings are bounded by the scope of the datasets, which consist of coarse tabular climatic and soil variables, and should be interpreted accordingly. Full article
(This article belongs to the Section Computational Engineering)
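Of the three uncertainty mechanisms named in the abstract, Quantile Regression is the simplest to illustrate: training under the pinball loss at level q recovers the q-th conditional quantile of yield, and a pair of quantiles brackets a prediction interval. A generic sketch of the loss, not the authors' implementation:

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for one prediction at quantile level q.

    Under-prediction (y_true > y_pred) is penalised with weight q and
    over-prediction with weight (1 - q); minimising the average loss
    over a dataset yields the q-th conditional quantile.
    """
    err = y_true - y_pred
    return max(q * err, (q - 1.0) * err)
```

Training one head at q = 0.05 and another at q = 0.95 would give a nominal 90% prediction interval for each forecasted yield.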

27 pages, 2093 KB  
Article
Comparative Analysis of Supervised and Unsupervised Learning for Intrusion Detection in Network Logs
by Paulo Castro, Fernando Santos and Pedro Lopes
Computation 2026, 14(4), 92; https://doi.org/10.3390/computation14040092 - 15 Apr 2026
Viewed by 510
Abstract
The escalating complexity of network infrastructures and the increasing sophistication of cyber threats require increasingly robust and automated Intrusion Detection Systems (IDS). This article presents a comparative investigation of the effectiveness of various Machine Learning and Deep Learning architectures in detecting anomalies in network logs. The methodology ranged from classic supervised and ensemble algorithms, such as Random Forest and XGBoost, to sequential Deep Learning approaches (LSTM, GRU) and unsupervised models based on latent reconstruction (VAE, DeepLog). The results demonstrate that supervised approaches significantly outperformed unsupervised methods in the analyzed context. The optimized XGBoost model established a performance benchmark, achieving a Recall of 0.96 and a Precision of 0.85, thereby offering an optimal balance between detecting rare threats and minimizing false alarms. In contrast, unsupervised models revealed critical limitations, suggesting that statistical mimicry between normal and anomalous traffic hinders detection based solely on reconstruction error. Additionally, the study documents the technical interoperability challenges encountered when attempting to integrate state-of-the-art language models, such as BERT. In conclusion, this work validates the effectiveness of Gradient Boosting algorithms and recurrent networks as viable and scalable solutions for critical network security, providing guidelines for model selection in real monitoring environments. Full article
(This article belongs to the Section Computational Engineering)
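The reported Recall of 0.96 and Precision of 0.85 follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts chosen only to reproduce figures of that magnitude:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts.

    Precision = TP / (TP + FP): fraction of raised alarms that are real.
    Recall    = TP / (TP + FN): fraction of real intrusions detected.
    F1 is their harmonic mean.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For instance, 96 true positives with 17 false alarms and 4 missed intrusions give recall 0.96 and precision just under 0.85, matching the trade-off the abstract describes between catching rare threats and limiting false alarms.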

19 pages, 1237 KB  
Article
Reinforcement Learning-Based Inverse Design of Multilayer Particles
by Zhaohui Li, Fang Gao and Delian Liu
Computation 2026, 14(4), 91; https://doi.org/10.3390/computation14040091 - 10 Apr 2026
Viewed by 409
Abstract
Multilayered particles possess exceptional optical properties and hold significant potential for applications in chemical analysis, life sciences, optical sensing, and photonic integration. In practical applications, however, it is often necessary to perform inverse design of multilayered particles with given optical characteristics to meet specific requirements, a process that remains time-consuming. To overcome this challenge, we propose a reinforcement learning-based method for the automated design of multilayered particles. Leveraging the self-learning capacity of reinforcement learning models in combination with an optical characteristics calculation model, the method iteratively determines particle parameters that fulfill the desired optical responses. This method effectively addresses the many-to-one parameter mapping problem in inverse design, eliminates the need for extensive pre-computations, and provides an innovative approach to the automated design of complex nanostructures. Full article
(This article belongs to the Section Computational Engineering)
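The self-learning loop described above can be grounded with the standard tabular Q-learning update, where the reward would score how closely the computed optical response of a candidate layer configuration matches the target. This generic update is an assumption about the mechanism, not the paper's actual architecture:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).

    q is a dict of dicts: q[state][action] -> value. An unseen next_state
    contributes a bootstrap value of 0.
    """
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q[state][action]
```

In an inverse-design setting, states would encode the partial particle specification, actions the choice of the next layer's material or thickness, and the reward the negative mismatch returned by the optical characteristics calculation model.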

17 pages, 4841 KB  
Article
Two-Dimensional Anomalous Solute Transport in a Two-Zone Fractal Porous Medium
by B. Kh. Khuzhayorov, F. B. Kholliev, A. I. Usmonov, B. Rushi Kumar and K. K. Viswanathan
Computation 2026, 14(4), 90; https://doi.org/10.3390/computation14040090 - 9 Apr 2026
Viewed by 273
Abstract
This study addresses a two-dimensional anomalous solute transport process within a two-zone fractal porous medium. A mathematical formulation is developed to characterise transport phenomena in a non-homogeneous porous domain. The medium consists of two interacting regions: one containing mobile fluid and the other containing immobile fluid, between which mass transfer occurs. In the mobile-fluid region, solute transport is governed by the convection–diffusion equation. In contrast, the immobile-fluid region is described using a first-order kinetic model. The problem of solute injection through a designated boundary point is formulated and numerically implemented. The effects of anomalous transport behaviour on solute migration and filtration characteristics are examined. The study further evaluates the pressure field, filtration velocity distribution, and solute concentration in both zones. Full article
(This article belongs to the Section Computational Engineering)
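The inter-zone coupling described above, first-order kinetic mass transfer between the mobile and immobile fluid regions, can be isolated in a minimal explicit time step. The equal-volume assumption and rate constant are illustrative; the convection–diffusion and fractal (fractional-order) terms of the full model are omitted:

```python
def exchange_step(c_mobile, c_immobile, omega, dt):
    """One explicit Euler step of first-order mass transfer between zones:

        d c_im / dt = omega * (c_m - c_im)

    The mobile zone loses exactly what the immobile zone gains (equal
    zone volumes assumed for simplicity), so total mass is conserved.
    """
    flux = omega * (c_mobile - c_immobile) * dt
    return c_mobile - flux, c_immobile + flux
```

Each cell of the mobile-zone convection–diffusion solver would apply a step like this to account for the sink/source term coupling it to the immobile zone.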

39 pages, 10346 KB  
Article
Feature-Based Population Initialization for Evolutionary Optimization of Machine Learning Models in Short-Term Solar Power Forecasting
by Aleksei Vakhnin, Harri Niska, Anders V. Lindfors and Mikko Kolehmainen
Computation 2026, 14(4), 89; https://doi.org/10.3390/computation14040089 - 8 Apr 2026
Viewed by 460
Abstract
Nowadays, solar energy is becoming one of the most popular sources of renewable energy worldwide. Traditional fossil fuels cause pollution and climate change, while solar power offers a clean and sustainable alternative. However, effective planning requires accurate prediction of the amount of solar energy that can be produced. Prediction accuracy directly depends on two factors: the model’s hyperparameters and the feature set. In this study, we use boosting models, such as LightGBM, XGBoost, and CatBoost, to forecast solar power production. The prediction horizon is 60 min, which corresponds to short-term forecasting. Model tuning is performed using the NSGA-II multi-objective optimization algorithm, which simultaneously tunes the hyperparameters and feature set of the boosting models. We aim to enhance the performance of the NSGA-II algorithm in its early stages using the proposed method for generating the initial population, which is based on an ensemble of filtering methods. The proposed approach promotes faster convergence in the early stages of the algorithm compared to the traditional initialization method. The results of the numerical experiments are confirmed by the Wilcoxon test. Full article
(This article belongs to the Section Computational Engineering)
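Two building blocks of the approach are easy to make concrete: the Pareto-dominance test at the heart of NSGA-II's non-dominated sorting, and seeding an initial feature mask from an ensemble of filter scores. Both functions below are generic sketches under those assumptions, not the authors' code:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def seeded_individual(scores, k):
    """Initial feature mask (bit vector) that switches on the k features
    ranked highest by an averaged ensemble of filter scores."""
    top = set(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k])
    return [1 if i in top else 0 for i in range(len(scores))]
```

Seeding part of the initial population this way biases the first generations toward informative feature subsets while random individuals preserve diversity, which is consistent with the faster early convergence the abstract reports.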

22 pages, 799 KB  
Article
A Comparative Study of Imbalance-Handling Methods in Multiclass Predictive Maintenance
by Mohammed Alnahhal, Mosab I. Tabash, Samir K. Safi, Mujeeb Saif Mohsen Al-Absy and Zokir Mamadiyarov
Computation 2026, 14(4), 88; https://doi.org/10.3390/computation14040088 - 7 Apr 2026
Viewed by 460
Abstract
Predictive maintenance plays a key role in digitalization initiatives; however, in real settings, issues related to failure prediction occur when failure instances are rare compared to normal instances, leading to class imbalance. In this study, we systematically compare five machine learning (ML) models—random forest, XGBoost, support vector machine, k-nearest neighbors, and multinomial logistic regression (MLR)—to detect multiclass rare failures using four imbalance-handling approaches (i.e., no handling, manual oversampling, selective manual oversampling, and class weighting), forming 20 configurations. Using the AI4I 2020 predictive maintenance dataset, which contains five failure types, we determined that XGBoost with no handling achieved the highest macro-averaged F1 (macro-F1) score (0.842) but obtained 0% recall for tool wear failure (TWF). MLR with selective manual oversampling achieved approximately 50% TWF recall with lower overall performance (0.636 macro-F1) than top-performing models such as XGBoost. We also found that very rare classes remain difficult to detect. Even high-performing models fail to consistently detect all five failure types. Overall, no single strategy can achieve a high detection rate across all performance measures. Full article
(This article belongs to the Section Computational Engineering)
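Of the four imbalance-handling approaches compared, class weighting is the most compact to illustrate. The "balanced" heuristic w_c = N / (K · n_c) shown below is one common choice, assumed here for illustration rather than taken from the paper:

```python
def balanced_class_weights(counts):
    """'Balanced' class weights w_c = N / (K * n_c) for K classes with
    n_c examples each: rare classes receive proportionally larger
    weights in the training loss, majority classes smaller ones."""
    n = sum(counts.values())
    k = len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}
```

With counts like {'normal': 900, 'twf': 100}, the rare failure class gets weight 5.0 versus roughly 0.56 for the majority class, which is how weighting counteracts the imbalance without resampling the data.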

36 pages, 5031 KB  
Article
Spatiotemporal Modelling of CAR-T Cell Therapy in Solid Tumours: Mechanisms of Antigen Escape and Immunosuppression
by Maxim Polyakov
Computation 2026, 14(4), 87; https://doi.org/10.3390/computation14040087 - 7 Apr 2026
Viewed by 377
Abstract
CAR-T cell therapy has shown substantial efficacy in haematological malignancies, but its application to solid tumours remains limited by poor effector-cell infiltration, functional exhaustion, antigenic heterogeneity, and an immunosuppressive microenvironment. In this study, we develop a new spatiotemporal mathematical model of CAR-T therapy for solid tumours that integrates these resistance mechanisms within a single reaction–diffusion framework. The model is formulated as a system of partial differential equations describing functional and exhausted CAR-T cells, antigen-positive and antigen-low tumour subpopulations, and chemokine, immunosuppressive, and hypoxic fields. Steady-state analysis and finite-difference simulations showed that therapeutic outcome is governed by the interplay between CAR-T cell infiltration, exhaustion, and antigen escape. The model reproduces partial tumour regression followed by residual tumour persistence, therapy-driven enrichment of antigen-low cells, and reduced efficacy under stronger immunosuppressive and hypoxic conditions. In the combination therapy scenario considered here, repeated simulated CAR-T cell administration together with attenuation of the suppressive microenvironment improves tumour control. The proposed model provides a mechanistic basis for analysing resistance and for future optimisation studies of CAR-T therapy in solid tumours. Full article
(This article belongs to the Section Computational Biology)
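The reaction–diffusion backbone of such models can be illustrated with a single explicit finite-difference step for one field, e.g. a chemokine that diffuses and decays. This 1-D, single-field sketch is a deliberate simplification of the paper's coupled multi-field PDE system:

```python
def diffusion_step(u, d, dt, dx, decay=0.0):
    """One explicit finite-difference step of du/dt = d * d2u/dx2 - decay*u
    on a 1-D grid with zero-flux (reflecting) boundaries.

    Stability of the explicit scheme requires d * dt / dx**2 <= 0.5.
    With decay = 0, total mass sum(u) is conserved.
    """
    n = len(u)
    out = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        lap = (left - 2 * u[i] + right) / dx ** 2
        out[i] = u[i] + dt * (d * lap - decay * u[i])
    return out
```

The full model would iterate steps like this for each field (CAR-T subpopulations, tumour subpopulations, chemokine, suppressive, and hypoxic fields), with the reaction terms coupling them at every grid point.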

17 pages, 359 KB  
Article
Python-Assisted Development of High-Performance Fortran Codes: A Hybrid Methodology Integrating Symbolic Mathematics and Large Language Models
by Daniil Tolmachev and Roman Chertovskih
Computation 2026, 14(4), 86; https://doi.org/10.3390/computation14040086 - 6 Apr 2026
Viewed by 662
Abstract
The development of high-performance Fortran code for large-scale scientific simulations is inherently challenging: direct Fortran implementation demands substantial expertise in numerical methods, optimization, and system architecture, and manual derivation of numerical schemes is error-prone and time-consuming. This paper advocates a four-stage development methodology involving Python prototyping and symbolic derivation. Systematic validation at each step of the incremental transition from symbolic specification to Fortran code produces numerically correct, maintainable code faster than direct manual implementation, without sacrificing performance or code quality. Large Language Models effectively accelerate Python prototyping and boilerplate generation but require rigorous verification of the generated Fortran code. We suggest practical implementation guidelines, including validation strategies. Python prototyping and symbolic code generation provide effective instruments for developing efficient, production-ready Fortran implementations. Full article
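The stage-by-stage validation advocated above reduces, at each step, to comparing the migrated code's output against the Python prototype within floating-point tolerance. A minimal sketch of such a check (the tolerance values are illustrative):

```python
import math


def outputs_match(reference, candidate, rel_tol=1e-12, abs_tol=1e-12):
    """Element-wise comparison used at each migration stage: the Python
    prototype's output is the reference, and the candidate (e.g. the
    generated Fortran code's output, read back as floats) must agree
    within the given floating-point tolerances."""
    if len(reference) != len(candidate):
        return False
    return all(math.isclose(r, c, rel_tol=rel_tol, abs_tol=abs_tol)
               for r, c in zip(reference, candidate))
```

Running a check like this after every incremental change localises any numerical discrepancy to the single transformation that introduced it, which is what makes the staged workflow safer than a one-shot rewrite.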

20 pages, 12202 KB  
Article
Computational Assessment of Shear Stress-Driven Flow Alterations at the Renal Artery Origin Under Varying Pressure Conditions
by Gowrava Shenoy Beloor, Raghuvir Pai Ballambat, Kevin Amith Mathias, Mohammad Zuber, Manjunath Mallashetty Shivamallaiah, Ravindra Prabhu Attur, Dharshan Rangaswamy, Prakashini Koteshwar, Masaaki Tamagawa and Shah Mohammed Abdul Khader
Computation 2026, 14(4), 85; https://doi.org/10.3390/computation14040085 - 3 Apr 2026
Viewed by 489
Abstract
The use of computational fluid dynamics (CFD) to study hemodynamics in arteries offers significant potential for addressing complex flow problems. Owing to advances in hardware and software performance, CFD has become an important approach for studying hemodynamics in human arteries. This approach is utilized to investigate hemodynamics and forecast risk factors for atherosclerotic lesion development and progression, including circulatory flow, and to analyze local flow fields and flow profiles resulting from geometric changes. This foundational study will aid in analyzing blood flow behavior through the abdominal aorta and the origins and courses of the renal arteries, as well as investigating the causes of disorders such as atherosclerosis and hypertension. The current study investigates three idealized abdominal aorta–renal artery junction models under varying blood pressure settings. Materialise software V19 was used to extract the geometry data and create idealized 3D abdominal aorta–renal branching models. Unsteady flow simulations were performed in ANSYS Fluent, using rigid walls with Newtonian and Carreau–Yasuda viscosity models. The oscillatory shear index (OSI) and time-averaged wall shear stress (TAWSS) were computed to enhance understanding of atherosclerotic plaque formation and progression. The effect of geometric change at the bifurcation was also explored, and this location was found to produce considerable vortex-forming zones. Marked velocity reduction and backflow development were observed, reducing shear stress. The findings indicate that areas within the bifurcation region with low TAWSS (<0.4 Pa) and high OSI (>0.15) are more susceptible to atherosclerosis development. Full article
(This article belongs to the Section Computational Engineering)
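The two reported metrics have standard definitions over one cardiac cycle: TAWSS is the time average of |WSS|, and OSI measures directional reversal, equal to 0 for unidirectional shear and approaching 0.5 for purely oscillatory shear. A discrete sketch over a uniformly sampled WSS history:

```python
def tawss_osi(wss_series, dt):
    """Time-averaged wall shear stress and oscillatory shear index from a
    sampled WSS history over one cycle of duration len(wss_series)*dt:

        TAWSS = (1/T) * integral |wss| dt
        OSI   = 0.5 * (1 - |integral wss dt| / integral |wss| dt)
    """
    t = len(wss_series) * dt
    int_signed = sum(wss_series) * dt
    int_abs = sum(abs(w) for w in wss_series) * dt
    tawss = int_abs / t
    osi = 0.5 * (1.0 - abs(int_signed) / int_abs)
    return tawss, osi
```

In a full CFD post-processing pipeline this calculation is repeated at every wall node (using the WSS vector rather than a scalar), producing the TAWSS and OSI maps used to flag the <0.4 Pa / >0.15 regions.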

15 pages, 789 KB  
Article
EdgeRescue: Lightweight AI-Based Self-Healing for Energy-Constrained IoT Meshes
by Haifa A. Alanazi, Abdulaziz G. Alanazi and Nasser S. Albalawi
Computation 2026, 14(4), 84; https://doi.org/10.3390/computation14040084 - 3 Apr 2026
Viewed by 507
Abstract
As the scale and complexity of Internet of Things (IoT) deployments increase, maintaining resilience in resource-constrained mesh networks becomes a significant challenge. Frequent node failures due to battery depletion, environmental interference, or hardware degradation can disrupt data flows and lead to operational downtime. To address this, we propose EdgeRescue, a novel lightweight AI-driven framework for self-healing in energy-constrained IoT mesh environments. EdgeRescue enables each node to perform local anomaly detection using compact 1D Convolutional Neural Networks (1D-CNNs) and initiates distributed, energy-aware routing reconfiguration when faults are detected. Unlike cloud-dependent methods, EdgeRescue operates entirely at the edge, requiring minimal computation, memory, and communication overhead. Extensive simulations on a 100-node testbed demonstrate that EdgeRescue improves packet delivery by 13.2%, reduces recovery latency by 57%, and lowers average node energy consumption by 18.8% compared to state-of-the-art baselines. These results establish EdgeRescue as a scalable and practical solution for achieving real-time resilience in next-generation IoT mesh networks. Full article
(This article belongs to the Section Computational Engineering)
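The core operation of the 1D-CNN detector, sliding a learned kernel along a node's sensor time series, can be written in a few lines. This is the generic "valid" convolution primitive (really cross-correlation, as in CNN frameworks), not the EdgeRescue model itself:

```python
def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation, CNN convention):
    each output element is the dot product of the kernel with the
    corresponding window of the signal, so the output has
    len(signal) - len(kernel) + 1 elements."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]
```

A compact anomaly detector stacks a handful of such filter banks with nonlinearities and thresholds the final score; the cost per inference is a few multiply-accumulates per sample, which is what makes on-node execution plausible for constrained hardware.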

20 pages, 8593 KB  
Article
Advanced Computational Investigation of Brush Seal Thermo-Fluid–Mechanical Performance Through Novel Porous Media Coefficient Derivation
by Altyib Abdallah Mahmoud Ahmed, Juan Wang, Meihong Liu, Aboubaker I. B. Idriss and Abdelgalal O. I. Abaker
Computation 2026, 14(4), 83; https://doi.org/10.3390/computation14040083 - 1 Apr 2026
Viewed by 585
Abstract
Brush seals represent the most effective sealing technology, offering 5 to 10 times lower leakage flow rates, resulting in an 80% to 90% increase in sealing efficiency. However, key challenges remain in optimizing brush seal performance, including managing high frictional heat, maintaining consistent leakage flow, and preventing mechanical deformation failures within the bristle pack. This study uses a fluid–mechanical coupling method to establish and refine numerical investigation procedures. Using porous media and local thermal non-equilibrium (LTNE) approaches, the effects of the pressure ratio on seal performance are analyzed. The results reveal that the difference between the maximum directional and total deformations is 0.9108 mm, with the total deformation being approximately 79,666% larger than the directional deformation. These findings highlight that the bristle pack must be designed with primary consideration of total deformation to enhance performance and efficiency. The proposed methodologies enable more robust comparative evaluations of alternative brush seal configurations, including two-stage bristle packs and inline structural models. This facilitates the identification of optimized structures that minimize leakage, enhance energy dissipation, and improve the overall seal performance, thereby advancing the porous media model from a general approximation to a design-optimized tool. Full article
(This article belongs to the Section Computational Engineering)
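Porous-media treatments of bristle packs conventionally parameterise the pressure gradient with a Darcy–Forchheimer law, and coefficients like those the paper derives would slot into its viscous (μ/K) and inertial (C2) terms. A sketch of the generic form; the structure is standard, but treating it as the paper's exact formulation is an assumption:

```python
def pressure_gradient(velocity, viscosity, permeability, inertial_coeff, density):
    """Darcy-Forchheimer pressure gradient through a porous region:

        dp/dx = (mu / K) * v  +  C2 * (rho / 2) * v**2

    The first term is the viscous (Darcy) loss, the second the inertial
    (Forchheimer) loss that dominates at higher through-flow velocity.
    """
    viscous = (viscosity / permeability) * velocity
    inertial = inertial_coeff * 0.5 * density * velocity ** 2
    return viscous + inertial
```

Fitting K and C2 to measured or simulated leakage data is what turns the porous-media model from a general approximation into the design-oriented tool the abstract describes.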

19 pages, 1462 KB  
Article
Heterogeneous Layout-Aware Cross-Modal Knowledge Point Classification for Exam Questions
by Zhushun Su, Bi Zeng, Pengfei Wei, Keyun Wang and Zhentao Lin
Computation 2026, 14(4), 82; https://doi.org/10.3390/computation14040082 - 1 Apr 2026
Viewed by 289
Abstract
With the continuous emergence of exam question types, accurate classification of knowledge points is crucial for intelligent exam analysis. Existing methods focus on text or text–image fusion but largely ignore spatial layout. To address this limitation, we propose a heterogeneous layout-aware cross-modal framework for knowledge point classification. The architecture begins with an encoding module where independent text and layout encoders extract semantic content and spatial configurations, respectively. We then design a layout-aware enhancing module consisting of two parallel cross-modal blocks, namely a Layout-Aware Text-Enhancing block and a Context-Aware Layout-Enhancing block. This module supports the bidirectional fusion of text and layout features and generates a comprehensive representation that integrates both semantic and spatial information. Furthermore, a dynamic router with top-k expert selection is introduced to dynamically adapt to question-specific knowledge distributions and focus on core knowledge points for precise classification. Experimental results demonstrate that our method effectively integrates text and layout information, significantly enhancing performance on the proposed QType-EDU dataset. The approach achieves 91.56% accuracy for coarse-grained classification and 80.58% for fine-grained classification, with an overall F1-score of 91.39%, surpassing all baseline models. Full article
(This article belongs to the Section Computational Engineering)
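The dynamic router's top-k expert selection can be sketched as softmax gating restricted to the k largest logits, with the surviving weights renormalised and all other experts zeroed out. A generic sketch of that mechanism, not the authors' implementation:

```python
import math


def topk_gate(logits, k):
    """Top-k expert routing: keep the k largest gating logits, give them
    renormalised softmax weights, and assign zero weight to the rest,
    so only k experts run for a given input."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = {i: math.exp(logits[i]) for i in top}
    z = sum(exps.values())
    return [exps[i] / z if i in exps else 0.0 for i in range(len(logits))]
```

Sparse routing of this kind lets different experts specialise in different knowledge-point distributions while keeping per-question compute proportional to k rather than to the total number of experts.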

24 pages, 1074 KB  
Article
XGBoost vs. LightGBM: An XAI Approach to National Vehicle Fleet Analysis
by Wilson Gustavo Chango-Sailema, Homero Velasteguí-Izurieta, William Paul Pazuña-Naranjo, Joffre Stalin Monar, Rebeca Mariana Moposita-Lasso, Santiago Israel Logroño-Naranjo, Carlos Roberto López-Paredes, Jacqueline Elizabeth Ponce, Geovanny Euclides Silva-Peñafiel, Angel Patricio Flores-Orozco, Cindy Johanna Choez-Calderón and Marcelo Vladimir Garcia
Computation 2026, 14(4), 81; https://doi.org/10.3390/computation14040081 - 1 Apr 2026
Viewed by 855
Abstract
This study analyzes the factors associated with vehicle technology classification in Ecuador, using fuel category (electric, hybrid, and internal combustion) as the dependent variable under an Explainable Artificial Intelligence (XAI) approach. Following the CRISP-DM methodology, we compared the performance of XGBoost and LightGBM [...] Read more.
This study analyzes the factors associated with vehicle technology classification in Ecuador, using fuel category (electric, hybrid, and internal combustion) as the dependent variable under an Explainable Artificial Intelligence (XAI) approach. Following the CRISP-DM methodology, we compared the performance of XGBoost and LightGBM algorithms using a dataset of 482,754 administrative records from the Internal Revenue Service (SRI). Both models achieved outstanding predictive performance with a Macro F1-score of 0.987, demonstrating robustness despite the severe class imbalance (electric vehicles represent only 1.3% of the total). The integration of SHAP (SHapley Additive exPlanations) values identified tax appraisal and engine displacement as the most influential features in the model predictions of electric vehicle adoption. In contrast, territorial factors exert a stronger influence on the acquisition of hybrid vehicles. Finally, the findings demonstrate that boosting models, combined with XAI techniques, provide transparent analytical tools that can support evidence-based transport decarbonization strategies in emerging economies. Full article
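The SHAP values used in the abstract above are Shapley attributions, which satisfy the efficiency property: per-feature contributions sum to the gap between the model's prediction and its baseline. The study uses TreeSHAP on gradient-boosting models; as a minimal self-contained illustration (with a hypothetical two-feature scoring function, not the authors' models), exact Shapley values can be computed by enumerating feature orderings:

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for a model f at instance x.

    For each permutation of features, each feature is credited with the
    change in f(.) when it is switched from its baseline value to x's value;
    averaging over all permutations gives the Shapley value.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        for i in order:
            before = f(current)
            current[i] = x[i]
            phi[i] += f(current) - before
    return [p / len(perms) for p in phi]
```

This brute-force version is exponential in the number of features; TreeSHAP obtains the same attributions for tree ensembles in polynomial time, which is what makes SHAP practical on 482,754-record datasets.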
(This article belongs to the Section Computational Engineering)
15 pages, 262 KB  
Article
Evaluating Psychometric Clustering Methods: A Machine-Learning Comparison of EFA and NCD
by Jingyang Li and Zhenqiu (Laura) Lu
Computation 2026, 14(4), 80; https://doi.org/10.3390/computation14040080 - 31 Mar 2026
Viewed by 471
Abstract
Classification methods such as exploratory factor analysis (EFA) and network community detection (NCD) are widely used to identify latent item groupings in multidimensional psychological assessments. However, direct comparisons between these approaches remain limited. In addition, evaluations of clustering methods often rely on overall classification metrics, which may obscure systematic differences in how well distinct types of items are recovered. Item characteristics—such as core–peripheral positions and loading patterns—may influence classification outcomes, yet few studies have examined how these item types interact with clustering methods. The present study addresses these gaps by comparing EFA and NCD within a unified machine-learning evaluation framework that varies sample size, latent structure, preprocessing strategy, and machine-learning classifier choice (Random Forests vs. Support Vector Machines). Results show that the performance of both EFA and NCD is influenced by sample size, item type, latent structure, and classifier choice. Moreover, the downstream classifier moderates how sensitive each method is to differences among item types. These findings highlight the importance of considering item-type heterogeneity when evaluating clustering methods and demonstrate the value of machine-learning-based frameworks for advancing psychometric classification approaches. Full article
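Agreement between a recovered item grouping (from EFA or NCD) and the true latent structure is commonly summarized with a chance-corrected index such as the Adjusted Rand Index. A self-contained sketch follows; this is a standard metric, not the study's actual evaluation pipeline:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Chance-corrected agreement between two partitions of the same items.

    Returns 1.0 for identical partitions (up to relabeling) and about 0.0
    for agreement expected by chance; can be negative for systematic
    disagreement.
    """
    n = len(labels_a)
    # Contingency counts: co-occurrence of (cluster in A, cluster in B).
    contingency = Counter(zip(labels_a, labels_b))
    a = Counter(labels_a)  # row sums
    b = Counter(labels_b)  # column sums
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

Because it operates on item pairs, the index is insensitive to cluster labels, which is what allows EFA factor assignments and NCD communities to be compared directly.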
20 pages, 7575 KB  
Article
Heat Transfer Mixing in Closed Domain with Circular and Elliptical Cross-Sections
by Myriam E. Bruno, Alessandro Nobile and Paolo Oresta
Computation 2026, 14(4), 79; https://doi.org/10.3390/computation14040079 - 31 Mar 2026
Viewed by 421
Abstract
Rayleigh–Bénard convection (RBC) provides a benchmark for studying buoyancy-driven instabilities and heat transport in confined fluids. Heat transfer scaling in cylindrical geometries is well established, whereas the role of the anisotropy induced by the domain geometry, such as elliptical shapes, has not been fully explored. This study presents direct numerical simulations of RBC in two domains of equal height, H = 0.0124 m, and different cross-sections: a circular cylinder with radius R = 3.11×10⁻³ m and an elliptical cylinder with semi-axes Rmax = 3.11×10⁻³ m and Rmin = 1.55×10⁻³ m, respectively. The simulations, performed at Rayleigh number Ra = 2×10⁶ and Prandtl number Pr = 1.68 (for water) under the Boussinesq approximation, reveal that (i) the average Nusselt number is comparable in both cases (Nu ≈ 38.23 for the circular case and Nu ≈ 39.22 for the elliptical one) and (ii) the different domain geometries influence the thermal transport mechanism and flow organization. Specifically, in the cylindrical cell, heat transfer is regulated by a large-scale circulation roll, whereas in the elliptical cell the domain is populated by thermal plumes driving the convective dynamics. The latter phenomenon is evidenced by larger Nusselt number fluctuations at the lower and upper plates, with a standard deviation increasing from σ ≈ 2.21 in the circular cylinder to σ ≈ 4.57 in the elliptical domain. These results highlight that the geometric anisotropy modifies the coupling between the boundary layers and the core flow dynamics, leading to enhanced intermittency without affecting the magnitude of the heat flux. The elliptical domain is therefore suitable for applications requiring enhanced mixing. Full article
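For reference, the dimensionless groups quoted in the abstract above follow the standard RBC definitions (with g the gravitational acceleration, β the thermal expansion coefficient, ΔT the plate temperature difference, ν the kinematic viscosity, κ the thermal diffusivity, q the wall heat flux, and k the thermal conductivity):

```latex
Ra = \frac{g\,\beta\,\Delta T\,H^{3}}{\nu\,\kappa},
\qquad
Pr = \frac{\nu}{\kappa},
\qquad
Nu = \frac{q\,H}{k\,\Delta T}.
```

Nu ≈ 38–39 thus means the convective heat flux is roughly 38–39 times what pure conduction would carry across the same cell.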
(This article belongs to the Section Computational Engineering)
27 pages, 2137 KB  
Article
Multiregional Forecasting of Traffic Accidents Using Prophet Models with Statistical Residual Validation
by Jaime Sayago-Heredia, Tatiana Elizabeth Landivar, Roberto Vásconez and Wilson Chango-Sailema
Computation 2026, 14(4), 78; https://doi.org/10.3390/computation14040078 - 26 Mar 2026
Viewed by 474
Abstract
This study develops a multiregional forecasting framework for road traffic accidents in Ecuador, addressing a critical limitation in existing predictive approaches that rely predominantly on point error metrics without validating the statistical assumptions underlying forecast uncertainty. Although the analysis is conducted at the provincial level, the spatial dimension is used primarily for cross-regional comparison and risk classification rather than for explicit spatial interaction modeling. Using a dataset of 27,648 monthly observations covering all 24 provinces from 2014 to 2025, the study applies the Prophet model within a Design Science Research paradigm and a CRISP-DM implementation cycle. Separate provincial models are estimated with a 24-month forecasting horizon, and methodological rigor is ensured through systematic residual diagnostics using the Shapiro–Wilk test for normality and the Ljung–Box test for temporal independence. Empirical results indicate that the Prophet-based artifact outperforms a naïve seasonal benchmark in 70.8% of the provinces, demonstrating excellent predictive accuracy in structurally stable regions such as Tungurahua (MAPE = 10.9%). At the same time, the framework enables the identification of critical emerging risks in provinces such as Santo Domingo and Cotopaxi, where projected increases exceed 49% despite acceptable point forecasts. The findings confirm that point accuracy alone does not guarantee the validity of confidence intervals and that residual validation is essential for trustworthy uncertainty quantification. Overall, the proposed approach provides a robust foundation for a predictive surveillance system capable of supporting differentiated, evidence-based road safety policies in territorially heterogeneous contexts. Full article
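The Ljung–Box check described above tests whether forecast residuals are serially independent; if they are not, the model has left temporal structure unexplained and its confidence intervals cannot be trusted. A pure-Python sketch of the Q statistic follows (the critical value would come from a χ² table with max_lag degrees of freedom, which is omitted here):

```python
def ljung_box_q(residuals, max_lag):
    """Ljung-Box portmanteau statistic Q = n(n+2) * sum_k rho_k^2 / (n-k).

    rho_k is the lag-k sample autocorrelation of the residuals. A large Q
    relative to the chi-square critical value indicates autocorrelated
    residuals, i.e. structure the forecasting model failed to capture.
    """
    n = len(residuals)
    mean = sum(residuals) / n
    centered = [r - mean for r in residuals]
    denom = sum(c * c for c in centered)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = sum(centered[t] * centered[t + k] for t in range(n - k)) / denom
        q += rho_k * rho_k / (n - k)
    return n * (n + 2) * q
```

For instance, a strictly alternating residual series has lag-1 autocorrelation near -1, so Q far exceeds the 5% χ²(1) critical value of 3.84 and independence is rejected.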
(This article belongs to the Section Computational Engineering)
17 pages, 7796 KB  
Article
Patient-Specific CFD Analysis of Carotid Artery Haemodynamics: Impact of Anatomical Variations on Atherosclerotic Risk
by Abhilash Hebbandi Ningappa, S. M. Abdul Khader, Harishkumar Kamat, Masaaki Tamagawa, Ganesh Kamath, Raghuvir Pai B., Prakashini Koteswar, Irfan Anjum Badruddin, Mohammad Zuber, Kevin Amith Mathias and Gowrava Shenoy Baloor
Computation 2026, 14(4), 77; https://doi.org/10.3390/computation14040077 - 26 Mar 2026
Viewed by 723
Abstract
Understanding the hemodynamics of the carotid artery is essential for assessing atherosclerotic disease progression and identifying regions vulnerable to plaque formation. Background: Disturbed flow patterns and abnormal shear stresses, particularly near the carotid bifurcation, are known to influence endothelial dysfunction; therefore, this study aims to quantify the impact of patient-specific carotid artery geometry on key hemodynamic parameters associated with atherosclerotic risk. Methods: Four patient-specific carotid artery geometries were reconstructed from medical imaging data, processed using MIMICS, and analyzed using computational fluid dynamics in ANSYS Fluent, with blood modeled as an incompressible non-Newtonian fluid using the Carreau–Yasuda viscosity model under pulsatile flow conditions; velocity streamlines, pressure distribution, time-averaged wall shear stress (TAWSS), and oscillatory shear index (OSI) were evaluated at early systole, peak systole, and peak diastole. Results: The simulations revealed complex flow behaviour, including flow reversal, pressure build-up, and low-shear regions concentrated near the carotid bulb and bifurcation, with TAWSS consistently identifying low-shear zones (<1 Pa) across all geometries and OSI exhibiting pronounced directional oscillations in models with increased curvature and wider bifurcation angles. Conclusions: These findings demonstrate that geometric characteristics such as bifurcation angle, vessel tortuosity, and asymmetry play a critical role in shaping local haemodynamics, underscoring the utility of patient-specific CFD analysis as a diagnostic and predictive tool for atherosclerotic risk assessment and supporting more informed, personalized clinical decision-making. Full article
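The oscillatory shear index used above quantifies how much the wall shear stress vector reverses direction over a cardiac cycle: OSI = ½(1 − |∫τ dt| / ∫|τ| dt), alongside the time-averaged magnitude TAWSS = (1/T)∫|τ| dt. A minimal sketch on uniformly sampled 2-D shear vectors follows (illustrative data, not the study's ANSYS Fluent post-processing):

```python
import math

def osi(shear_samples):
    """Oscillatory Shear Index from wall shear stress vectors over one cycle.

    With uniform time sampling the integrals reduce to sums:
    OSI = 0.5 * (1 - |sum of vectors| / sum of |vectors|).
    0 = unidirectional shear; 0.5 = fully oscillatory (mean vector cancels).
    """
    sx = sum(v[0] for v in shear_samples)
    sy = sum(v[1] for v in shear_samples)
    mean_mag = math.hypot(sx, sy)
    total_mag = sum(math.hypot(x, y) for x, y in shear_samples)
    return 0.5 * (1.0 - mean_mag / total_mag)

def tawss(shear_samples):
    """Time-averaged wall shear stress magnitude (uniform sampling)."""
    return sum(math.hypot(x, y) for x, y in shear_samples) / len(shear_samples)
```

A site can therefore show moderate TAWSS yet high OSI, which is why the two indices together, rather than either alone, flag the low-shear, direction-reversing zones near the carotid bulb.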
(This article belongs to the Section Computational Biology)
23 pages, 782 KB  
Article
Computational Economics of Circular Construction: Machine Learning and Digital Twins for Optimizing Demolition Waste Recovery and Business Value
by Marta Torres-Polo and Eduardo Guzmán Ortíz
Computation 2026, 14(4), 76; https://doi.org/10.3390/computation14040076 - 25 Mar 2026
Viewed by 563
Abstract
Construction and demolition waste (CDW) represents a critical environmental challenge in the building sector, with global generation exceeding 3.57 billion tonnes annually. The circular economy (CE) framework offers a transformative pathway through selective deconstruction and material recovery, yet implementation faces significant barriers including information asymmetry, supply chain fragmentation, and regulatory uncertainty. This study conducts a systematic literature review using the Context–Mechanism–Outcome (CMO) framework to analyze how computational methods, specifically Digital Twins (DT), Building Information Modeling (BIM), Internet of Things (IoT), blockchain, artificial intelligence, and robotics, act as enablers for resilience in CDW management. Following PRISMA 2020 guidelines and realist synthesis principles, we analyzed 42 high-quality empirical studies from Web of Science and Scopus (2015–2025). Our analysis identifies seven primary mechanisms: traceability (M1), simulation (M2), classification (M3), tracking (M4), collaboration (M5), analytics (M6), and robotics (M7). These mechanisms interact with four critical contexts (information asymmetry, supply chain fragmentation, economic uncertainty, operational risks) to generate outcomes at two levels: resilience capabilities (visibility, monitoring, collaboration, flexibility, anticipation) and performance indicators (recovery rates, cost reduction, CO2 emissions mitigation, occupational safety). Key findings from the CMO analysis reveal that blockchain-enabled traceability increases material recovery rates by 15–25%, DT simulation reduces deconstruction costs by 20–30%, and computer vision automation improves sorting accuracy to 85–95%. The study contributes middle-range theories explaining how digital technologies enable circular transitions under specific contextual conditions, offering actionable strategic implications for researchers, project managers, technology developers, and policymakers committed to advancing computational economics in sustainable construction. Full article
23 pages, 1734 KB  
Article
Reinforcement-Learning-Based Optimization of Convective Fluxes for High-CFL Finite-Volume Schemes
by Andrey Rozhkov, Andrey Kozelkov, Vadim Kurulin and Maxim Shishlenin
Computation 2026, 14(4), 75; https://doi.org/10.3390/computation14040075 - 24 Mar 2026
Viewed by 344
Abstract
In this article, we explore the possibility of using reinforcement learning to create convective flow approximation schemes that maintain accuracy and stability at high Courant-Friedrichs-Lewy (CFL) numbers in the finite-volume discretization of advection equations. Unlike most existing data-driven discretization methods, which primarily concentrate on spatial grid refinement, this work emphasizes increasing the allowable time step without compromising solution accuracy. This approach reduces the total number of time integration steps, thereby enabling faster computation. A neural network is used as a surrogate model for reconstructing the convective flow, which takes as input local information about the flow, scalars, and geometry and predicts scalar values at node points. Reinforcement learning is used for training and is formulated as a policy optimization problem, where the long-term reward is defined as the difference between the numerical and reference solutions over the entire simulation period. Both the genetic algorithm and the Deep Deterministic Policy Gradient (DDPG) method are investigated. The effectiveness of the approach is evaluated using a one-dimensional nonlinear advection problem with a constant velocity field. Despite the simplicity of the test case, the results demonstrate that the trained convective flux approximation scheme achieves accuracy comparable to or better than the classical second-order linear upwind (LUD) scheme, while operating at CFL numbers 2–50 times higher than the optimal CFL for LUD, thereby reducing the simulation time by the same factor. This allows for a wider range of stability and accuracy in the finite-volume method and the use of larger time steps without compromising the quality of the solution. The study is intentionally limited to a single spatial dimension and serves as a basic analysis of the method’s applicability. 
The results demonstrate that reinforcement learning can discover convective flux approximation schemes that are more efficient at high CFL numbers than conventional explicit second-order schemes, establishing a framework that our follow-up work extends with improved training methods and three-dimensional complex transport problems. The proposed method improves the spatial discretization of convective fluxes and is independent of the choice of time integration scheme. Therefore, the neural reconstruction can in principle be used in both explicit and implicit finite-volume solvers. Full article
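The stability constraint this abstract relaxes is the CFL condition, CFL = a·Δt/Δx ≤ 1 for an explicit first-order upwind scheme on linear advection. A minimal sketch of that classical baseline follows (illustrative only; the paper's learned scheme replaces the flux reconstruction feeding such an update, not the update itself):

```python
def upwind_step(u_vals, velocity, dt, dx):
    """One explicit first-order upwind step for du/dt + a du/dx = 0, a > 0.

    Periodic boundaries; the scheme is stable only when
    cfl = a * dt / dx <= 1 (a convex combination of neighboring cells).
    """
    cfl = velocity * dt / dx
    n = len(u_vals)
    return [u_vals[i] - cfl * (u_vals[i] - u_vals[i - 1]) for i in range(n)]

def advect(u_vals, velocity, dt, dx, steps):
    """March the solution forward by repeated upwind steps."""
    for _ in range(steps):
        u_vals = upwind_step(u_vals, velocity, dt, dx)
    return u_vals
```

At CFL = 1 the update shifts the profile exactly one cell per step; above 1 it amplifies errors and blows up. Raising the usable CFL by a factor of 2–50, as reported above, proportionally cuts the number of time steps needed to cover the same simulated interval.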
(This article belongs to the Section Computational Engineering)