Journal Description
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.8 days after submission; the time from acceptance to publication is 5.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Mathematics and Its Applications: AppliedMath, Axioms, Computation, Fractal and Fractional, Geometry, International Journal of Topology, Logics, Mathematics and Symmetry.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
Extending Q-Learning for Economic Modelling: A Design Framework with Equilibrium Benchmarks
Computation 2026, 14(2), 50; https://doi.org/10.3390/computation14020050 (registering DOI) - 14 Feb 2026
Abstract
This paper proposes a methodological architecture to integrate Q-learning into economic modelling systematically. It addresses a common gap: the lack of a shared framework linking economic foundations to Reinforcement Learning components. Rather than introducing a new algorithm, it specifies and reports how preferences, frictions, information structures, and time horizons map to the reward function, discount factor, and learning environment design. Equilibrium outcomes serve as benchmarks for comparing learned policies, not as imposed axioms. This approach interprets learning dynamics through standard economic categories and enables comparability across studies. The architecture organizes models along explicit dimensions: behavioural preferences, institutional frictions, economic environment class, information structure, learning and exploration mechanisms, and evaluation metrics. A simulation illustrates how variations in frictions, risk attitudes, and intertemporal preferences affect learned policies, their stability, and their relationship to static benchmarks. The paper aims to promote the cumulative use of Reinforcement Learning in applied economics by providing a general specification that improves interpretability, comparability, and reproducibility, turning deviations from theoretical equilibria into measurable diagnostics that refine economic fundamentals.
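As a toy illustration of the mapping the abstract describes (preferences into the reward function, intertemporal preference into the discount factor), here is a minimal tabular Q-learning sketch on a hypothetical two-state consumption/investment environment. The environment, rewards, and parameters are invented for illustration and are not taken from the paper.

```python
import random

def train_q(gamma, steps=20000, alpha=0.1, epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy consumption/investment MDP.

    State 0: consume (action 0: reward 1, stay) or invest (action 1:
    reward 0, move to state 1). State 1: the investment pays off
    (reward 3) and the agent returns to state 0. The discount factor
    gamma plays the role of intertemporal preference.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]
    s = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = 0 if q[s][0] >= q[s][1] else 1
        if s == 0:
            r, s_next = (1.0, 0) if a == 0 else (0.0, 1)
        else:
            r, s_next = 3.0, 0  # both actions cash in and reset
        # standard Q-learning update
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
        s = s_next
    return q
```

With `gamma=0.9` the learned policy at state 0 prefers investing (`q[0][1] > q[0][0]`); with `gamma=0.2` it prefers consuming, which matches the static equilibrium benchmark one can compute analytically for this toy chain.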
Full article
(This article belongs to the Special Issue Modern Applications for Computational Methods in Applied Economics and Business Engineering)
Open Access Article
ECG Heartbeat Classification Using Echo State Networks with Noisy Reservoirs and Variable Activation Function
by Ioannis P. Antoniades, Anastasios N. Tsiftsis, Christos K. Volos, Andreas D. Tsigopoulos, Konstantia G. Kyritsi and Hector E. Nistazakis
Computation 2026, 14(2), 49; https://doi.org/10.3390/computation14020049 - 13 Feb 2026
Abstract
In this work, we use an Echo State Network (ESN) model, which is essentially a recurrent neural network (RNN) operating according to the reservoir computing (RC) paradigm, to classify individual ECG heartbeats using the MIT-BIH arrhythmia database. The aim is to evaluate the performance of ESN in a challenging task that involves classification of complex, unprocessed one-dimensional signals, distributed into five classes. Moreover, we investigate the performance of the ESN in the presence of (i) noise in the dynamics of the internal variables of the hidden (reservoir) layer and (ii) random variability in the activation functions of the hidden layer cells (neurons). The overall accuracy of the best-performing ESN, without noise and variability, exceeded 96% with per-class accuracies ranging from 90.2% to 99.1%, which is higher than in previous studies using CNNs and more complex machine learning approaches. The top-performing ESN required only 40 min of training on a CPU (Intel i5-1235U@1.3 GHz) HP laptop. Notably, an alternative ESN configuration that matched the accuracy of a prior CNN-based study (93.4%) required only 6 min of training, whereas a CNN would typically require an estimated training time of 2–3 days. Surprisingly, ESN performance proved to be very robust when Gaussian noise was added to the dynamics of the reservoir hidden variables, even for high noise amplitudes. Moreover, the success rates remained essentially the same when random variability was imposed in the activation functions of the hidden layer cells. The stability of ESN performance under noisy conditions and random variability in the hidden layer (reservoir) cells demonstrates the potential of analog hardware implementations of ESNs to be robust in time-series classification tasks.
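The reservoir dynamics described above can be sketched in a few lines. This is a generic ESN state update (not the authors' implementation), with reservoir weights scaled so the update is contractive; that contraction is what makes two differently initialized reservoirs forget their initial conditions under the same input, the fading-memory behavior underlying the robustness observed in the paper.

```python
import math
import random

def make_reservoir(n=20, scale=0.04, seed=1):
    """Random reservoir; max absolute row sum is at most n * scale = 0.8,
    below 1, so the tanh update is a contraction (echo state property)."""
    rng = random.Random(seed)
    w = [[rng.uniform(-scale, scale) for _ in range(n)] for _ in range(n)]
    w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return w, w_in

def esn_step(x, u, w, w_in, noise=0.0, rng=None):
    """One reservoir update: x' = tanh(W x + W_in u) + optional noise."""
    new = []
    for i in range(len(x)):
        pre = w_in[i] * u + sum(wij * xj for wij, xj in zip(w[i], x))
        eps = rng.uniform(-noise, noise) if rng is not None and noise else 0.0
        new.append(math.tanh(pre) + eps)
    return new
```

Driving two reservoirs from different random initial states with the same input sequence makes their states converge to within numerical tolerance after a few dozen steps.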
Full article
(This article belongs to the Special Issue Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025))
Open Access Article
Multi-Level Parallel CPU Execution Method for Accelerated Portion-Based Variant Call Format Data Processing
by Lesia Mochurad, Ivan Tsmots, Vita Mostova and Karina Kystsiv
Computation 2026, 14(2), 48; https://doi.org/10.3390/computation14020048 - 8 Feb 2026
Abstract
This paper proposes and experimentally evaluates a multi-level CPU-oriented execution method for high-throughput portion-based processing of file-backed Variant Call Format (VCF) data and automated mutation classification. The approach is based on a formally defined local processing scheme and integrates three coordinated levels of parallelism: block-based partitioning of file-backed VCF portions read sequentially into localized fragments with data-level parallel processing; task-level decomposition of feature construction into independent transformations; and execution-level specialization via JIT compilation of numerical kernels. To prevent performance degradation caused by nested parallelism, a resource-control mechanism is introduced as an execution rule that bounds effective parallelism and mitigates oversubscription, improving throughput stability on a single multi-core CPU node. Experiments on a public chromosome-17 VCF dataset for BRCA1-region pathogenicity classification demonstrate that the proposed multi-level local CPU execution (parsing/filtering, feature construction, and JIT-specialized numeric kernels) reduces runtime from 291.25 s (sequential) to 73.82 s, yielding a 3.95× speedup. When combined with resource-coordinated parallel model training, the end-to-end runtime further decreases to 51.18 s, corresponding to a 5.69× speedup, while preserving classification quality (accuracy 0.8483, precision 0.8758, recall 0.8261, F1 0.8502). A stage-wise ablation analysis quantifies the contribution of each execution level and confirms consistent scaling under resource-bounded execution.
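A miniature, hypothetical version of the portion-based scheme: split the file body into fixed-size blocks, process the blocks in parallel, and bound the worker count (the resource-control idea, in miniature). The toy record parser and QUAL filter below are illustrative only, not the paper's pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def parse_record(line):
    """Minimal VCF body parser: CHROM, POS, ID, REF, ALT, QUAL columns."""
    f = line.rstrip("\n").split("\t")
    return {"chrom": f[0], "pos": int(f[1]), "ref": f[3],
            "alt": f[4], "qual": float(f[5])}

def count_passing(portion, min_qual=30.0):
    """Data-level stage: parse and filter one block of records."""
    return sum(1 for ln in portion
               if not ln.startswith("#")
               and parse_record(ln)["qual"] >= min_qual)

def process_portions(lines, block=2, max_workers=4, min_qual=30.0):
    """Split the file body into fixed-size portions and process them in
    parallel; max_workers bounds effective parallelism, the sketch-level
    analogue of the paper's oversubscription control."""
    portions = [lines[i:i + block] for i in range(0, len(lines), block)]
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return sum(ex.map(lambda p: count_passing(p, min_qual), portions))
```

The parallel result must equal the sequential one, which is the obvious correctness check for any such partitioning.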
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
SOH- and Temperature-Aware Adaptive SOC Boundaries for Second-Life Li-Ion Batteries in Off-Grid PV–BESSs
by Hongyan Wang, Atthapol Ngaopitakkul and Suntiti Yoomak
Computation 2026, 14(2), 47; https://doi.org/10.3390/computation14020047 - 7 Feb 2026
Abstract
In this study, an adaptive state-of-charge (SOC) boundary strategy (ASBS) is proposed that dynamically adjusts the admissible upper and lower SOC limits of second-life lithium-ion batteries in off-grid photovoltaic battery energy storage systems (PV-BESSs) based on real-time state of health (SOH) and temperature feedback. The strategy is formulated using a unified electrical–thermal–aging model with an online state estimator and ensures both electrical safety and power feasibility while remaining fully compatible with standard energy management functions. Two representative simulations—a single-day operating profile and a continuous thirty-day sequence—demonstrate the effectiveness of the ASBS. In the twenty-four-hour case, the duration spent in high state-of-charge conditions is reduced by approximately 0.30–0.50 h, the abrupt end-of-charging transition is eliminated, and the temperature rise is slightly moderated, all without any loss of energy supply. Over thirty days, the difference between the ASBS and a fixed state-of-charge window remains effectively zero for almost all hours, with only a brief midday deviation of −4 to −5 percentage points and no cumulative drift. Indicators of electrical and thermal stress improve substantially, including an approximate 70% reduction in the root mean square charging current. These results confirm that the ASBS provides a practical and non-intrusive means of mitigating stress on second-life lithium-ion batteries while preserving full energy autonomy in off-grid photovoltaic systems.
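A hypothetical sketch of what an SOH- and temperature-aware SOC window might look like: the admissible window narrows as state of health degrades and as cell temperature rises above a reference. The linear narrowing rule and every coefficient here are invented for illustration; they are not the ASBS model from the paper.

```python
def adaptive_soc_limits(soh, temp_c, base=(0.10, 0.90),
                        t_ref=25.0, k_soh=0.3, k_temp=0.005):
    """Illustrative SOC-window rule. soh is state of health in [0, 1],
    temp_c the cell temperature in Celsius; returns (lower, upper)
    admissible SOC bounds, symmetrically narrowed from the base window."""
    lo, hi = base
    shrink = k_soh * (1.0 - soh) + k_temp * max(0.0, temp_c - t_ref)
    shrink = min(shrink, 0.5 * (hi - lo) - 0.05)  # keep a usable window
    return lo + shrink, hi - shrink
```

A healthy, cool cell keeps the full window; an aged or hot cell gets a tighter one, which is the qualitative behavior the abstract attributes to the ASBS.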
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Phishing Email Detection Using BERT and RoBERTa
by Mariam Ibrahim and Ruba Elhafiz
Computation 2026, 14(2), 46; https://doi.org/10.3390/computation14020046 - 7 Feb 2026
Abstract
One of the most harmful and deceptive forms of cybercrime is phishing, which targets users with malicious emails and websites. In this paper, we focus on the use of natural language processing (NLP) techniques and transformer models for phishing email detection. The Nazario Phishing Corpus is preprocessed and blended with real emails from the Enron dataset to create a robustly balanced dataset. The text underwent tokenization, lemmatization, and noise filtration, with attention to often-neglected sociolinguistic traits such as urgency, deceptive phrasing, and structural anomalies. We fine-tuned two transformer models, Bidirectional Encoder Representations from Transformers (BERT) and the Robustly Optimized BERT Pretraining Approach (RoBERTa), for binary classification. The models were evaluated on the standard metrics of accuracy, precision, recall, and F1-score. Given the context of phishing, emphasis was placed on recall to reduce the number of phishing attacks that went unnoticed. The results show that RoBERTa generalizes better and produces fewer false negatives than BERT, and is therefore a better candidate for deployment in security-critical tasks.
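The evaluation logic (standard metrics, with recall prioritized because a false negative is a missed attack) can be made concrete. The confusion counts in the usage example are hypothetical, not results from the paper.

```python
def classification_metrics(tp, fp, fn, tn):
    """Binary-classification metrics from confusion counts."""
    total = tp + fp + fn + tn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

def prefer_high_recall(scores):
    """Model selection for security-critical tasks: rank candidate
    models by recall first, breaking ties on F1."""
    return max(scores, key=lambda m: (scores[m]["recall"], scores[m]["f1"]))
```

For example, with made-up counts `{"bert": classification_metrics(85, 5, 15, 95), "roberta": classification_metrics(92, 9, 8, 91)}`, `prefer_high_recall` picks `"roberta"` because its recall (0.92) beats BERT's (0.85) despite the slightly lower precision.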
Full article
(This article belongs to the Special Issue Sentiment-Driven Modelling in Business, Economics, and Social Sciences)
Open Access Article
An Enhanced Projection-Iterative-Methods-Based Optimizer for Complex Constrained Engineering Design Problems
by Xuemei Zhu, Han Peng, Haoyu Cai, Yu Liu, Shirong Li and Wei Peng
Computation 2026, 14(2), 45; https://doi.org/10.3390/computation14020045 - 6 Feb 2026
Abstract
This paper proposes an Enhanced Projection-Iterative-Methods-based Optimizer (EPIMO) to overcome the limitations of its predecessor, the Projection-Iterative-Methods-based Optimizer (PIMO), including deterministic parameter decay, insufficient diversity maintenance, and static exploration–exploitation balance. The enhancements incorporate three core strategies: (1) an adaptive decay strategy that introduces stochastic perturbations into the step-size evolution; (2) a mirror opposition-based learning strategy to actively inject structured population diversity; and (3) an adaptive adjustment mechanism for the Lévy flight parameter to enable phase-sensitive optimization behavior. The effectiveness of EPIMO is validated through a multi-stage experimental framework. Systematic evaluations on the CEC 2017 and CEC 2022 benchmark suites, alongside four classical engineering optimization problems (Himmelblau function, step-cone pulley design, hydrostatic thrust bearing design, and three-bar truss design), demonstrate its comprehensive superiority. The Wilcoxon rank-sum test confirms statistically significant performance improvements over its predecessor (PIMO) and a range of state-of-the-art and classical algorithms. EPIMO exhibits exceptional performance in convergence accuracy, stability, robustness, and constraint-handling capability, establishing it as a highly reliable and efficient metaheuristic optimizer. This research contributes a systematic, adaptive enhancement framework for projection-based metaheuristics, which can be generalized to improve other swarm intelligence systems when facing complex, constrained, and high-dimensional engineering optimization tasks.
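Of the three strategies, opposition-based learning is the easiest to sketch. Below is the standard form (the opposite of x is lb + ub - x, evaluated against the original point); the paper's "mirror" variant may differ in detail, so treat this as a generic illustration rather than EPIMO's mechanism.

```python
def opposition(x, lb, ub):
    """Standard opposition-based learning point: x* = lb + ub - x,
    component-wise over the search bounds."""
    return [l + u - xi for xi, l, u in zip(x, lb, ub)]

def inject_diversity(pop, lb, ub, fitness):
    """For each individual, keep the better (lower-fitness) of itself
    and its opposite; this injects structured diversity while never
    degrading the population."""
    out = []
    for x in pop:
        xo = opposition(x, lb, ub)
        out.append(min(x, xo, key=fitness))
    return out
```

On a sphere function with asymmetric bounds, an individual far from the optimum can be replaced by its (better) opposite in a single pass, which is the diversity-injection effect the abstract describes.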
Full article
(This article belongs to the Section Computational Engineering)
Open Access Editorial
Nonlinear System Modelling and Control: Trends, Challenges, and Future Perspectives
by Chathura Wanigasekara
Computation 2026, 14(2), 44; https://doi.org/10.3390/computation14020044 - 3 Feb 2026
Abstract
Nonlinear systems engineering has undergone a profound transformation with the rapid development of computational tools and advanced analytical methods [...]
Full article
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
Open Access Article
Methodology for Predicting Geochemical Anomalies Using Preprocessing of Input Geological Data and Dual Application of a Multilayer Perceptron
by Daulet Akhmedov, Baurzhan Bekmukhamedov, Moldir Tanashova and Zulfiya Seitmuratova
Computation 2026, 14(2), 43; https://doi.org/10.3390/computation14020043 - 3 Feb 2026
Abstract
The increasing need for accurate prediction of geochemical anomalies requires methods capable of capturing complex spatial patterns that traditional approaches often fail to represent adequately. For N datasets of the form (Xi,Yi) representing the geographic coordinates of sampling points and Ci denoting the geochemical measurement, training multilayer perceptrons (MLPs) presents a challenge. The low informativeness of the input features and their weak correlation with the target variable result in excessively simplified predictions. Analysis of a baseline model trained only on geographic coordinates showed that, while the loss function converges rapidly, the resulting values become overly “compressed” and fail to reflect the actual concentration range. To address this, a preprocessing method based on anisotropy was developed to enhance the correlation between input and output variables. This approach constructs, for each prediction point, a structured informational model that incorporates the direction and magnitude of spatial variability through sectoral and radial partitioning of the nearest sampling data. The transformed features are then used in a dual-MLP architecture, where the first network produces sectoral estimates, and the second aggregates them into the final prediction. The results show that anisotropic feature transformation significantly improves neural network prediction capabilities in geochemical analysis.
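The sectoral partitioning idea can be sketched directly: for a prediction point, bin nearby samples by bearing and average the measured concentration per sector, yielding a direction-aware (anisotropic) feature vector. This is an illustrative reading of the approach, not the authors' exact construction, which also uses radial partitioning.

```python
import math

def sector_features(point, samples, n_sectors=4):
    """Per-sector mean concentration around a prediction point.

    samples is a list of (x, y, c) tuples: coordinates and measurement.
    A sector with no samples falls back to the global mean, so the
    feature vector always has n_sectors entries."""
    px, py = point
    sums = [0.0] * n_sectors
    counts = [0] * n_sectors
    for x, y, c in samples:
        ang = math.atan2(y - py, x - px) % (2 * math.pi)
        k = min(int(ang / (2 * math.pi / n_sectors)), n_sectors - 1)
        sums[k] += c
        counts[k] += 1
    overall = sum(sums) / max(1, sum(counts))
    return [sums[k] / counts[k] if counts[k] else overall
            for k in range(n_sectors)]
```

Unlike raw coordinates, these features change when the spatial distribution of nearby measurements changes direction, which is exactly the correlation-with-the-target boost the preprocessing aims for.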
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Information Inequalities for Five Random Variables
by Laszlo Csirmaz and Elod P. Csirmaz
Computation 2026, 14(2), 42; https://doi.org/10.3390/computation14020042 - 2 Feb 2026
Abstract
The entropic region is formed by the collection of the Shannon entropies of all subvectors of finitely many jointly distributed discrete random variables. For four or more variables, the structure of the entropic region is mostly unknown. We utilize a variant of the Maximum Entropy Method to obtain five-variable non-Shannon entropy inequalities, which delimit the five-variable entropy region. This method adds copies of some of the random variables in generations. A significant reduction in computational complexity, achieved through theoretical considerations and by harnessing the inherent symmetries, allowed us to calculate all five-variable non-Shannon inequalities provided by the first nine generations. Based on the results, we define two infinite collections of such inequalities and prove them to be entropy inequalities. We investigate downward-closed subsets of non-negative lattice points that parameterize these collections, and based on this, we develop an algorithm to enumerate all extremal inequalities. The discovered set of entropy inequalities is conjectured to characterize the applied method completely.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Modelling of Batch Fermentation Processes of Ethanol Production by Kluyveromyces marxianus
by Olympia Roeva, Anastasiya Zlatkova, Velislava Lyubenova, Maya Ignatova, Denitsa Kristeva, Gergana Roeva and Dafina Zoteva
Computation 2026, 14(2), 41; https://doi.org/10.3390/computation14020041 - 2 Feb 2026
Abstract
A representative cluster-based model of the batch process of ethanol production by Kluyveromyces sp. is proposed. Experimental data from fermentation processes of 17 different strains of K. marxianus are used; each of them potentially exhibits different metabolic and kinetic behavior. Three clustering approaches are applied: two modifications of Principal Component Analysis (PCA), namely hierarchical clustering and k-means clustering, and InterCriteria Analysis (ICrA). These are used to simplify a large dataset into a smaller set while preserving as much information as possible. The experimental data are organized into two main clusters. As a result, the most representative fermentation processes are identified. For each of the fermentation processes in the clusters, structural and parameter identification are performed. Four different structures describing the specific substrate (glucose) consumption rate are applied. The best structure is used to derive the representative model using the data from the first cluster. Verification of the derived model is performed using experimental data of the second cluster. Model parameter identification is performed by applying an evolutionary optimization algorithm.
Full article
(This article belongs to the Section Computational Biology)
Open Access Article
Can Generative AI Co-Evolve with Human Guidance and Display Non-Utilitarian Moral Behavior?
by Rafael Lahoz-Beltra
Computation 2026, 14(2), 40; https://doi.org/10.3390/computation14020040 - 2 Feb 2026
Abstract
The growing presence of autonomous AI systems, such as self-driving cars and humanoid robots, raises critical ethical questions about how these technologies should make moral decisions. Most existing moral machine (MM) models rely on secular, utilitarian principles, which prioritize the greatest good for the greatest number but often overlook the religious and cultural values that shape moral reasoning across different traditions. This paper explores how theological perspectives, particularly those from Christian, Islamic, and East Asian ethical frameworks, can inform and enrich algorithmic ethics in autonomous systems. By integrating these religious values, the study proposes a more inclusive approach to AI decision making that respects diverse beliefs. A key innovation of this research is the use of large language models (LLMs), such as ChatGPT (GPT-5.2), to design, with human guidance, MM architectures that incorporate these ethical systems. Through Python 3 scripts, the paper demonstrates how autonomous machines, e.g., vehicles and humanoid robots, can make ethically informed decisions based on different religious principles. The aim is to contribute to the development of AI systems that are not only technologically advanced but also culturally sensitive and ethically responsible, ensuring that they align with a wide range of theological values in morally complex situations.
Full article
(This article belongs to the Section Computational Social Science)
Open Access Article
Semi-Empirical Estimation of Aerosol Particle Influence on the Performance of Terrestrial FSO Links over the Sea
by Argyris N. Stassinakis, Efstratios V. Chatzikontis, Kyle R. Drexler, Andreas D. Tsigopoulos, Gratchia Mkrttchian and Hector E. Nistazakis
Computation 2026, 14(2), 39; https://doi.org/10.3390/computation14020039 - 2 Feb 2026
Abstract
Free-space optical (FSO) communication enables high-bandwidth license-free data transmission and is particularly attractive for maritime point-to-point links. However, FSO performance is strongly affected by atmospheric conditions. This work presents a semi-empirical model quantifying the impact of fine particulate matter (PM2.5) on received optical power in a maritime FSO link. The model is derived from long-term experimental measurements collected over a 2.96 km horizontal optical path above the sea surface, combining received signal strength indicator (RSSI) data with co-located PM2.5 observations. Statistical analysis reveals a strong negative correlation between PM2.5 concentration and received optical power (Pearson coefficient −0.748). Using a logarithmic attenuation formulation, the PM2.5-induced attenuation is estimated to increase by approximately 0.0026 dB/km per µg/m3 of PM2.5 concentration. A second-order semi-empirical model captures the observed nonlinear attenuation behavior with a coefficient of determination of R2 = 0.57. The proposed model provides a practical tool for link budgeting, performance forecasting, and adaptive design of maritime FSO systems operating in aerosol-rich environments.
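The reported linear sensitivity translates directly into a small link-budget helper. Only the linear term from the abstract (approximately 0.0026 dB/km per µg/m³ of PM2.5) is used here; the second-order semi-empirical model is omitted because its coefficients are not given in the abstract.

```python
def pm25_attenuation_db(pm25_ugm3, path_km, slope_db_per_km=0.0026):
    """Excess optical attenuation (dB) attributable to PM2.5 over a
    horizontal path, using the linear slope reported in the abstract:
    ~0.0026 dB/km per ug/m^3 of PM2.5 concentration."""
    return slope_db_per_km * pm25_ugm3 * path_km

# Excess loss over the paper's 2.96 km experimental path at 50 ug/m^3:
loss = pm25_attenuation_db(50.0, 2.96)
```

At 50 µg/m³ over the 2.96 km path this gives roughly 0.38 dB of extra loss, a margin a link-budget designer would add on top of clear-air and turbulence terms.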
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Development of a Dashboard for Simulation Workflow Visualization and Optimization of an Ammonia Synthesis Reactor in the HySTrAm Project (Horizon EU)
by Eleni Douvi, Dimitra Douvi, Jason Tsahalis and Haralabos-Theodoros Tsahalis
Computation 2026, 14(2), 38; https://doi.org/10.3390/computation14020038 - 2 Feb 2026
Abstract
Although hydrogen plays a crucial role in the EU's strategy to reduce greenhouse gas emissions, its storage and transport are technically challenging. If ammonia is produced efficiently, it can be a promising hydrogen carrier, especially in decentralized and flexible conditions. The Horizon EU HySTrAm project addresses this problem by developing a small-scale, containerized demonstration plant consisting of (1) a short-term hydrogen storage container using novel ultraporous materials optimized through machine learning, and (2) an ammonia synthesis reactor based on an improved low-pressure Haber–Bosch process. This paper presents an initial version of a Python (v3.9)-based dashboard designed to visualize and optimize the simulation workflow of the ammonia synthesis process. Designed as a baseline for a future online, automated tool, the dashboard allows the comparison of three reactor configurations already defined through simulations and aligned with the upcoming experimental campaign: single tube, two reactors in parallel swing mode, and two reactors in series. Pressures at the inlet/outlet, temperatures across the reactor, operation recipe, and ammonia production over time are displayed dynamically to evaluate the performance of the reactor. Future versions will include optimization features, such as the identification of optimal operating modes, reduced production time, increased productivity, and catalyst degradation estimation.
Full article
(This article belongs to the Special Issue Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025))
Open Access Article
LocRes-PINN: A Physics-Informed Neural Network with Local Awareness and Residual Learning
by Tangying Lv, Wenming Yin, Hengkai Yao, Qingliang Liu, Yitong Sun, Kuan Zhao and Shanliang Zhu
Computation 2026, 14(2), 37; https://doi.org/10.3390/computation14020037 - 2 Feb 2026
Abstract
Physics-Informed Neural Networks (PINNs) have demonstrated efficacy in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, they frequently struggle to accurately capture multiscale physical features, particularly in regions exhibiting sharp local variations such as shock waves and discontinuities, and often suffer from optimization difficulties in complex loss landscapes. To address these issues, we propose LocRes-PINN, a physics-informed neural network framework that integrates local awareness mechanisms with residual learning. This framework integrates a radial basis function (RBF) encoder to enhance the perception of local variations and embeds it within a residual backbone to facilitate stable gradient propagation. Furthermore, we incorporate a residual-based adaptive refinement strategy and an adaptive weighted loss scheme to dynamically focus training on high-error regions and balance multi-objective constraints. Numerical experiments on the Extended Korteweg–de Vries, Navier–Stokes, and Burgers equations demonstrate that LocRes-PINN reduces relative prediction errors by approximately 12% to 67% compared to standard benchmarks. The results also verify the model's robustness in parameter identification and noise resilience.
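The RBF encoder is straightforward to sketch for a scalar input: one Gaussian feature per center, so nearby inputs activate mostly the same features while distant ones do not, giving the network the local awareness the abstract describes. The centers and width below are arbitrary, and the paper's encoder presumably operates on the full space-time input rather than a scalar.

```python
import math

def rbf_encode(x, centers, sigma=0.5):
    """Gaussian radial-basis encoding of a scalar input:
    phi_i(x) = exp(-(x - c_i)^2 / (2 sigma^2)), one feature per center.
    Each feature is localized around its center, so sharp local
    variation in x maps to large changes in only a few features."""
    return [math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2)) for c in centers]
```

An input sitting exactly on a center activates that feature fully (value 1) while the others decay with distance, in contrast to global encodings such as raw polynomials.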
Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
Open Access Article
A Method for Road Spectrum Identification in Real-Vehicle Tests by Fusing Time-Frequency Domain Features
by Biao Qiu and Chaiyan Jettanasen
Computation 2026, 14(2), 36; https://doi.org/10.3390/computation14020036 - 2 Feb 2026
Abstract
Most unpaved roads are subjectively classified as Class D roads. However, significant variations exist across different sites and environments (e.g., mining areas). A major challenge in the engineering field is how to quickly correct the Power Spectral Density (PSD) of the unpaved road in question using existing equipment and limited sensors. To address this issue, this study combines real-vehicle test data with a suspension dynamics simulation model. It employs time-domain reconstruction via Inverse Fast Fourier Transform (IFFT) and wavelet processing methods to construct an optimized model that fuses time-frequency domain features. With the help of a surrogate optimization method, the model achieves the best approximation of the actual road surface, corrects the PSD parameters of the unpaved road, and provides a reliable input basis for vehicle dynamics simulation, fatigue life prediction, and performance evaluation.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Comparison of Lagrangian and Isogeometric Boundary Element Formulations for Orthotropic Heat Conduction Problems
by Ege Erdoğan and Barbaros Çetin
Computation 2026, 14(2), 35; https://doi.org/10.3390/computation14020035 - 2 Feb 2026
Abstract
Orthotropic materials are increasingly employed in advanced thermal systems due to their direction-dependent heat transfer characteristics. Accurate numerical modeling of heat conduction in such media remains challenging, particularly for 3D geometries with nonlinear boundary conditions and internal heat generation. In this study, conventional boundary element method (BEM) and isogeometric boundary element method (IGABEM) formulations are developed and compared for steady-state orthotropic heat conduction problems. A coordinate transformation is adopted to map the anisotropic governing equation onto an equivalent isotropic form, enabling the use of classical Laplace fundamental solutions. Volumetric heat generation is incorporated via the radial integration method (RIM), preserving the boundary-only discretization, while nonlinear Robin boundary conditions are treated using variable condensation and a Newton–Raphson iterative scheme. The performance of both methods is evaluated using a hollow ellipsoidal benchmark problem with available analytical solutions. The results demonstrate that IGABEM provides higher accuracy and smoother convergence than conventional BEM, particularly for higher-order discretizations, owing to its exact geometric representation and higher continuity. Although IGABEM involves additional computational overhead due to NURBS evaluations, both methods exhibit similar quadratic scaling with respect to the number of degrees of freedom.
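The coordinate transformation described in the abstract can be illustrated numerically: stretching each axis by the square root of its conductivity maps the orthotropic equation onto Laplace's equation, so any harmonic function of the stretched coordinates solves the original problem. A minimal sketch with arbitrary conductivities (not the paper's code):

```python
import numpy as np

# Orthotropic steady conduction: kx*u_xx + ky*u_yy = 0.
# With X = x/sqrt(kx), Y = y/sqrt(ky) this becomes u_XX + u_YY = 0, so any
# harmonic f(X, Y) yields a solution u(x, y) = f(x/sqrt(kx), y/sqrt(ky)).
kx, ky = 4.0, 1.5
f = lambda X, Y: X**2 - Y**2            # harmonic in the stretched coordinates
u = lambda x, y: f(x / np.sqrt(kx), y / np.sqrt(ky))

# Central-difference check of kx*u_xx + ky*u_yy at a sample point
h = 1e-3
x0, y0 = 0.7, -0.3
u_xx = (u(x0 + h, y0) - 2 * u(x0, y0) + u(x0 - h, y0)) / h**2
u_yy = (u(x0, y0 + h) - 2 * u(x0, y0) + u(x0, y0 - h)) / h**2
residual = kx * u_xx + ky * u_yy        # should vanish up to rounding
```

This is why the classical Laplace fundamental solution becomes usable: the transformed problem is plain isotropic conduction in the stretched domain.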
Full article
(This article belongs to the Special Issue Computational Heat and Mass Transfer (ICCHMT 2025))
Open Access Article
Application of the Dynamic Latent Space Model to Social Networks with Time-Varying Covariates
by
Ziqian Xu and Zhiyong Zhang
Computation 2026, 14(2), 34; https://doi.org/10.3390/computation14020034 - 1 Feb 2026
Abstract
With the growing accessibility of tools such as online surveys and web scraping, longitudinal social network data are increasingly collected in social science research alongside non-network survey data. Such data play a critical role in helping social scientists understand how relationships develop and evolve over time. Existing dynamic network models, such as the Stochastic Actor-Oriented Model and the Temporal Exponential Random Graph Model, provide frameworks for analyzing traits of both the networks and external non-network covariates. However, research on the dynamic latent space model (DLSM) has focused mainly on factors intrinsic to the networks themselves; despite some discussion, the role of non-network data such as contextual or behavioral covariates remains underexplored in the context of DLSMs. In this study, an application of the DLSM that incorporates dynamic non-network covariates collected alongside friendship networks via autoregressive processes is presented. By analyzing two friendship network datasets with different time points and psychological covariates, it is shown how external factors can contribute to a deeper understanding of social interaction dynamics over time.
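The autoregressive processes used here to carry time-varying covariates can be illustrated with a minimal simulation; the AR(1) form, the stationary initialization, and the dimensions below are illustrative assumptions rather than the authors' specification.

```python
import numpy as np

def simulate_ar1_covariate(n_actors, n_waves, phi=0.6, sigma=1.0, seed=0):
    """Simulate a time-varying actor covariate as an AR(1) process,
    z_t = phi * z_{t-1} + e_t, one series per actor across network waves."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n_waves, n_actors))
    # draw the first wave from the stationary distribution of the process
    z[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_actors)
    for t in range(1, n_waves):
        z[t] = phi * z[t - 1] + rng.normal(0.0, sigma, n_actors)
    return z

# e.g., 30 actors observed over 4 network waves
cov = simulate_ar1_covariate(n_actors=30, n_waves=4)
```

In a DLSM-style analysis, a series like this would enter the model as a wave-specific actor attribute whose effect on latent positions or tie probabilities is estimated jointly with the network dynamics.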
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Data Science Methods in Social Sciences)
Open Access Article
Integrative Nutritional Assessment of Avocado Leaves Using Entropy-Weighted Spectral Indices and Fusion Learning
by
Zhen Guo, Juan Sebastian Estrada, Xingfeng Guo, Redmond R. Shamshiri, Marcelo Pereyra and Fernando Auat Cheein
Computation 2026, 14(2), 33; https://doi.org/10.3390/computation14020033 - 1 Feb 2026
Abstract
Accurate and non-destructive assessment of plant nutritional status remains a key challenge in precision agriculture, particularly under dynamic physiological conditions such as dehydration. This study therefore developed an integrated nutritional assessment framework for avocado (Persea americana Mill.) leaves across progressive dehydration stages using spectral analysis. A novel nutritional function index (NFI) was constructed using an entropy-weighted multi-criteria decision-making approach. This unified assessment metric integrates critical physiological indicators, including moisture content, nitrogen content, and chlorophyll content estimated from soil and plant analyzer development (SPAD) readings. To enhance the prediction accuracy and interpretability of the NFI, vegetation indices (VIs) specifically tailored to the NFI were systematically constructed using exhaustive wavelength-combination screening. Optimal wavelengths identified in the short-wave infrared region (1446, 1455, 1465, 1865, and 1937 nm) were used to build physiologically meaningful VIs that are highly sensitive to moisture and biochemical constituents. Feature wavelengths selected via the successive projections algorithm and competitive adaptive reweighted sampling further reduced spectral redundancy and improved modeling efficiency. Both feature-level and algorithm-level data fusion effectively combined the VIs and selected feature wavelengths, significantly enhancing prediction performance. The stacking algorithm demonstrated robust performance, achieving the highest predictive accuracy (R2V = 0.986, RMSEV = 0.032) for NFI estimation. This fusion-based modeling approach outperformed conventional single-model schemes in accuracy and robustness. Unlike previous studies that focused on isolated spectral predictors, this work introduces an integrative framework combining entropy-weighted feature synthesis and multiscale fusion learning. The developed strategy offers a powerful tool for real-time plant health monitoring and supports precision agricultural decision-making.
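The entropy-weighted construction of a composite index such as the NFI can be sketched as follows. The indicator values are hypothetical, and the paper's exact preprocessing (e.g., min-max scaling before weighting) may differ from this textbook form of the entropy weight method.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based criteria weights for a (samples x criteria) matrix
    of non-negative indicator values: criteria whose values vary more
    across samples (lower entropy of shares) receive larger weights."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                       # share of each sample per criterion
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)          # entropy of each criterion, in [0, 1]
    d = 1.0 - e                                 # degree of divergence
    return d / d.sum()                          # normalized weights

# Hypothetical leaf indicators: moisture, nitrogen, SPAD-estimated chlorophyll
X = np.array([[0.82, 2.1, 41.0],
              [0.65, 1.8, 35.0],
              [0.48, 1.5, 28.0],
              [0.30, 1.2, 22.0]])
w = entropy_weights(X)
nfi = X @ w   # composite index per leaf (unnormalized sketch)
```

The resulting scalar per sample is then what the spectral models (VIs, selected wavelengths, fusion learners) are trained to predict.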
Full article
Open Access Article
A Study of the Efficiency of Parallel Computing for Constructing Bifurcation Diagrams of the Fractional Selkov Oscillator with Variable Coefficients and Memory
by
Dmitriy Tverdyi and Roman Parovik
Computation 2026, 14(2), 32; https://doi.org/10.3390/computation14020032 - 1 Feb 2026
Abstract
This paper presents a comprehensive performance analysis and practical implementation of a parallel algorithm for constructing bifurcation diagrams of the fractional Selkov oscillator with variable coefficients and memory (SFO). The primary contribution lies in the systematic benchmarking and validation of a coarse-grained parallelization strategy (MapReduce) applied to a computationally intensive class of problems: fractional-order systems with hereditary effects. We investigate the efficiency of a parallel algorithm that leverages central processing unit (CPU) capabilities to compute bifurcation diagrams of the Selkov fractional oscillator as a function of the characteristic time scale. The parallel algorithm is implemented in the ABMSelkovFracSim 2.0 software package using Python 3.13. This package also incorporates the Adams–Bashforth–Moulton numerical algorithm for obtaining numerical solutions to the Selkov fractional oscillator, thereby accounting for heredity (memory) effects. The Selkov fractional oscillator is a system of nonlinear ordinary differential equations with Gerasimov–Caputo derivatives of variable fractional order and non-constant coefficients, which include a characteristic time scale parameter to ensure dimensional consistency in the model equations. This paper evaluates the efficiency, speedup, and cost of the parallel algorithm and determines the optimal number of worker processes required to achieve maximum efficiency. We apply the TAECO approach to evaluate the efficiency of the parallel algorithm: T (execution time), A (acceleration), E (efficiency), C (cost), and O (cost optimality index). Graphs illustrating the efficiency characteristics of the parallel algorithm as functions of the number of CPU processes are provided.
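The TAECO metrics named in the abstract can be computed from measured timings as below. The abstract does not spell out the formulas, so the definitions here, standard speedup, efficiency, and cost, plus one plausible reading of the cost optimality index O, are assumptions rather than the authors' exact specification.

```python
def taeco(t_serial, t_parallel, p):
    """TAECO metrics for a parallel run with p worker processes:
    T (execution time), A (acceleration/speedup), E (efficiency),
    C (cost), O (cost optimality index)."""
    A = t_serial / t_parallel   # speedup over the serial run
    E = A / p                   # efficiency per worker process
    C = p * t_parallel          # cost in process-seconds
    O = t_serial / C            # one plausible definition (coincides with E)
    return {"T": t_parallel, "A": A, "E": E, "C": C, "O": O}

# Hypothetical timings: 120 s serial, 20 s on 8 worker processes
m = taeco(120.0, 20.0, 8)
```

Sweeping p over the available cores and plotting these metrics is how the optimal worker count described in the abstract would be located in practice.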
Full article
(This article belongs to the Topic Fractional Calculus: Theory and Applications, 2nd Edition)
Open Access Article
A Replication Study for Consumer Digital Twins: Pilot Sites Analysis and Experience from the SENDER Project (Horizon 2020)
by
Eleni Douvi, Dimitra Douvi, Jason Tsahalis and Haralabos-Theodoros Tsahalis
Computation 2026, 14(2), 31; https://doi.org/10.3390/computation14020031 - 1 Feb 2026
Abstract
The SENDER (Sustainable Consumer Engagement and Demand Response) project aims to develop an innovative interface that engages energy consumers in Demand Response (DR) programs by developing new technologies to predict energy consumption, enhance market flexibility, and manage the exploitation of Renewable Energy Sources (RES). The current paper presents a replication study for consumer Digital Twins (DTs) that simulate energy consumption patterns and occupancy behaviors in various households across three pilot sites (Austria, Spain, and Finland), based on six months of historical and real-time data on loads, sensors, and relevant details for every household. Due to data limitations and inhomogeneity, the replication analysis focuses only on Austria and Spain, where sufficient power and motion-alarm sensor data were available, leading to a replication scenario in which the number of households is gradually increased. Beyond limited data and the short measurement period, other challenges included inconsistencies in sensor installations and limited information on occupancy. To ensure reliable results, the data were filtered, and households with common characteristics were grouped together to improve the accuracy and consistency of the DT modeling. It is concluded that a successful replication procedure requires sufficiently continuous, frequent, and homogeneous data, along with its validation.
Full article
(This article belongs to the Special Issue Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025))
Topics
Topic in
AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran
Deadline: 28 February 2026
Topic in
Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 30 May 2026
Topic in
Brain Sciences, NeuroSci, Applied Sciences, Mathematics, Computation
The Computational Brain
Topic Editors: William Winlow, Andrew Johnson
Deadline: 31 July 2026
Topic in
Sustainability, Remote Sensing, Forests, Applied Sciences, Computation
Artificial Intelligence, Remote Sensing and Digital Twin Driving Innovation in Sustainable Natural Resources and Ecology
Topic Editors: Huaiqing Zhang, Ting Yun
Deadline: 31 January 2027
Special Issues
Special Issue in
Computation
Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025)
Guest Editor: Demos T. Tsahalis
Deadline: 15 February 2026
Special Issue in
Computation
Advanced Computational Methods for PDEs in Optics and High-Performance Computing
Guest Editors: Svetislav Savovic, Miloš Ivanovic, Konstantinos Aidinis
Deadline: 28 February 2026
Special Issue in
Computation
Mathematical and Computational Modeling of Natural and Artificial Human Senses
Guest Editors: Gustavo Olague, Rocío Ochoa-Montiel, Isidro Robledo-Vega, Juan-Manuel Ahuactzin, Marlen Meza-Sánchez
Deadline: 30 March 2026
Special Issue in
Computation
Recent Advances on Computational Linguistics and Natural Language Processing
Guest Editors: Khaled Shaalan, Filippo Palombi
Deadline: 30 April 2026