Journal Description
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.8 days after submission; acceptance to publication takes 5.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Mathematics and Its Applications: AppliedMath, Axioms, Computation, Fractal and Fractional, Geometry, International Journal of Topology, Logics, Mathematics and Symmetry.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
Nonlinear System Modelling and Control: Trends, Challenges, and Future Perspectives
Computation 2026, 14(2), 44; https://doi.org/10.3390/computation14020044 - 3 Feb 2026
Abstract
Nonlinear systems engineering has undergone a profound transformation with the rapid development of computational tools and advanced analytical methods [...]
Full article
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
Open Access Article
Methodology for Predicting Geochemical Anomalies Using Preprocessing of Input Geological Data and Dual Application of a Multilayer Perceptron
by Daulet Akhmedov, Baurzhan Bekmukhamedov, Moldir Tanashova and Zulfiya Seitmuratova
Computation 2026, 14(2), 43; https://doi.org/10.3390/computation14020043 - 3 Feb 2026
Abstract
The increasing need for accurate prediction of geochemical anomalies requires methods capable of capturing complex spatial patterns that traditional approaches often fail to represent adequately. For N datasets of the form (Xi,Yi) representing the geographic coordinates of sampling points and Ci denoting the geochemical measurement, training multilayer perceptrons (MLPs) presents a challenge. The low informativeness of the input features and their weak correlation with the target variable result in excessively simplified predictions. Analysis of a baseline model trained only on geographic coordinates showed that, while the loss function converges rapidly, the resulting values become overly “compressed” and fail to reflect the actual concentration range. To address this, a preprocessing method based on anisotropy was developed to enhance the correlation between input and output variables. This approach constructs, for each prediction point, a structured informational model that incorporates the direction and magnitude of spatial variability through sectoral and radial partitioning of the nearest sampling data. The transformed features are then used in a dual-MLP architecture, where the first network produces sectoral estimates, and the second aggregates them into the final prediction. The results show that anisotropic feature transformation significantly improves neural network prediction capabilities in geochemical analysis.
Full article
(This article belongs to the Section Computational Engineering)
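To make the dual-MLP arrangement sketched in the abstract above more concrete (a first network producing sectoral estimates, a second aggregating them into the final prediction), the following minimal Python sketch wires two scikit-learn regressors together. The synthetic data, the two-features-per-sector layout, and the layer sizes are illustrative assumptions, not the authors' preprocessing or architecture.
```python
# Minimal sketch (not the authors' implementation): a two-stage MLP where the
# first network estimates a value per angular sector around a prediction point
# and the second aggregates the sectoral estimates into a final prediction.
# Feature construction, layer sizes, and the synthetic data are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_points, n_sectors = 500, 8

# Hypothetical sectoral features: e.g., mean neighbour concentration and mean
# distance per sector (2 features per sector).
X_sector = rng.normal(size=(n_points * n_sectors, 2))
y_sector = 0.8 * X_sector[:, 0] + rng.normal(scale=0.1, size=n_points * n_sectors)

# Stage 1: sectoral estimator.
mlp_sector = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp_sector.fit(X_sector, y_sector)

# Stage 2: aggregate the per-point sectoral estimates into one prediction.
sector_est = mlp_sector.predict(X_sector).reshape(n_points, n_sectors)
y_point = sector_est.mean(axis=1) + rng.normal(scale=0.05, size=n_points)  # toy target

mlp_agg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp_agg.fit(sector_est, y_point)
print("aggregated R^2 on training data:", mlp_agg.score(sector_est, y_point))
```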
Open Access Article
Information Inequalities for Five Random Variables
by Laszlo Csirmaz and Elod P. Csirmaz
Computation 2026, 14(2), 42; https://doi.org/10.3390/computation14020042 - 2 Feb 2026
Abstract
The entropic region is formed by the collection of the Shannon entropies of all subvectors of finitely many jointly distributed discrete random variables. For four or more variables, the structure of the entropic region is mostly unknown. We utilize a variant of the Maximum Entropy Method to obtain five-variable non-Shannon entropy inequalities, which delimit the five-variable entropy region. This method adds copies of some of the random variables in generations. A significant reduction in computational complexity, achieved through theoretical considerations and by harnessing the inherent symmetries, allowed us to calculate all five-variable non-Shannon inequalities provided by the first nine generations. Based on the results, we define two infinite collections of such inequalities and prove them to be entropy inequalities. We investigate downward-closed subsets of non-negative lattice points that parameterize these collections, and based on this, we develop an algorithm to enumerate all extremal inequalities. The discovered set of entropy inequalities is conjectured to characterize the applied method completely.
Full article
(This article belongs to the Section Computational Engineering)
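As background to the non-Shannon inequalities discussed above, the elementary Shannon (submodularity) inequality H(A) + H(B) ≥ H(A ∪ B) + H(A ∩ B) can be checked numerically for any discrete joint distribution. The sketch below does so for a random three-variable distribution; it is purely illustrative and unrelated to the authors' Maximum Entropy Method computations.
```python
# Illustrative only: verify the Shannon submodularity inequality
# H(A) + H(B) >= H(A ∪ B) + H(A ∩ B) on a random joint distribution of three
# binary variables. This is background for the abstract above, not the
# authors' computation of five-variable non-Shannon inequalities.
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((2, 2, 2))
p /= p.sum()  # joint distribution of (X0, X1, X2)

def H(subset):
    """Shannon entropy (in bits) of the marginal over the variables in `subset`."""
    axes = tuple(i for i in range(3) if i not in subset)
    marg = p.sum(axis=axes) if axes else p
    marg = marg[marg > 0]
    return float(-(marg * np.log2(marg)).sum())

A, B = {0, 1}, {1, 2}
lhs = H(A) + H(B)
rhs = H(A | B) + H(A & B)
print(f"H(A)+H(B) = {lhs:.4f} >= H(AuB)+H(AnB) = {rhs:.4f}: {lhs >= rhs - 1e-12}")
```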
Open Access Article
Modelling of Batch Fermentation Processes of Ethanol Production by Kluyveromyces marxianus
by Olympia Roeva, Anastasiya Zlatkova, Velislava Lyubenova, Maya Ignatova, Denitsa Kristeva, Gergana Roeva and Dafina Zoteva
Computation 2026, 14(2), 41; https://doi.org/10.3390/computation14020041 - 2 Feb 2026
Abstract
A representative cluster-based model of the batch process of ethanol production by Kluyveromyces sp. is proposed. Experimental data from fermentation processes of 17 different strains of K. marxianus are used; each of them potentially exhibits different metabolic and kinetic behavior. Three algorithms for clustering are applied. Two modifications of Principal Component Analysis (PCA)—hierarchical clustering and k-means clustering; and InterCriteria Analysis (ICrA) are used to simplify a large dataset into a smaller set while preserving as much information as possible. The experimental data are organized into two main clusters. As a result, the most representative fermentation processes are identified. For each of the fermentation processes in the clusters, structural and parameter identification are performed. Four different structures describing the specific substrate (glucose) consumption rate are applied. The best structure is used to derive the representative model using the data from the first cluster. Verification of the derived model is performed using experimental data of the second cluster. Model parameter identification is performed by applying an evolutionary optimization algorithm.
Full article
(This article belongs to the Section Computational Biology)
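A minimal sketch of the clustering step described above, reducing per-strain fermentation profiles with PCA and then grouping them by k-means and hierarchical clustering, is given below. The synthetic 17×20 data matrix, the two principal components, and the choice of two clusters are assumptions for illustration, not the authors' dataset or settings; InterCriteria Analysis is not included.
```python
# Minimal sketch (illustrative, not the authors' pipeline): reduce per-strain
# fermentation profiles with PCA, then group them with k-means and
# agglomerative (hierarchical) clustering into two clusters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(42)
# Hypothetical data: 17 strains x 20 sampled process variables (e.g. biomass,
# glucose, ethanol at several time points), flattened into one row per strain.
profiles = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(9, 20)),   # cluster-like group 1
    rng.normal(loc=2.0, scale=1.0, size=(8, 20)),   # cluster-like group 2
])

scores = PCA(n_components=2).fit_transform(profiles)

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
hier_labels = AgglomerativeClustering(n_clusters=2).fit_predict(scores)

print("k-means labels:     ", kmeans_labels)
print("hierarchical labels:", hier_labels)
```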
Open Access Article
Can Generative AI Co-Evolve with Human Guidance and Display Non-Utilitarian Moral Behavior?
by Rafael Lahoz-Beltra
Computation 2026, 14(2), 40; https://doi.org/10.3390/computation14020040 - 2 Feb 2026
Abstract
The growing presence of autonomous AI systems, such as self-driving cars and humanoid robots, raises critical ethical questions about how these technologies should make moral decisions. Most existing moral machine (MM) models rely on secular, utilitarian principles, which prioritize the greatest good for the greatest number but often overlook the religious and cultural values that shape moral reasoning across different traditions. This paper explores how theological perspectives, particularly those from Christian, Islamic, and East Asian ethical frameworks, can inform and enrich algorithmic ethics in autonomous systems. By integrating these religious values, the study proposes a more inclusive approach to AI decision making that respects diverse beliefs. A key innovation of this research is the use of large language models (LLMs), such as ChatGPT (GPT-5.2), to design with human guidance MM architectures that incorporate these ethical systems. Through Python 3 scripts, the paper demonstrates how autonomous machines, e.g., vehicles and humanoid robots, can make ethically informed decisions based on different religious principles. The aim is to contribute to the development of AI systems that are not only technologically advanced but also culturally sensitive and ethically responsible, ensuring that they align with a wide range of theological values in morally complex situations.
Full article
(This article belongs to the Section Computational Social Science)
Open Access Article
Semi-Empirical Estimation of Aerosol Particle Influence at the Performance of Terrestrial FSO Links over the Sea
by Argyris N. Stassinakis, Efstratios V. Chatzikontis, Kyle R. Drexler, Andreas D. Tsigopoulos, Gratchia Mkrttchian and Hector E. Nistazakis
Computation 2026, 14(2), 39; https://doi.org/10.3390/computation14020039 - 2 Feb 2026
Abstract
Free-space optical (FSO) communication enables high-bandwidth license-free data transmission and is particularly attractive for maritime point-to-point links. However, FSO performance is strongly affected by atmospheric conditions. This work presents a semi-empirical model quantifying the impact of fine particulate matter (PM2.5) on received optical power in a maritime FSO link. The model is derived from long-term experimental measurements collected over a 2.96 km horizontal optical path above the sea surface, combining received signal strength indicator (RSSI) data with co-located PM2.5 observations. Statistical analysis reveals a strong negative correlation between PM2.5 concentration and received optical power (Pearson coefficient −0.748). Using a logarithmic attenuation formulation, the PM2.5-induced attenuation is estimated to increase by approximately 0.0026 dB/km per µg/m³ of PM2.5 concentration. A second-order semi-empirical model captures the observed nonlinear attenuation behavior with a coefficient of determination of R² = 0.57. The proposed model provides a practical tool for link budgeting, performance forecasting, and adaptive design of maritime FSO systems operating in aerosol-rich environments.
Full article
(This article belongs to the Section Computational Engineering)
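The attenuation slope quoted in the abstract above (roughly 0.0026 dB/km per µg/m³ of PM2.5) lends itself to a quick link-budget estimate over the reported 2.96 km path. In the sketch below, the clear-air baseline attenuation is a placeholder assumption, not a value from the paper.
```python
# Quick link-budget sketch based on the slope quoted in the abstract:
# PM2.5-induced attenuation rises by ~0.0026 dB/km per ug/m^3. The baseline
# (clear-air) attenuation below is a placeholder assumption, not a value
# from the paper.
SLOPE_DB_PER_KM_PER_UGM3 = 0.0026
PATH_LENGTH_KM = 2.96           # horizontal path length reported in the abstract
BASELINE_ATTEN_DB_PER_KM = 0.5  # assumed clear-air attenuation (illustrative)

def pm25_link_loss_db(pm25_ugm3: float) -> float:
    """Total path loss (dB) from baseline plus PM2.5-dependent attenuation."""
    per_km = BASELINE_ATTEN_DB_PER_KM + SLOPE_DB_PER_KM_PER_UGM3 * pm25_ugm3
    return per_km * PATH_LENGTH_KM

for pm25 in (5, 25, 50, 100):
    print(f"PM2.5 = {pm25:4d} ug/m^3 -> path loss ~ {pm25_link_loss_db(pm25):.2f} dB")
```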
Open Access Article
Development of a Dashboard for Simulation Workflow Visualization and Optimization of an Ammonia Synthesis Reactor in the HySTrAm Project (Horizon EU)
by Eleni Douvi, Dimitra Douvi, Jason Tsahalis and Haralabos-Theodoros Tsahalis
Computation 2026, 14(2), 38; https://doi.org/10.3390/computation14020038 - 2 Feb 2026
Abstract
Although hydrogen plays a crucial role in the EU’s strategy to reduce greenhouse gas emissions, its storage and transport are technically challenging. If ammonia is produced efficiently, it can be a promising hydrogen carrier, especially in decentralized and flexible conditions. The Horizon EU HySTrAm project addresses this problem by developing a small-scale, containerized demonstration plant consisting of (1) a short-term hydrogen storage container using novel ultraporous materials optimized through machine learning, and (2) an ammonia synthesis reactor based on an improved low-pressure Haber–Bosch process. This paper presents an initial version of a Python (v3.9)-based dashboard designed to visualize and optimize the simulation workflow of the ammonia synthesis process. Designed as a baseline for a future online, automated tool, the dashboard allows the comparison of three reactor configurations already defined through simulations and aligned with the upcoming experimental campaign: single tube, two reactors in parallel swing mode and two reactors in series. Pressures at the inlet/outlet, temperatures across the reactor, operation recipe and ammonia production over time are displayed dynamically to evaluate the performance of the reactor. Future versions will include optimization features, such as the identification of optimal operating modes, the reduction of production time, an increase of productivity, and catalyst degradation estimation.
Full article
(This article belongs to the Special Issue Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025))
Open Access Article
LocRes–PINN: A Physics–Informed Neural Network with Local Awareness and Residual Learning
by Tangying Lv, Wenming Yin, Hengkai Yao, Qingliang Liu, Yitong Sun, Kuan Zhao and Shanliang Zhu
Computation 2026, 14(2), 37; https://doi.org/10.3390/computation14020037 - 2 Feb 2026
Abstract
Physics–Informed Neural Networks (PINNs) have demonstrated efficacy in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, they frequently struggle to accurately capture multiscale physical features, particularly in regions exhibiting sharp local variations such as shock waves and discontinuities, and often suffer from optimization difficulties in complex loss landscapes. To address these issues, we propose LocRes–PINN, a physics–informed neural network framework that integrates local awareness mechanisms with residual learning. This framework integrates a radial basis function (RBF) encoder to enhance the perception of local variations and embeds it within a residual backbone to facilitate stable gradient propagation. Furthermore, we incorporate a residual–based adaptive refinement strategy and an adaptive weighted loss scheme to dynamically focus training on high–error regions and balance multi–objective constraints. Numerical experiments on the Extended Korteweg–de Vries, Navier–Stokes, and Burgers equations demonstrate that LocRes–PINN reduces relative prediction errors by approximately 12% to 67% compared to standard benchmarks. The results also verify the model’s robustness in parameter identification and noise resilience.
Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
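The two building blocks named in the abstract, an RBF encoder for local awareness and a residual backbone, can be sketched in a few lines of PyTorch. The centre count, widths, and layer sizes below are assumptions, and the sketch omits the PDE residual loss, adaptive refinement, and adaptive loss weighting, so it is not the LocRes–PINN implementation.
```python
# Sketch of the two building blocks named in the abstract -- a radial basis
# function (RBF) encoder and a residual backbone -- assembled into a small
# network. Centre counts, widths, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RBFEncoder(nn.Module):
    """Map inputs (x, t) to Gaussian RBF features around learnable centres."""
    def __init__(self, in_dim: int, n_centres: int):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(n_centres, in_dim))
        self.log_gamma = nn.Parameter(torch.zeros(n_centres))  # per-centre width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d2 = ((x.unsqueeze(1) - self.centres) ** 2).sum(-1)    # squared distances
        return torch.exp(-torch.exp(self.log_gamma) * d2)

class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, width))

    def forward(self, h):
        return torch.tanh(h + self.net(h))                     # skip connection

class LocalResidualNet(nn.Module):
    def __init__(self, in_dim=2, n_centres=32, width=64, n_blocks=4, out_dim=1):
        super().__init__()
        self.encoder = RBFEncoder(in_dim, n_centres)
        self.lift = nn.Linear(n_centres, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.head = nn.Linear(width, out_dim)

    def forward(self, xt):
        return self.head(self.blocks(torch.tanh(self.lift(self.encoder(xt)))))

model = LocalResidualNet()
u = model(torch.rand(16, 2))   # e.g. 16 collocation points (x, t)
print(u.shape)                 # torch.Size([16, 1])
```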
Open Access Article
A Method for Road Spectrum Identification in Real-Vehicle Tests by Fusing Time-Frequency Domain Features
by Biao Qiu and Chaiyan Jettanasen
Computation 2026, 14(2), 36; https://doi.org/10.3390/computation14020036 - 2 Feb 2026
Abstract
Most unpaved roads are subjectively classified as Class D roads. However, significant variations exist across different sites and environments (e.g., mining areas). A major challenge in the engineering field is how to quickly correct the Power Spectral Density (PSD) of the unpaved road in question using existing equipment and limited sensors. To address this issue, this study combines real-vehicle test data with a suspension dynamics simulation model. It employs time-domain reconstruction via Inverse Fast Fourier Transform (IFFT) and wavelet processing methods to construct an optimized model that fuses time-frequency domain features. With the help of a surrogate optimization method, the model achieves the best approximation of the actual road surface, corrects the PSD parameters of the unpaved road, and provides a reliable input basis for vehicle dynamics simulation, fatigue life prediction, and performance evaluation.
Full article
(This article belongs to the Section Computational Engineering)
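The IFFT-based time-domain reconstruction mentioned above is a standard way to synthesize a road profile from a target displacement PSD: assign each spatial-frequency bin an amplitude consistent with the PSD and a random phase, then invert. In the sketch below, the roughness coefficient and frequency handling are illustrative (a Class D-like level), not parameters identified in the paper.
```python
# Sketch of the standard PSD-to-profile reconstruction via inverse FFT:
# assign each spatial-frequency bin an amplitude consistent with the target
# PSD and a random phase, then transform back to the spatial domain.
# The roughness coefficient and frequency band are illustrative assumptions
# (roughly a Class D-like level), not values from the paper.
import numpy as np

rng = np.random.default_rng(7)

L = 1000.0                      # road length [m]
dx = 0.1                        # spatial step [m]
N = int(L / dx)
n = np.fft.rfftfreq(N, d=dx)    # spatial frequencies [cycles/m]

n0 = 0.1                        # reference spatial frequency [cycles/m]
Gd_n0 = 1024e-6                 # displacement PSD at n0 [m^3], assumed value
Gd = np.zeros_like(n)
valid = n > 0
Gd[valid] = Gd_n0 * (n[valid] / n0) ** -2.0   # Gd(n) = Gd(n0) * (n/n0)^-2

dn = 1.0 / L                    # frequency resolution [cycles/m]
amplitude = np.sqrt(2.0 * Gd * dn)            # one-sided cosine amplitudes
phase = rng.uniform(0.0, 2.0 * np.pi, size=n.size)

# Build the complex one-sided spectrum and invert; the N/2 factor compensates
# numpy's irfft normalization so the cosine amplitudes are preserved.
spectrum = amplitude * np.exp(1j * phase) * N / 2.0
profile = np.fft.irfft(spectrum, n=N)         # road elevation [m] vs distance

print("RMS elevation: %.4f m" % profile.std())
```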
Open Access Article
Comparison of Lagrangian and Isogeometric Boundary Element Formulations for Orthotropic Heat Conduction Problems
by Ege Erdoğan and Barbaros Çetin
Computation 2026, 14(2), 35; https://doi.org/10.3390/computation14020035 - 2 Feb 2026
Abstract
Orthotropic materials are increasingly employed in advanced thermal systems due to their direction-dependent heat transfer characteristics. Accurate numerical modeling of heat conduction in such media remains challenging, particularly for 3D geometries with nonlinear boundary conditions and internal heat generation. In this study, conventional boundary element method (BEM) and isogeometric boundary element method (IGABEM) formulations are developed and compared for steady-state orthotropic heat conduction problems. A coordinate transformation is adopted to map the anisotropic governing equation onto an equivalent isotropic form, enabling the use of classical Laplace fundamental solutions. Volumetric heat generation is incorporated via the radial integration method (RIM), preserving the boundary-only discretization, while nonlinear Robin boundary conditions are treated using variable condensation and a Newton–Raphson iterative scheme. The performance of both methods is evaluated using a hollow ellipsoidal benchmark problem with available analytical solutions. The results demonstrate that IGABEM provides higher accuracy and smoother convergence than conventional BEM, particularly for higher-order discretizations, which is owing to its exact geometric representation and higher continuity. Although IGABEM involves additional computational overhead due to NURBS evaluations, both methods exhibit similar quadratic scaling with respect to the degrees of freedom.
Full article
(This article belongs to the Special Issue Computational Heat and Mass Transfer (ICCHMT 2025))
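The coordinate transformation referred to in the abstract is the classical stretching that maps steady orthotropic conduction onto the Laplace equation; a generic form is sketched below (the specific scaling adopted in the paper may differ).
```latex
% Steady orthotropic conduction with principal conductivities k_x, k_y, k_z:
k_x \frac{\partial^2 T}{\partial x^2}
  + k_y \frac{\partial^2 T}{\partial y^2}
  + k_z \frac{\partial^2 T}{\partial z^2} = 0 .
% Stretch the coordinates with a reference conductivity k, e.g. k = (k_x k_y k_z)^{1/3}:
\bar{x} = x\sqrt{k/k_x}, \qquad
\bar{y} = y\sqrt{k/k_y}, \qquad
\bar{z} = z\sqrt{k/k_z}
\quad\Longrightarrow\quad
\frac{\partial^2 T}{\partial \bar{x}^2}
  + \frac{\partial^2 T}{\partial \bar{y}^2}
  + \frac{\partial^2 T}{\partial \bar{z}^2} = 0 ,
% so the classical Laplace fundamental solution can be used in the stretched domain.
```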
Open Access Article
Application of the Dynamic Latent Space Model to Social Networks with Time-Varying Covariates
by Ziqian Xu and Zhiyong Zhang
Computation 2026, 14(2), 34; https://doi.org/10.3390/computation14020034 - 1 Feb 2026
Abstract
With the growing accessibility of tools such as online surveys and web scraping, longitudinal social network data are more commonly collected in social science research along with non-network survey data. Such data play a critical role in helping social scientists understand how relationships develop and evolve over time. Existing dynamic network models such as the Stochastic Actor-Oriented Model and the Temporal Exponential Random Graph Model provide frameworks to analyze traits of both the networks and the external non-network covariates. However, research on the dynamic latent space model (DLSM) has focused mainly on factors intrinsic to the networks themselves. Despite some discussion, the role of non-network data such as contextual or behavioral covariates remains a topic to be further explored in the context of DLSMs. In this study, one application of the DLSM to incorporate dynamic non-network covariates collected alongside friendship networks using autoregressive processes is presented. By analyzing two friendship network datasets with different time points and psychological covariates, it is shown how external factors can contribute to a deeper understanding of social interaction dynamics over time.
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Data Science Methods in Social Sciences)
Open Access Article
Integrative Nutritional Assessment of Avocado Leaves Using Entropy-Weighted Spectral Indices and Fusion Learning
by Zhen Guo, Juan Sebastian Estrada, Xingfeng Guo, Redmond Shanshir, Marcelo Pereya and Fernando Auat Cheein
Computation 2026, 14(2), 33; https://doi.org/10.3390/computation14020033 - 1 Feb 2026
Abstract
Accurate and non-destructive assessment of plant nutritional status remains a key challenge in precision agriculture, particularly under dynamic physiological conditions such as dehydration. Therefore, this study focused on developing an integrated nutritional assessment framework for avocado (Persea americana Mill.) leaves across progressive dehydration stages using spectral analysis. A novel nutritional function index (NFI) was innovatively constructed using an entropy-weighted multi-criteria decision-making approach. This unified assessment metric integrated critical physiological indicators, such as moisture content, nitrogen content, and chlorophyll content estimated from soil and plant analyzer development (SPAD) readings. To enhance the prediction accuracy and interpretability of NFI, innovative vegetation indices (VIs) specifically tailored to NFI were systematically constructed using exhaustive wavelength-combination screening. Optimal wavelengths identified from short-wave infrared regions (1446, 1455, 1465, 1865, and 1937 nm) were employed to build physiologically meaningful VIs, which were highly sensitive to moisture and biochemical constituents. Feature wavelengths selected via the successive projections algorithm and competitive adaptive reweighted sampling further reduced spectral redundancy and improved modeling efficiency. Both feature-level and algorithm-level data fusion methods effectively combined VIs and selected feature wavelengths, significantly enhancing prediction performance. The stacking algorithm demonstrated robust performance, achieving the highest predictive accuracy (R2V = 0.986, RMSEV = 0.032) for NFI estimation. This fusion-based modeling approach outperformed conventional single-model schemes in terms of accuracy and robustness. Unlike previous studies that focused on isolated spectral predictors, this work introduces an integrative framework combining entropy-weighted feature synthesis and multiscale fusion learning. The developed strategy offers a powerful tool for real-time plant health monitoring and supports precision agricultural decision-making.
Full article
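The entropy-weighted multi-criteria step used above to build the nutritional function index follows the general entropy weight recipe; the sketch below computes such weights for a toy matrix of three indicators (moisture, nitrogen, SPAD-type chlorophyll). The data, normalization direction, and aggregation are illustrative assumptions, not the paper's exact formulation.
```python
# Illustrative entropy weight method (standard MCDM recipe, not the paper's
# exact implementation): normalize each indicator column, compute its Shannon
# entropy across samples, and weight indicators by 1 - entropy.
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical indicator matrix: rows = leaf samples, columns = moisture,
# nitrogen and SPAD-estimated chlorophyll (arbitrary units and ranges).
X = rng.uniform(low=[40, 1.0, 20], high=[80, 3.5, 60], size=(30, 3))

# Min-max normalization per indicator ("larger is better" assumed here).
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
P = Xn / Xn.sum(axis=0)                       # column-wise proportions

k = 1.0 / np.log(X.shape[0])
P_safe = np.where(P > 0, P, 1.0)              # log(1) = 0 contributes nothing
entropy = -k * (P * np.log(P_safe)).sum(axis=0)
weights = (1.0 - entropy) / (1.0 - entropy).sum()

composite = (Xn * weights).sum(axis=1)        # toy composite index per sample
print("indicator weights (moisture, N, SPAD):", np.round(weights, 3))
print("first five composite scores:", np.round(composite[:5], 4))
```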

Open Access Article
A Study of the Efficiency of Parallel Computing for Constructing Bifurcation Diagrams of the Fractional Selkov Oscillator with Variable Coefficients and Memory
by Dmitriy Tverdyi and Roman Parovik
Computation 2026, 14(2), 32; https://doi.org/10.3390/computation14020032 - 1 Feb 2026
Abstract
This paper presents a comprehensive performance analysis and practical implementation of a parallel algorithm for constructing bifurcation diagrams of the fractional Selkov oscillator with variable coefficients and memory (SFO). The primary contribution lies in the systematic benchmarking and validation of a coarse-grained parallelization strategy (MapReduce) applied to a computationally intensive class of problems—fractional-order systems with hereditary effects. We investigate the efficiency of a parallel algorithm that leverages central processing unit (CPU) capabilities to compute bifurcation diagrams of the Selkov fractional oscillator as a function of the characteristic time scale. The parallel algorithm is implemented in the ABMSelkovFracSim 2.0 software package using Python 3.13. This package also incorporates the Adams–Bashforth–Moulton numerical algorithm for obtaining numerical solutions to the Selkov fractional oscillator, thereby accounting for heredity (memory) effects. The Selkov fractional oscillator is a system of nonlinear ordinary differential equations with Gerasimov–Caputo derivatives of fractional order variables and non-constant coefficients, which include a characteristic time scale parameter to ensure dimensional consistency in the model equations. This paper evaluates the efficiency, speedup, and cost of the parallel algorithm, and determines its optimal configuration based on the number of worker processes. The optimal number of processes required to achieve maximum efficiency for the algorithm is determined. We apply the TAECO approach to evaluate the efficiency of the parallel algorithm: T (execution time), A (acceleration), E (efficiency), C (cost), O (cost optimality index). Graphs illustrating the efficiency characteristics of the parallel algorithm as functions of the number of CPU processes are provided.
Full article
(This article belongs to the Topic Fractional Calculus: Theory and Applications, 2nd Edition)
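Most of the T, A, E, C quantities listed above reduce to the familiar parallel-performance definitions; the sketch below tabulates them for hypothetical timings. The timings are placeholders, and the cost optimality index O is defined in the paper and not reproduced here.
```python
# Hedged sketch of the standard parallel-performance quantities behind a
# TAECO-style evaluation: execution time T_p, acceleration (speedup)
# A = T_1 / T_p, efficiency E = A / p, and cost C = p * T_p. The cost
# optimality index O is defined in the paper and is not reproduced here.
# The timings below are invented placeholders, not measurements.
timings = {1: 1200.0, 2: 640.0, 4: 350.0, 8: 210.0, 16: 160.0}  # seconds (assumed)

T1 = timings[1]
print(f"{'p':>3} {'T_p [s]':>9} {'A':>6} {'E':>6} {'C [s]':>9}")
for p, Tp in sorted(timings.items()):
    A = T1 / Tp   # speedup relative to the serial run
    E = A / p     # efficiency per worker process
    C = p * Tp    # cost: total processor-seconds
    print(f"{p:>3} {Tp:>9.1f} {A:>6.2f} {E:>6.2f} {C:>9.1f}")
```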
Open Access Article
A Replication Study for Consumer Digital Twins: Pilot Sites Analysis and Experience from the SENDER Project (Horizon 2020)
by Eleni Douvi, Dimitra Douvi, Jason Tsahalis and Haralabos-Theodoros Tsahalis
Computation 2026, 14(2), 31; https://doi.org/10.3390/computation14020031 - 1 Feb 2026
Abstract
The SENDER (Sustainable Consumer Engagement and Demand Response) project aims to develop an innovative interface that engages energy consumers in Demand Response (DR) programs by developing new technologies to predict energy consumption, enhance market flexibility, and manage the exploitation of Renewable Energy Sources (RES). The current paper presents a replication study for consumer Digital Twins (DTs) that simulate energy consumption patterns and occupancy behaviors in various households across three pilot sites (Austria, Spain, Finland) based on six-month historical and real-time data related to loads, sensors, and relevant details for every household. Due to data limitations and inhomogeneity, we conducted a replication analysis focusing only on Austria and Spain, where available data regarding power and motion alarm sensors were sufficient, leading to a replication scenario by gradually increasing the number of households. In addition to limited data and short time of measurements, other challenges faced included inconsistencies in sensor installations and limited information on occupancy. In order to ensure reliable results, data was filtered, and households with common characteristics were grouped together to improve accuracy and consistency in DT modeling. Finally, it was concluded that a successful replication procedure requires sufficient continuous, frequent, and homogeneous data, along with its validation.
Full article
(This article belongs to the Special Issue Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025))
Open Access Article
Analyzing the Impact of Vandalism, Hoarding, and Strikes on Fuel Distribution in Nigeria
by Adam Ajimoti Ishaq, Kazeem Babatunde Akande, Samuel T. Akinyemi, Adejimi A. Adeniji, Kekana C. Malesela and Kayode Oshinubi
Computation 2026, 14(2), 30; https://doi.org/10.3390/computation14020030 - 1 Feb 2026
Abstract
Fuel scarcity remains a recurrent challenge in Nigeria, with significant socioeconomic consequences despite the country’s status as a major crude oil producer. This study develops a novel deterministic mathematical model to examine the dynamics of petroleum product distribution in Nigeria’s downstream sector, with particular emphasis on Premium Motor Spirit (PMS). The model explicitly incorporates key disruption and behavioral mechanisms: pipeline vandalism, industrial actions, product diversion, and hoarding that collectively drive persistent fuel shortages. The model’s feasibility, positivity of solutions, and existence and uniqueness were established, ensuring consistency with real-world operational conditions. Five equilibrium points were identified, reflecting distinct operational regimes within the distribution network. A critical distribution threshold was analytically derived and numerically validated, revealing that a minimum supply of approximately 42 million liters of PMS per day is required to satisfy demand and eliminate fuel queues. Local and global stability analyses, conducted using Lyapunov functions and the Routh–Hurwitz criteria, demonstrate that stable fuel distribution is achievable under effective policy coordination and stakeholder compliance. Numerical simulations show that hoarding by private retail marketers substantially intensifies scarcity, while industrial actions by transporters exert a more severe disruption than pipeline vandalism. The results further highlight the stabilizing role of alternative transportation routes, such as rail systems, in mitigating infrastructure failures and road-based logistics risks. Although refinery sources are aggregated and rail transport is idealized, the proposed framework offers a robust and adaptable tool for policy analysis, with relevance to both oil-producing and fuel-import-dependent economies.
Full article

Open Access Editorial
Advanced Topology Optimization: Methods and Applications
by Yun-Fei Fu
Computation 2026, 14(2), 29; https://doi.org/10.3390/computation14020029 - 29 Jan 2026
Abstract
Structural topology optimization is a powerful computational design paradigm that seeks the most efficient material distribution within a prescribed design domain to satisfy given performance requirements [...]
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Open Access Article
Method for Simulating Solar Panel Oscillations Considering Thermal Shock
by Andrey V. Sedelnikov and Alexandra S. Marshalkina
Computation 2026, 14(2), 28; https://doi.org/10.3390/computation14020028 - 24 Jan 2026
Abstract
The purpose of this work is to develop an approximate method for simulating the oscillations of a solar panel with consideration of thermal shock, based on a simulated spacecraft system model. The influence of thermal shock is reduced to an additional rotation of the spacecraft. The mechanical system itself (the spacecraft model) consists of a main body (a rigid body) and a flexible solar panel. The solar panel performs natural oscillations. An analysis of the influence of thermal shock on the parameters of natural oscillations was conducted. Results of computer simulation for a spacecraft configuration with a single solar panel are presented.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Review
State-of-the-Art Overview of Smooth-Edged Material Distribution for Optimizing Topology (SEMDOT) Algorithm
by Minyan Liu, Wanghua Hu, Xuhui Gong, Hao Zhou and Baolin Zhao
Computation 2026, 14(1), 27; https://doi.org/10.3390/computation14010027 - 21 Jan 2026
Abstract
Topology optimization is a powerful and efficient design tool, but the structures obtained by element-based topology optimization methods are often limited by fuzzy or jagged boundaries. The smooth-edged material distribution for optimizing topology algorithm (SEMDOT) can effectively deal with this problem and promote the practical application of topology optimization structures. This review outlines the theoretical evolution of SEMDOT, including both penalty-based and non-penalty-based formulations, while also providing access to open access codes. SEMDOT’s applications cover diverse areas, including self-supporting structures, energy-efficient manufacturing, bone tissue scaffolds, heat transfer systems, and building parts, demonstrating the versatility of SEMDOT. While SEMDOT addresses boundary issues in topology optimization structures, further theoretical refinement is needed to develop it into a comprehensive platform. This work consolidates the advances in SEMDOT, highlights its interdisciplinary impact, and identifies future research and implementation directions.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Open Access Article
Regression Extensions of the New Polynomial Exponential Distribution: NPED-GLM and Poisson–NPED Count Models with Applications in Engineering and Insurance
by Halim Zeghdoudi, Sandra S. Ferreira, Vinoth Raman and Dário Ferreira
Computation 2026, 14(1), 26; https://doi.org/10.3390/computation14010026 - 21 Jan 2026
Abstract
The New Polynomial Exponential Distribution (NPED), introduced by Beghriche et al. (2022), provides a flexible one-parameter family capable of representing diverse hazard shapes and heavy-tailed behavior. Regression frameworks based on the NPED, however, have not yet been established. This paper introduces two methodological extensions: (i) a generalized linear model (NPED-GLM) in which the distribution parameter depends on covariates, and (ii) a Poisson–NPED count regression model suitable for overdispersed and heavy-tailed count data. Likelihood-based inference, asymptotic properties, and simulation studies are developed to investigate the performance of the estimators. Applications to engineering failure-count data and insurance claim frequencies illustrate the advantages of the proposed models relative to classical Poisson, negative binomial, and Poisson–Lindley regressions. These developments substantially broaden the applicability of the NPED in actuarial science, reliability engineering, and applied statistics.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Embedding-Based Alignments Capture Structural and Sequence Domains of Distantly Related Multifunctional Human Proteins
by Gabriele Vazzana, Matteo Manfredi, Castrense Savojardo, Pier Luigi Martelli and Rita Casadio
Computation 2026, 14(1), 25; https://doi.org/10.3390/computation14010025 - 20 Jan 2026
Abstract
Protein embedding is a protein representation that carries along the information derived from filtering large volumes of sequences stored in large archives. Routinely, the protein is represented by a matrix in which each residue is a context-specific vector whose dimensions reflect the size of the large architectures of neural networks (transformers) trained with deep learning algorithms on large volumes of sequences. A recently introduced method (Embedding-Based Alignment, EBA) is particularly suited for pairwise embedding comparisons and, as we report here, allows for remote homolog detection under specific constraints, including protein sequence length similarity. Multifunctional proteins are present in different species. However, particularly in humans, the problem of their structural and functional annotation is urgent since, according to recent statistics, they comprise up to 50% of the human reference proteome. In this paper we show that when EBA is applied to a set of randomly selected multifunctional human proteins, it retrieves, after a clustering procedure and rigorous validation on the reference Swiss-Prot database, proteins that are remote homologs to each other and carry similar structural and functional features as the query protein.
Full article
(This article belongs to the Section Computational Biology)
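Pairwise comparison of per-residue embeddings, the operation EBA builds on, can be illustrated with a cosine-similarity matrix between two embedding matrices and a naive best-match score. The random embeddings, the 1024-dimensional size, and the scoring below are assumptions for illustration and are not the EBA scoring scheme.
```python
# Illustrative only (not the EBA algorithm): compare two proteins represented
# as per-residue embedding matrices by computing the residue-by-residue cosine
# similarity matrix and a naive symmetric best-match score.
import numpy as np

rng = np.random.default_rng(5)
emb_a = rng.normal(size=(120, 1024))   # hypothetical protein A: 120 residues
emb_b = rng.normal(size=(115, 1024))   # hypothetical protein B: 115 residues

def unit_rows(m: np.ndarray) -> np.ndarray:
    """Normalize each row to unit length so dot products become cosines."""
    return m / np.linalg.norm(m, axis=1, keepdims=True)

sim = unit_rows(emb_a) @ unit_rows(emb_b).T      # (120, 115) cosine similarities

# Naive score: average of each residue's best match, symmetrized.
score = 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())
print(f"similarity matrix shape: {sim.shape}, best-match score: {score:.3f}")
```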
News
22 January 2026
“Do Not Be Afraid of New Things”: Prof. Michele Parrinello on Scientific Curiosity and the Importance of Fundamental Research
6 November 2025
MDPI Launches the Michele Parrinello Award for Pioneering Contributions in Computational Physical Science
Topics
Topic in AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran; Deadline: 28 February 2026
Topic in Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi; Deadline: 30 May 2026
Topic in Brain Sciences, NeuroSci, Applied Sciences, Mathematics, Computation
The Computational Brain
Topic Editors: William Winlow, Andrew Johnson; Deadline: 31 July 2026
Topic in Sustainability, Remote Sensing, Forests, Applied Sciences, Computation
Artificial Intelligence, Remote Sensing and Digital Twin Driving Innovation in Sustainable Natural Resources and Ecology
Topic Editors: Huaiqing Zhang, Ting Yun; Deadline: 31 January 2027
Special Issues
Special Issue in Computation
Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025)
Guest Editor: Demos T. Tsahalis; Deadline: 15 February 2026
Special Issue in Computation
Advanced Computational Methods for PDEs in Optics and High-Performance Computing
Guest Editors: Svetislav Savovic, Miloš Ivanovic, Konstantinos Aidinis; Deadline: 28 February 2026
Special Issue in Computation
Multiscale Modeling on Energy Storage Devices: Bridging Atomistic to System-Level Insights
Guest Editor: Diego E. Galvez-Aranda; Deadline: 3 March 2026
Special Issue in Computation
Mathematical and Computational Modeling of Natural and Artificial Human Senses
Guest Editors: Gustavo Olague, Rocío Ochoa-Montiel, Isidro Robledo-Vega, Juan-Manuel Ahuactzin, Marlen Meza-Sánchez; Deadline: 30 March 2026


