Computation, Volume 13, Issue 3 (March 2025) – 21 articles

Cover Story: In the following work, an up-scaling framework in a multi-scale setting is presented to calibrate a stochastic material model. The computational setup consists of a continuum coarse-scale model and a discrete fine-scale model that is inherently random. The objective is to calibrate the coarse-scale parameters using measurements from the fine-scale model, so that the former reflects the random nature of the latter with an acceptable level of approximation. The up-scaling task is performed using a generalized version of the Kalman filter, employing a functional approximation of the involved parameters, in a non-intrusive manner. Moreover, the proposed approach offers greater flexibility in the selection of completely different material models on the coarse and fine scales, as the demonstrated numerical example shows.
10 pages, 7877 KiB  
Article
A Molecular Dynamics Simulation on the Methane Adsorption in Nanopores of Shale
by Qiuye Yuan, Jinghua Yang, Shuxia Qiu and Peng Xu
Computation 2025, 13(3), 79; https://doi.org/10.3390/computation13030079 - 20 Mar 2025
Viewed by 214
Abstract
Gas adsorption in nanoscale pores is one of the key theoretical bases for shale gas development. However, the mechanisms governing gas adsorption capacity and the formation of a second adsorption layer in nanoscale pores are complex and difficult to observe directly with traditional experimental methods. Therefore, multilayer graphene is used to model the nanopores in a shale reservoir, and molecular dynamics simulations are carried out to study the adsorption dynamics of methane molecules. The results show that the adsorption density of methane molecules is inversely proportional to the temperature and pore size, and correlates positively with the graphene layer number and pressure. A smaller adsorption region reaches the adsorption equilibrium state earlier and has a thinner adsorption layer. When the pore size exceeds 1.7 nm, methane adsorption transitions from single-layer to double-layer. The peak of the second adsorption layer depends on the pressure and temperature, while its position depends on the pore size. The present work is useful for understanding the dynamic mechanisms of gas molecules in nanoscale confined spaces, and may provide a theoretical basis for the development of unconventional natural gas.
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)

20 pages, 2405 KiB  
Review
A Bibliometric Review of Deep Learning Approaches in Skin Cancer Research
by Catur Supriyanto, Abu Salam, Junta Zeniarja, Danang Wahyu Utomo, Ika Novita Dewi, Cinantya Paramita, Adi Wijaya and Noor Zuraidin Mohd Safar
Computation 2025, 13(3), 78; https://doi.org/10.3390/computation13030078 - 19 Mar 2025
Viewed by 551
Abstract
Early detection of skin cancer is crucial for successful treatment and improved patient outcomes. Medical images play a vital role in this process, serving as the primary data source for both traditional and modern diagnostic approaches. This study provides an overview of the significant role of medical images in skin cancer detection and highlights developments in the use of deep learning for early diagnosis. The scope of this survey includes an in-depth exploration of state-of-the-art deep learning methods, an evaluation of public datasets commonly used for training and validation, and a bibliometric analysis of recent advancements in the field. The survey covers publications in the Scopus database from 2019 to 2024. A search string over abstracts, titles, and keywords, which includes several public datasets such as HAM and ISIC, ensures relevance to the topic; filters are applied by year, document type, source type, and language. The analysis identified 1697 articles, predominantly journal articles and conference proceedings, and shows that the number of articles has increased over the past five years, with growth driven not only by developed countries but also by developing ones. Dermatology departments in various hospitals play a significant role in advancing skin cancer detection methods. In addition to identifying publication trends, this study uses the VOSviewer and Bibliometrix applications to reveal underexplored areas and encourage new exploration.
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
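To make the retrieval step concrete, here is a minimal sketch of a Scopus-style advanced search string of the kind the review describes. The keywords, dataset names, and year window are illustrative assumptions; the authors' actual query is not given in the abstract.

```python
# Illustrative Scopus-style advanced query (assumed, not the authors'
# actual search string) for deep-learning skin-cancer publications.
KEYWORDS = '"skin cancer" AND "deep learning"'
DATASETS = '"HAM10000" OR "ISIC"'   # public datasets named in the review

query = (
    f"TITLE-ABS-KEY(({KEYWORDS}) AND ({DATASETS})) "
    "AND PUBYEAR > 2018 AND PUBYEAR < 2025 "   # the 2019-2024 window
    "AND (DOCTYPE(ar) OR DOCTYPE(cp)) "        # journal articles, proceedings
    "AND LANGUAGE(english)"
)
print(query)
```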

19 pages, 530 KiB  
Article
Analysis of a Queueing Model with Flexible Priority, Batch Arrival, and Impatient Customers
by Alexander Dudin, Olga Dudina, Sergei Dudin and Agassi Melikov
Computation 2025, 13(3), 77; https://doi.org/10.3390/computation13030077 - 18 Mar 2025
Viewed by 180
Abstract
In this study, we consider a multi-server priority queueing model with batch arrivals of two types of customers, a finite main buffer, and two finite input buffers for storing customers that cannot be admitted to service immediately upon arrival. The transition of a customer from an input buffer to the main buffer can occur after an exponentially distributed time. Customers residing in the input and main buffers are impatient. A four-dimensional Markov chain is used to describe the dynamics of the system under consideration; it is analyzed by deriving its generator and providing an effective algorithm for computing its steady-state probabilities. Formulas for calculating the system’s major performance metrics are established. Numerical results are presented that demonstrate the viability of the suggested methods and the effect of varying the transition rates of customers from the input buffers.

22 pages, 3100 KiB  
Article
Evaluating Predictive Models for Three Green Finance Markets: Insights from Statistical vs. Machine Learning Approaches
by Sonia Benghiat and Salim Lahmiri
Computation 2025, 13(3), 76; https://doi.org/10.3390/computation13030076 - 14 Mar 2025
Viewed by 479
Abstract
As climate change has become of prominent importance in the last two decades, so has interest in industry-wide carbon emissions and policies promoting a low-carbon economy. Investors and policymakers could improve their decision-making by producing accurate forecasts of relevant green finance market indices: carbon efficiency, clean energy, and sustainability. The purpose of this paper is to compare the performance of single-step univariate forecasts produced by a set of selected statistical and regression-tree-based predictive models, using large datasets of over 2500 daily records of green market indices gathered over a ten-year timespan. The statistical models include simple exponential smoothing, Holt’s method, the ETS version of the exponential model, linear regression, the weighted moving average, and the autoregressive moving average (ARMA). The decision-tree-based machine learning (ML) methods include standard regression trees and two ensemble methods, namely random forests and extreme gradient boosting (XGBoost). The forecasting results show that (i) exponential smoothing models achieve the best performance, and (ii) the ensemble methods, XGBoost and random forests, perform better than standard regression trees. The findings of this study will be valuable to both policymakers and investors: policymakers can leverage these predictive models to design balanced policy interventions that support environmentally sustainable businesses while fostering continued economic growth, while investors and traders benefit from the computationally cost-effective model attributes identified in this study, which ease adaptation to rapid market changes.
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research: 2nd Edition)
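As a hedged illustration of the paper's single-step univariate comparison, the sketch below pits simple exponential smoothing and Holt's method against an XGBoost lag-feature regressor on a synthetic stand-in series; the data and model settings are assumptions, not the configurations tuned in the study.

```python
# Single-step univariate forecast comparison: exponential smoothing vs.
# XGBoost on a synthetic "green index" series (illustrative data only).
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing, Holt
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.05, 1.0, 2500)) + 100.0  # stand-in daily index
train, test = y[:-250], y[-250:]

# Statistical baselines: refit and forecast one step at a time.
def one_step_stat(model_cls, series, horizon):
    preds, history = [], list(series)
    for actual in horizon:
        fit = model_cls(np.asarray(history)).fit()
        preds.append(fit.forecast(1)[0])
        history.append(actual)
    return np.array(preds)

ses_pred = one_step_stat(SimpleExpSmoothing, train, test)
holt_pred = one_step_stat(Holt, train, test)

# Tree ensemble: frame one-step forecasting as supervised learning on lags.
def lag_matrix(series, n_lags=5):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X_train, y_train = lag_matrix(train)
xgb = XGBRegressor(n_estimators=300, max_depth=3).fit(X_train, y_train)
X_test, _ = lag_matrix(np.concatenate([train[-5:], test]))
xgb_pred = xgb.predict(X_test)

for name, p in [("SES", ses_pred), ("Holt", holt_pred), ("XGBoost", xgb_pred)]:
    print(f"{name:8s} RMSE: {np.sqrt(np.mean((p - test) ** 2)):.3f}")
```

Framing the tree model on lagged values is one common way to let a regression ensemble produce one-step-ahead forecasts.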

18 pages, 4385 KiB  
Article
Short-Term Load Forecasting in Distribution Substation Using Autoencoder and Radial Basis Function Neural Networks: A Case Study in India
by Venkataramana Veeramsetty, Prabhu Kiran Konda, Rakesh Chandra Dongari and Surender Reddy Salkuti
Computation 2025, 13(3), 75; https://doi.org/10.3390/computation13030075 - 14 Mar 2025
Viewed by 841
Abstract
Electric load forecasting is an essential task for Distribution System Operators in order to achieve proper planning, high integration of small-scale production from renewable energy sources, and effective marketing strategies. In this framework, machine learning and data dimensionality reduction techniques can be useful for building more efficient tools for electrical energy load prediction. In this paper, a machine learning model combining a radial basis function neural network and an autoencoder is used to forecast the electric load on a 33/11 kV substation located in Godishala, Warangal, India. One year of historical substation load and weather data is considered to assess the effectiveness of the proposed model. The impact of weather, day, and season status on load forecasting is also considered. The dimensionality of the input dataset is reduced using the autoencoder to build a lightweight machine learning model that can be deployed on edge devices. The proposed methodology is supported by a comparison with the state of the art based on extensive numerical simulations.
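A minimal sketch of the paper's two-stage idea, assuming illustrative layer sizes and synthetic data: an autoencoder compresses the inputs, and a radial basis function (RBF) model is fitted on the compressed features, here with k-means centers and least-squares output weights, one common RBF construction.

```python
# Sketch: autoencoder for input compression feeding an RBF regressor,
# in the spirit of the paper (architecture sizes and data are assumed).
import numpy as np
import tensorflow as tf
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 12)).astype("float32")    # stand-in load/weather features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 1000)  # stand-in load target

# 1) Autoencoder 12 -> 4 -> 12; the 4-D bottleneck is the reduced input.
inp = tf.keras.Input(shape=(12,))
code = tf.keras.layers.Dense(4, activation="relu")(inp)
out = tf.keras.layers.Dense(12)(code)
auto = tf.keras.Model(inp, out)
auto.compile(optimizer="adam", loss="mse")
auto.fit(X, X, epochs=30, batch_size=32, verbose=0)
Z = tf.keras.Model(inp, code).predict(X, verbose=0)

# 2) RBF network on the compressed features: Gaussian units at k-means
#    centers, linear output weights fitted by least squares.
centers = KMeans(n_clusters=10, n_init=10).fit(Z).cluster_centers_
width = np.mean(cdist(centers, centers)) / np.sqrt(2 * 10)  # width heuristic
Phi = np.exp(-cdist(Z, centers) ** 2 / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```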

35 pages, 2763 KiB  
Article
Integrated Scheduling of Stacker and Reclaimer in Dry Bulk Terminals: A Hybrid Genetic Algorithm
by Imane Torbi, Imad Belassiria, Mohamed Mazouzi and Sanaa Aidi
Computation 2025, 13(3), 74; https://doi.org/10.3390/computation13030074 - 13 Mar 2025
Viewed by 306
Abstract
Competitive dynamics in dry bulk terminals necessitate efficient planning and scheduling to optimize operations. This study focuses on the productivity of stackers and reclaimers by developing a mathematical optimization model to enhance scheduling efficiency. A mixed-integer linear programming (MILP) model was formulated to minimize the maximum completion time (makespan) of operations while ensuring smooth material flow and resource utilization. Given the computational complexity of real-world scenarios, a novel hybrid genetic algorithm (GA) was proposed. This algorithm integrates tabu search to generate a high-quality initial population and employs innovative chromosome designs that respect operational constraints such as equipment availability, material flow continuity, and sequencing restrictions. This hybrid approach balances exploration and exploitation, improving solution convergence and robustness. Computational experiments using real data from a Moroccan dry bulk terminal validated the algorithm’s efficiency and effectiveness. Performance indicators such as makespan reduction, equipment utilization, and computational efficiency were analyzed. The results demonstrate that the hybrid GA significantly reduced processing times and improved resource efficiency compared to conventional methods. Additionally, the algorithm showed scalability across different operational scenarios, confirming its adaptability to dynamic terminal conditions. These findings highlight the potential of advanced optimization techniques to enhance decision making and improve operational productivity in dry bulk terminals.
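The hybrid scheme, tabu search seeding a genetic algorithm's initial population, can be sketched on a toy sequencing objective. The job data, operators, and parameters below are illustrative stand-ins for the paper's MILP-constrained terminal model.

```python
# Sketch of a hybrid GA for a makespan-style sequencing problem, with a
# tabu-search pass seeding the initial population (all data illustrative).
import random

random.seed(42)
JOBS = [(random.randint(5, 20), random.randint(0, 50)) for _ in range(12)]  # (duration, release)

def makespan(order):
    t = 0
    for j in order:
        dur, release = JOBS[j]
        t = max(t, release) + dur
    return t

def tabu_seed(iters=200, tenure=7):
    """Cheap tabu search over adjacent swaps to produce one good seed."""
    best = cur = list(range(len(JOBS)))
    tabu = []
    for _ in range(iters):
        nbrs = []
        for i in range(len(cur) - 1):
            if (i, i + 1) in tabu:
                continue
            cand = cur[:]; cand[i], cand[i + 1] = cand[i + 1], cand[i]
            nbrs.append((makespan(cand), (i, i + 1), cand))
        score, move, cur = min(nbrs)
        tabu = (tabu + [move])[-tenure:]
        if score < makespan(best):
            best = cur
    return best

def crossover(a, b):  # order crossover (OX) keeps permutations valid
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [g for g in b if g not in child]
    return [rest.pop(0) if c is None else c for c in child]

pop = [tabu_seed()] + [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(29)]
for _ in range(100):  # GA loop: elitism, tournament pool, OX, swap mutation
    pop.sort(key=makespan)
    nxt = pop[:5]
    while len(nxt) < len(pop):
        a, b = random.sample(pop[:15], 2)
        c = crossover(a, b)
        if random.random() < 0.2:
            i, k = random.sample(range(len(c)), 2)
            c[i], c[k] = c[k], c[i]
        nxt.append(c)
    pop = nxt
print("best makespan:", makespan(min(pop, key=makespan)))
```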

17 pages, 5071 KiB  
Article
Non-Hydrostatic Galerkin Model with Weighted Average Pressure Profile
by Lucas Calvo, Diana De Padova and Michele Mossa
Computation 2025, 13(3), 73; https://doi.org/10.3390/computation13030073 - 13 Mar 2025
Viewed by 319
Abstract
This work develops a novel two-dimensional, depth-integrated, non-hydrostatic model for wave propagation simulation using a weighted average non-hydrostatic pressure profile. The model is constructed by modifying an existing non-hydrostatic discontinuous/continuous Galerkin finite-element model that assumed a linear vertical non-hydrostatic pressure profile. Using a weighted average linear/quadratic non-hydrostatic pressure profile has been shown to increase the performance of earlier models. The results suggest that implementing a weighted average non-hydrostatic pressure profile, in conjunction with a calculated or optimized weight parameter θ, improves the dispersion characteristics of depth-integrated, non-hydrostatic models in shallow and intermediate water depths. A series of analytical solutions and data from previous laboratory experiments verify and validate the model.
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
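As a hedged reconstruction (the abstract does not spell out the profile), one plausible form of a weighted average linear/quadratic non-hydrostatic pressure profile between the bottom z = −h, where the non-hydrostatic pressure equals q_b, and the free surface z = η, where it vanishes, is

$$ q(z) = \theta\, q_b \left( \frac{\eta - z}{\eta + h} \right)^{2} + (1 - \theta)\, q_b\, \frac{\eta - z}{\eta + h}, \qquad 0 \le \theta \le 1, $$

so that θ = 0 recovers a purely linear profile, while intermediate θ blends in quadratic vertical variation; θ is the weight the abstract describes as calculated or optimized.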

25 pages, 1521 KiB  
Article
Short-Term Predictions of Global Horizontal Irradiance Using Recurrent Neural Networks, Support Vector Regression, Gradient Boosting Random Forest and Advanced Stacking Ensemble Approaches
by Fhulufhelo Walter Mugware, Thakhani Ravele and Caston Sigauke
Computation 2025, 13(3), 72; https://doi.org/10.3390/computation13030072 - 13 Mar 2025
Viewed by 508
Abstract
In today’s world, where sustainable energy is essential for the planet’s survival, accurate solar energy forecasting is crucial. This study focused on predicting short-term Global Horizontal Irradiance (GHI) using minute-averaged data from the Southern African Universities Radiometric Network (SAURAN) at the Univen Radiometric Station in South Africa. Various techniques were evaluated for their predictive accuracy, including Recurrent Neural Networks (RNNs), Support Vector Regression (SVR), Gradient Boosting (GB), Random Forest (RF), Stacking Ensemble, and Double Nested Stacking (DNS). The results indicated that RNN performed the best in terms of mean absolute error (MAE) and root mean squared error (RMSE) among the machine learning models. However, Stacking Ensemble with XGBoost as the meta-model outperformed all individual models, improving accuracy by 67.06% in MAE and 22.28% in RMSE. DNS further enhanced accuracy, achieving a 93.05% reduction in MAE and an 88.54% reduction in RMSE compared to the best machine learning model, as well as a 78.89% decrease in MAE and an 85.27% decrease in RMSE compared to the best single stacking model. Furthermore, experimenting with the order of the DNS meta-model revealed that using RF as the first-level meta-model followed by XGBoost yielded the highest accuracy, showing a 47.39% decrease in MAE and a 61.35% decrease in RMSE compared to DNS with RF at both levels. These findings underscore advanced stacking techniques’ potential to improve GHI forecasting significantly.
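A minimal scikit-learn sketch of the stacking construction the abstract describes, with SVR, gradient boosting, and random forest base learners under an XGBoost meta-model; the data and hyperparameters are illustrative assumptions.

```python
# Sketch of stacking: base learners combined by an XGBoost meta-model,
# on stand-in GHI data (illustrative, not the SAURAN dataset).
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(3000, 6))                   # stand-in minute-level features
y = 800 * X[:, 0] * (1 - 0.3 * X[:, 1]) + rng.normal(0, 20, 3000)  # stand-in GHI

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("svr", SVR(C=10.0)),
        ("gb", GradientBoostingRegressor()),
        ("rf", RandomForestRegressor(n_estimators=200)),
    ],
    final_estimator=XGBRegressor(n_estimators=200),  # meta-model
    cv=5,  # out-of-fold base predictions guard against leakage
)
stack.fit(X_tr, y_tr)
print("stacking MAE:", mean_absolute_error(y_te, stack.predict(X_te)))
```

Double nested stacking in the paper's sense would wrap a second stacking layer around models like this one, with random forest and XGBoost as successive meta-levels.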

21 pages, 5048 KiB  
Article
Numerical Methodology for Enhancing Heat Transfer in a Channel with Arc-Vane Baffles
by Piphatpong Thapmanee, Arnut Phila, Khwanchit Wongcharee, Naoki Maruyama, Masafumi Hirota, Varesa Chuwattanakul and Smith Eiamsa-ard
Computation 2025, 13(3), 71; https://doi.org/10.3390/computation13030071 - 12 Mar 2025
Viewed by 364
Abstract
This study numerically investigates flow and heat transfer in a channel with arc-vane baffles at various radius-to-channel height ratios (r/H = 0.125, 0.25, 0.375, and 0.5) for Reynolds numbers between 6000 and 24,000, focusing on solar air-heater applications. The calculations utilize the finite volume method with the SIMPLE algorithm and the QUICK scheme, and turbulence is modeled with the Renormalization Group (RNG) k-ε model. The results show that arc-vane baffles create double vortices along the axial direction, promoting flow reattachment on the heated surface and enhancing heat transfer. Baffles with smaller r/H ratios strengthen flow reattachment, reduce dead zones, and improve fluid contact with the heat transfer surface. The baffles with the smallest r/H ratio achieve a Nusselt number ratio (Nu/Nus) of 4.91 at Re = 6000. As r/H increases, the friction factor (f) and friction factor ratio (f/fs) rise due to increased baffle curvature and surface area. The highest thermal performance factor (TPF) of 2.28 occurs at r/H = 0.125 and Re = 6000, reflecting an optimal balance of heat transfer and friction losses. Arc-vane baffles with an r/H ratio of 0.125 yield a TPF exceeding unity, indicating potential energy savings. These findings provide valuable insights for optimizing baffle designs to enhance thermal performance in practical applications.
(This article belongs to the Section Computational Engineering)
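The thermal performance factor quoted here is conventionally defined by weighing the Nusselt number gain against the friction penalty at equal pumping power; assuming the standard definition (subscript s denotes the smooth channel),

$$ \mathrm{TPF} = \frac{Nu/Nu_s}{\left(f/f_s\right)^{1/3}}, $$

so the reported TPF of 2.28 at r/H = 0.125 and Re = 6000 indicates that the heat transfer enhancement well outweighs the added friction losses.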

23 pages, 466 KiB  
Article
COVID-19 Data Analysis: The Impact of Missing Data Imputation on Supervised Learning Model Performance
by Jorge Daniel Mello-Román and Adrián Martínez-Amarilla
Computation 2025, 13(3), 70; https://doi.org/10.3390/computation13030070 - 8 Mar 2025
Viewed by 1637
Abstract
The global COVID-19 pandemic has generated extensive datasets, providing opportunities to apply machine learning for diagnostic purposes. This study evaluates the performance of five supervised learning models—Random Forests (RFs), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Logistic Regression (LR), and Decision Trees (DTs)—on a hospital-based dataset from the Concepción Department in Paraguay. To address missing data, four imputation methods (Predictive Mean Matching via MICE, RF-based imputation, K-Nearest Neighbor, and XGBoost-based imputation) were tested. Model performance was compared using metrics such as accuracy, AUC, F1-score, and MCC across five levels of missingness. Overall, RF consistently achieved high accuracy and AUC at the highest missingness level, underscoring its robustness. In contrast, SVM often exhibited a trade-off between specificity and sensitivity. ANN and DT showed moderate resilience, yet were more prone to performance shifts under certain imputation approaches. These findings highlight RF’s adaptability to different imputation strategies, as well as the importance of selecting methods that minimize sensitivity–specificity trade-offs. By comparing multiple imputation techniques and supervised models, this study provides practical insights for handling missing medical data in resource-constrained settings and underscores the value of robust ensemble methods for reliable COVID-19 diagnostics.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
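A hedged sketch of the comparison pipeline, imputing with several strategies before training a supervised model, on synthetic data (the hospital dataset is not public); the MICE-style and RF-based imputers below are scikit-learn stand-ins for the methods named in the abstract.

```python
# Compare imputation strategies feeding a random forest classifier
# (synthetic stand-in data; 20% missingness injected at random).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # stand-in diagnosis label
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.2] = np.nan     # one missingness level

imputers = {
    "KNN": KNNImputer(n_neighbors=5),
    "MICE-style": IterativeImputer(random_state=0),
    "RF-based": IterativeImputer(estimator=RandomForestRegressor(n_estimators=50), random_state=0),
}
for name, imp in imputers.items():
    # For brevity the imputer is fit on all rows; in a real study it
    # should be fit on training folds only to avoid leakage.
    X_imp = imp.fit_transform(X_miss)
    X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"{name:11s} acc={accuracy_score(y_te, pred):.3f} "
          f"MCC={matthews_corrcoef(y_te, pred):.3f}")
```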

22 pages, 5968 KiB  
Article
The Optimization of PID Controller and Color Filter Parameters with a Genetic Algorithm for Pineapple Tracking Using an ROS2 and MicroROS-Based Robotic Head
by Carolina Maldonado-Mendez, Sergio Fabian Ruiz-Paz, Isaac Machorro-Cano, Antonio Marin-Hernandez and Sergio Hernandez-Mendez
Computation 2025, 13(3), 69; https://doi.org/10.3390/computation13030069 - 7 Mar 2025
Viewed by 456
Abstract
This work proposes a vision system mounted on the head of an omnidirectional robot to track pineapples and maintain them at the center of its field of view. The robot head is equipped with a pan–tilt unit that facilitates dynamic adjustments. The system architecture, implemented in Robot Operating System 2 (ROS2), performs the following tasks: it captures images from a webcam embedded in the robot head, segments the object of interest based on color, and computes its centroid. If the centroid deviates from the center of the image plane, a proportional–integral–derivative (PID) controller adjusts the pan–tilt unit to reposition the object at the center, enabling continuous tracking. A multivariate Gaussian function is employed to segment objects with complex color patterns, such as the body of a pineapple. The parameters of both the PID controller and the multivariate Gaussian filter are optimized using a genetic algorithm. The PID controller receives as input the (x, y) positions of the pan–tilt unit, obtained via an embedded board and MicroROS, and generates control signals for the servomotors that drive the pan–tilt mechanism. The experimental results demonstrate that the robot successfully tracks a moving pineapple. Additionally, the color segmentation filter can be further optimized to detect other textured fruits, such as soursop and melon. This research contributes to the advancement of smart agriculture, particularly for fruit crops with rough textures and complex color patterns.
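The two ingredients of the tracking loop, multivariate Gaussian color segmentation and a PID correction toward the image center, can be sketched as follows; the color statistics, gains, and image are illustrative assumptions (the paper tunes both with a genetic algorithm).

```python
# Sketch: Gaussian color-likelihood segmentation plus a PID step for a
# pan-tilt unit (all statistics and gains are assumed, not the tuned ones).
import numpy as np

MU = np.array([180.0, 150.0, 60.0])      # assumed mean pineapple RGB color
COV = np.diag([400.0, 400.0, 300.0])     # assumed color covariance
COV_INV, COV_DET = np.linalg.inv(COV), np.linalg.det(COV)

def segment(img, thresh=1e-6):
    """Binary mask of pixels likely under the fitted RGB Gaussian."""
    d = img.reshape(-1, 3) - MU
    log_norm = -0.5 * np.log((2 * np.pi) ** 3 * COV_DET)
    ll = log_norm - 0.5 * np.einsum("ij,jk,ik->i", d, COV_INV, d)
    return (np.exp(ll) > thresh).reshape(img.shape[:2])

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()) if xs.size else None

class PID:
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev = 0.0, 0.0
    def step(self, err):
        self.i += err * self.dt
        d = (err - self.prev) / self.dt
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

pan = PID(0.01, 0.001, 0.002)            # illustrative gains
img = np.random.default_rng(0).integers(0, 256, (240, 320, 3)).astype(float)
c = centroid(segment(img))
if c is not None:
    err_x = 160 - c[0]                   # deviation from image-plane center
    print("pan command:", pan.step(err_x))
```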

24 pages, 1543 KiB  
Article
Stochastic Up-Scaling of Discrete Fine-Scale Models Using Bayesian Updating
by Muhammad Sadiq Sarfaraz, Bojana V. Rosić and Hermann G. Matthies
Computation 2025, 13(3), 68; https://doi.org/10.3390/computation13030068 - 7 Mar 2025
Viewed by 500
Abstract
In this work, we present an up-scaling framework in a multi-scale setting to calibrate a stochastic material model. In particular, we employ Bayesian updating to identify the probability distribution of continuum-based coarse-scale model parameters from fine-scale measurements; the fine-scale model is discrete and also inherently random (aleatory uncertainty) in nature. Owing to the completely dissimilar nature of the models on the involved scales, energy is used as the essential medium of communication between them, i.e., between the predictions of the coarse-scale model and the measurements from the fine-scale model. This task is realized computationally using a generalized version of the Kalman filter, employing a functional approximation of the involved parameters. The approximations are obtained in a non-intrusive manner and are discussed in detail, especially for the fine-scale measurements. The demonstrated numerical examples show the utility and generality of the presented approach in terms of obtaining calibrated coarse-scale models as reasonably accurate approximations of fine-scale ones, and of the greater freedom to select widely different models on both scales.
(This article belongs to the Special Issue Synergy between Multiphysics/Multiscale Modeling and Machine Learning)
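A minimal numpy sketch of the ensemble-style generalized Kalman update that such calibration builds on: parameter samples are shifted by the cross-covariance between parameters and predicted measurements. The linear update form is standard; the energy-like measurement operator and numbers are assumed stand-ins, not the paper's functional approximation.

```python
# Ensemble Kalman-style parameter update: q_a = q_f + K (y_obs - y_f),
# with K = C_qy (C_yy + R)^(-1).  Measurement model is an assumed stand-in.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                    # ensemble size
q = rng.normal(2.0, 0.5, (N, 2))           # prior coarse-scale parameters

def measure(q):
    """Stand-in 'energy' observable of the coarse model (assumed form)."""
    return np.column_stack([q[:, 0] * q[:, 1], q[:, 0] ** 2])

y_f = measure(q)
y_obs = np.array([4.2, 3.9])               # stand-in fine-scale measurement
R = 0.05 * np.eye(2)                       # measurement-error covariance

dq = q - q.mean(axis=0)
dy = y_f - y_f.mean(axis=0)
C_qy = dq.T @ dy / (N - 1)                 # parameter-measurement cross-cov.
C_yy = dy.T @ dy / (N - 1)
K = C_qy @ np.linalg.inv(C_yy + R)

# Perturbed observations give each ensemble member its own correction.
y_pert = y_obs + rng.multivariate_normal(np.zeros(2), R, N)
q_post = q + (y_pert - y_f) @ K.T
print("posterior mean:", q_post.mean(axis=0))
```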

18 pages, 1850 KiB  
Article
MySTOCKS: Multi-Modal Yield eSTimation System of in-prOmotion Commercial Key-ProductS
by Cettina Giaconia and Aziz Chamas
Computation 2025, 13(3), 67; https://doi.org/10.3390/computation13030067 - 6 Mar 2025
Viewed by 340
Abstract
In recent years, Out-of-Stock (OOS) occurrences have posed a persistent challenge for both retailers and manufacturers. In the context of grocery retail, an OOS event represents a situation where customers are unable to locate a specific product when attempting to make a purchase. This study analyzes the issue from the manufacturer’s perspective. The proposed system, named the “Multi-modal yield eSTimation System of in-prOmotion Commercial Key-ProductS” (MySTOCKS) platform, is a sophisticated multi-modal yield estimation system designed to optimize inventory forecasting for the agrifood and large-scale retail sectors, particularly during promotional periods. MySTOCKS addresses the complexities of inventory management in settings where Out-of-Stock (OOS) and Surplus-of-Stock (SOS) situations frequently arise, offering predictive insights into final stock levels across defined forecasting intervals to support sustainable resource management. Unlike traditional approaches, MySTOCKS leverages an advanced deep learning framework that incorporates transformer models with self-attention mechanisms and domain adaptation capabilities, enabling accurate temporal and spatial modeling tailored to the dynamic requirements of the agrifood supply chain. The system includes two distinct forecasting modules: TR1, designed for standard stock-level estimation, and TR2, which focuses on elevated demand periods during promotions. Additionally, MySTOCKS integrates Elastic Weight Consolidation (EWC) to mitigate the effects of catastrophic forgetting, thus enhancing predictive accuracy amidst changing data patterns. Preliminary results indicate high system performance, with test accuracy, sensitivity, and specificity rates approximating 93.8%. This paper provides an in-depth examination of the MySTOCKS platform’s modular structure, data-processing workflow, and its broader implications for sustainable and economically efficient inventory management within agrifood and large-scale retail environments.
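The Elastic Weight Consolidation (EWC) ingredient can be sketched independently of the forecasting transformer: a quadratic penalty, weighted by a Fisher-information estimate, keeps parameters important to earlier data from drifting. The stand-in network, data, and the crude single-batch Fisher estimate below are assumptions.

```python
# Sketch of EWC: penalize movement of weights important to task A while
# training on task B (tiny stand-in network in place of the transformer).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

# After training on task A: store weights and a diagonal Fisher estimate
# (here a crude single-batch squared-gradient estimate).
old_params = {n: p.detach().clone() for n, p in net.named_parameters()}
xA, yA = torch.randn(256, 8), torch.randn(256, 1)
net.zero_grad()
loss_fn(net(xA), yA).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in net.named_parameters()}

def ewc_penalty(lam=100.0):
    return lam / 2 * sum(
        (fisher[n] * (p - old_params[n]) ** 2).sum()
        for n, p in net.named_parameters()
    )

# Task B (e.g., promotional-period data): the penalty discourages moving
# weights that mattered for task A, mitigating catastrophic forgetting.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xB, yB = torch.randn(256, 8), torch.randn(256, 1)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(xB), yB) + ewc_penalty()
    loss.backward()
    opt.step()
print("final task-B loss with EWC:", loss.item())
```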

20 pages, 2026 KiB  
Article
Design of Periodic Neural Networks for Computational Investigations of Nonlinear Hepatitis C Virus Model Under Boozing
by Abdul Mannan, Jamshaid Ul Rahman, Quaid Iqbal and Rubiqa Zulfiqar
Computation 2025, 13(3), 66; https://doi.org/10.3390/computation13030066 - 6 Mar 2025
Cited by 1 | Viewed by 385
Abstract
The computational investigation of nonlinear mathematical models presents significant challenges due to their complex dynamics. This paper presents a computational study of a nonlinear hepatitis C virus model that accounts for the influence of alcohol consumption on disease progression. We employ periodic neural networks, optimized using a hybrid genetic algorithm and the interior-point algorithm, to solve a system of six coupled nonlinear differential equations representing hepatitis C virus dynamics. This model has not previously been solved using the proposed technique, marking a novel approach. The proposed method’s performance is evaluated by comparing the numerical solutions with those obtained from traditional numerical methods. Statistical measures such as the mean absolute error, root mean square error, and Theil’s inequality coefficient are used to assess the accuracy and reliability of the proposed approach. The weight vector distributions illustrate how the network adapts to capture the complex nonlinear behavior of the disease. A comparative analysis with established numerical methods is provided, where performance metrics are illustrated using a range of graphical tools, including box plots, histograms, and loss curves. The absolute error values, ranging approximately from 10⁻⁶ to 10⁻¹⁰, demonstrate the precision, convergence, and robustness of the proposed approach, highlighting its potential applicability to other nonlinear epidemiological models.
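To illustrate the idea of a neural network with periodic activations solving a differential equation, here is a hedged single-equation sketch: a small sine-activated network is trained on the ODE residual plus an initial-condition term. The paper's six-compartment model and its hybrid GA/interior-point training are replaced by a simple test ODE and plain Adam.

```python
# Sketch: sine-activation network trained to satisfy y' = -0.5 y, y(0) = 1
# via a residual loss (illustrative stand-in for the hepatitis C system).
import torch
import torch.nn as nn

class Sin(nn.Module):
    def forward(self, x):
        return torch.sin(x)

net = nn.Sequential(nn.Linear(1, 20), Sin(), nn.Linear(20, 20), Sin(), nn.Linear(20, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(0, 10, 200).reshape(-1, 1).requires_grad_(True)
for epoch in range(3000):
    opt.zero_grad()
    y = net(t)
    dy = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    residual = dy + 0.5 * y                     # ODE residual: y' + 0.5 y = 0
    ic = (net(torch.zeros(1, 1)) - 1.0) ** 2    # initial condition y(0) = 1
    loss = (residual ** 2).mean() + ic.squeeze()
    loss.backward()
    opt.step()

exact = torch.exp(-0.5 * t.detach())
print("max abs error vs. exact:", (net(t).detach() - exact).abs().max().item())
```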

14 pages, 3732 KiB  
Article
Computational Analysis of Pipe Roughness Influence on Slurry Flow Dynamics
by Tanuj Joshi, Om Parkash, Ralph Kristoffer B. Gallegos and Gopal Krishan
Computation 2025, 13(3), 65; https://doi.org/10.3390/computation13030065 - 4 Mar 2025
Viewed by 471
Abstract
Slurry transportation is an essential process in numerous industrial applications, widely studied for its efficiency in material conveyance. Despite substantial research, the impact of pipe wall roughness on critical metrics such as pressure drop, specific energy consumption (SEC), and the Nusselt number remains relatively underexplored. This study provides a detailed analysis using a three-dimensional computational model of a slurry pipeline with a 0.0549 m diameter and 3.8 m length. The model employs an Eulerian multiphase approach coupled with the RNG k-ε turbulence model, assessing slurry concentrations of Cw = 40–60% (by weight). Simulations were conducted at flow velocities Vm = 1–5 m/s, with pipe roughness (Rh) ranging between 10 and 50 µm. The computational findings indicate that both the pressure drop and SEC increase proportionally with roughness height, Vm, and Cw. Interestingly, the Nusselt number appears unaffected by roughness height, although it rises with Vm and Cw. These insights offer a deeper understanding of slurry pipeline dynamics, informing strategies to enhance operational efficiency and performance across various industrial contexts.
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)

23 pages, 9839 KiB  
Article
FPGA Implementation of Synergetic Controller-Based MPPT Algorithm for a Standalone PV System
by Abdul-Basset A. Al-Hussein, Fadhil Rahma Tahir and Viet-Thanh Pham
Computation 2025, 13(3), 64; https://doi.org/10.3390/computation13030064 - 3 Mar 2025
Viewed by 747
Abstract
Photovoltaic (PV) energy is gaining traction due to its direct conversion of sunlight to electricity without harming the environment. It is simple to install, adaptable in size, and has low operational costs. The power output of PV modules varies with solar radiation and cell temperature, so to optimize system efficiency it is crucial to track the PV array’s maximum power point. This paper presents a novel fixed-point FPGA design of a nonlinear maximum power point tracking (MPPT) controller based on synergetic control theory for autonomously driving standalone photovoltaic systems. The proposed solution addresses the chattering issue associated with the sliding mode controller by introducing a new strategy that generates a continuous control law rather than a switching term. Because it requires a lower sample rate when switching to the invariant manifold, its controlled switching frequency makes it better suited for digital applications. The suggested algorithm is first emulated to evaluate its performance, robustness, and efficacy under a standard benchmarked MPPT efficiency (η_MPPT) calculation regime. An FPGA is used for its capability to handle high-speed control tasks more efficiently than traditional microcontroller-based systems; this high-speed response is critical for applications where rapid adaptation to changing conditions, such as fluctuating solar irradiance and temperature levels, is necessary. To validate the effectiveness of the implemented synergetic controller, the system responses under varying meteorological conditions have been analyzed. The results reveal that the synergetic control algorithm provides smooth and precise MPPT.
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
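For readers unfamiliar with synergetic control, the standard construction (assumed here; the paper's specific macro-variable is not given in the abstract) selects a macro-variable ψ(x) of the system states and imposes the manifold-attraction law

$$ T\,\dot{\psi}(x) + \psi(x) = 0, \qquad T > 0, $$

then solves this relation for the control input. The result is a continuous control law that drives the state onto the invariant manifold ψ(x) = 0 at a rate set by T, which is what removes the switching term, and hence the chattering, of sliding mode control.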

24 pages, 12859 KiB  
Article
A DNN-Based Surrogate Constitutive Equation for Geometrically Exact Thin-Walled Rod Members
by Marcos Pires Kassab, Eduardo de Morais Barreto Campello and Adnan Ibrahimbegovic
Computation 2025, 13(3), 63; https://doi.org/10.3390/computation13030063 - 3 Mar 2025
Viewed by 501
Abstract
Kinematically exact rod models were a major breakthrough for evaluating complex frame structures undergoing large displacements and the associated buckling modes. However, they are limited to the analysis of global effects, since the underlying kinematical assumptions typically take into account only cross-sectional rigid-body motion and occasionally torsional warping. For thin-walled members, local effects can be notably important in the overall behavior of the rod. In the present work, high-fidelity simulations using elastic 3D-solid finite elements are employed to provide input data to train a Deep Neural Network (DNN) to act as a surrogate model of the rod’s constitutive equation. It is capable of indirectly representing local effects such as web/flange bending and buckling at a stress-resultant level, yet using only the usual rod degrees of freedom as inputs, given that it is trained to predict the internal energy as a function of generalized rod strains. A series of theoretical constraints for the surrogate model is elaborated, and a practical case is studied, from data generation to the DNN training. The outcome is a successfully trained model for a particular choice of cross-section and elastic material that is ready to be employed in a full rod/frame simulation.
(This article belongs to the Special Issue Synergy between Multiphysics/Multiscale Modeling and Machine Learning)
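A hedged PyTorch sketch of the surrogate's central idea: a network maps generalized rod strains to internal energy, and work-conjugate stress resultants follow by automatic differentiation of that energy. Layer sizes, the strain count, and training data are illustrative assumptions.

```python
# Sketch of an energy-based DNN surrogate constitutive law: strains -> energy,
# with stress resultants as the energy gradient (all sizes/data assumed).
import torch
import torch.nn as nn

N_STRAINS = 7   # assumed: axial, 2 shears, torsion, 2 bendings, warping

energy_net = nn.Sequential(
    nn.Linear(N_STRAINS, 64), nn.Softplus(),   # smooth activations keep
    nn.Linear(64, 64), nn.Softplus(),          # the energy differentiable
    nn.Linear(64, 1),
)

def stress_resultants(strain):
    """Conjugate forces as the gradient of the learned internal energy."""
    strain = strain.requires_grad_(True)
    psi = energy_net(strain).sum()
    return torch.autograd.grad(psi, strain, create_graph=True)[0]

# Training against energies from 3D-solid simulations (stand-in data here).
strains = torch.randn(1024, N_STRAINS) * 1e-3
energies = (strains ** 2).sum(dim=1, keepdim=True)  # stand-in quadratic energy
opt = torch.optim.Adam(energy_net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(energy_net(strains), energies)
    loss.backward()
    opt.step()
print("sample stress resultants:", stress_resultants(torch.zeros(1, N_STRAINS)))
```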

18 pages, 5664 KiB  
Article
Magnetohydrodynamic Blood-Carbon Nanotube Flow and Heat Transfer Control via Carbon Nanotube Geometry and Nanofluid Properties for Hyperthermia Treatment
by Nickolas D. Polychronopoulos, Evangelos Karvelas, Lefteris Benos, Thanasis D. Papathanasiou and Ioannis Sarris
Computation 2025, 13(3), 62; https://doi.org/10.3390/computation13030062 - 3 Mar 2025
Viewed by 541
Abstract
Hyperthermia is a promising medical treatment that uses controlled heat to target and destroy cancer cells while minimizing damage to the surrounding healthy tissue. Unlike conventional methods, it offers reduced risks of infection and shorter recovery periods. This study focuses on the integration of carbon nanotubes (CNTs) within the blood to enable precise heat transfer to tumors. The central idea is that by adjusting the concentration, shape, and size of CNTs, as well as the strength of an external magnetic field, heat transfer can be controlled for targeted treatment. A theoretical model is developed to analyze laminar natural convection within a simplified rectangular porous enclosure resembling a tumor, considering the composition of blood and the geometric characteristics of CNTs, including the interfacial nanolayer thickness. Using an asymptotic expansion method, ordinary differential equations for mass, momentum, and energy balances are derived and solved. Results show that increasing CNT concentration decelerates fluid flow and reduces heat transfer efficiency, while elongated CNTs and thicker nanolayers enhance conduction over convection, to the detriment of heat transfer. Finally, increased tissue permeability—characteristic of cancerous tumors—significantly impacts heat transfer. In conclusion, although the model simplifies real tumor geometries and treatment conditions, it provides valuable theoretical insights into hyperthermia and nanofluid applications for cancer therapy.
(This article belongs to the Special Issue Post-Modern Computational Fluid Dynamics)

22 pages, 1225 KiB  
Article
A Hybrid Physics-Informed and Data-Driven Approach for Predicting the Fatigue Life of Concrete Using an Energy-Based Fatigue Model and Machine Learning
by Himanshu Rana and Adnan Ibrahimbegovic
Computation 2025, 13(3), 61; https://doi.org/10.3390/computation13030061 - 2 Mar 2025
Viewed by 875
Abstract
Fatigue has always been one of the major causes of structural failure, where repeated loading and unloading cycles reduce the fracture energy of the material, causing it to fail at stresses lower than its monotonic strength. However, predicting fatigue life is a highly challenging task and, in this context, the present study proposes a fundamentally new hybrid physics-informed and data-driven approach. Firstly, an energy-based fatigue model is developed to simulate the behavior of concrete under compressive cyclic fatigue loading. The data generated from these numerical simulations are then utilized to train machine learning (ML) models. The stress–strain curve and S-N curve of concrete under compression, obtained from the energy-based model, are validated against experimental data. For the ML models, two different algorithms are used as follows: k-Nearest Neighbors (KNN) and Deep Neural Networks (DNN), where a total of 1962 data instances generated from numerical simulations are used for the training and testing of the ML models. Furthermore, the performance of the ML models is evaluated for out-of-range inputs, where the DNN model with three hidden layers (a complex model with 128, 64, and 32 neurons) provides the best predictions, with only a 0.6% overall error.
(This article belongs to the Section Computational Engineering)

13 pages, 4639 KiB  
Article
A Comparative Study on Fuzzy Logic-Based Liquid Level Control Systems with Integrated Industrial Communication Technology
by Hasan Mhd Nazha, Ali Mahmoud Youssef, Mohamad Ayham Darwich, Their Ahmad Ibrahim and Hala Essa Homsieh
Computation 2025, 13(3), 60; https://doi.org/10.3390/computation13030060 - 2 Mar 2025
Viewed by 904
Abstract
This study presents an advanced control system for liquid level regulation, comparing a traditional proportional-integral-derivative (PID) controller with a fuzzy logic controller. The system integrates a real-time monitoring and control interface, allowing flexible adjustments for research and training applications. Unlike the PID controller, which relies on predefined tuning parameters, the fuzzy logic controller dynamically adjusts control actions based on system behavior, making it more suitable for processes with non-linear dynamics. The experimental results highlight the superior performance of the fuzzy logic controller over the PID controller. Specifically, the fuzzy logic controller achieved a 21% reduction in maximum overshoot, a 62% decrease in peak time, and an 83% reduction in settling time. These improvements demonstrate its ability to handle process fluctuations more efficiently and respond rapidly to changes in liquid levels. By offering enhanced stability and adaptability, the fuzzy logic controller presents a viable alternative for liquid level control applications. Furthermore, this research contributes to the development of flexible and high-performance control solutions that can be implemented in both industrial and educational settings. The proposed system serves as a cost-effective platform for hands-on learning in control system design, reinforcing contemporary engineering education and advancing intelligent control strategies for industrial automation.
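A minimal sketch of a Mamdani-style fuzzy level controller, with triangular memberships, three rules, and centroid defuzzification; the membership ranges and rules are illustrative assumptions, not the paper's tuned controller.

```python
# Minimal fuzzy logic level controller: triangular memberships, three
# rules, centroid defuzzification (all ranges and rules are illustrative).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_valve_command(error):
    """error = setpoint - level, in cm; returns a valve command in [0, 1]."""
    neg = tri(error, -20, -10, 0)
    zero = tri(error, -5, 0, 5)
    pos = tri(error, 0, 10, 20)

    u = np.linspace(0, 1, 101)                 # candidate valve openings
    # Rules: error negative -> close; zero -> hold; positive -> open.
    close = np.minimum(neg, tri(u, -0.4, 0.0, 0.4))
    hold = np.minimum(zero, tri(u, 0.1, 0.5, 0.9))
    open_ = np.minimum(pos, tri(u, 0.6, 1.0, 1.4))
    agg = np.maximum.reduce([close, hold, open_])      # Mamdani aggregation
    return float((u * agg).sum() / (agg.sum() + 1e-12))  # centroid defuzz.

for e in [-8.0, 0.0, 8.0]:
    print(f"error {e:+5.1f} cm -> valve {fuzzy_valve_command(e):.2f}")
```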

14 pages, 3884 KiB  
Article
Exploration of Sign Language Recognition Methods Based on Improved YOLOv5s
by Xiaohua Li, Chaiyan Jettanasen and Pathomthat Chiradeja
Computation 2025, 13(3), 59; https://doi.org/10.3390/computation13030059 - 24 Feb 2025
Viewed by 365
Abstract
Gesture is a natural and intuitive means of interpersonal communication. Sign language recognition has become a hot topic in scientific research, holding significant importance and research value in fields such as deep learning, human–computer interaction, and pattern recognition. A sign language recognition process needs to ensure real-time performance and ease of deployment, and based on these two requirements, this paper proposes an improved YOLOv5s-based sign language recognition algorithm. The lightweight design concept of ShuffleNetV2 was applied to reduce the model size and improve deployability. The specific improvements are as follows: the Focus layer was removed; all convolutional layers and the cross-stage partial bottleneck layer with three convolutional layers in the backbone network were replaced with ShuffleBlock; the spatial pyramid pooling layer and the subsequent cross-stage partial bottleneck layer with three convolutional layers were removed; the neck layer was channel-pruned at its head; and the cross-stage partial bottleneck module with three convolutional layers in the detection head was replaced with a depthwise separable convolution module. Experimental results show that the parameters of the improved YOLOv5 algorithm decreased from 7.2 M to 0.72 M, and the inference time decreased from 3.3 ms to 1.1 ms.
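The depthwise separable convolution that replaces the detection-head bottleneck module can be sketched in PyTorch; the channel sizes are illustrative. The parameter count comparison shows why the substitution shrinks the model.

```python
# Sketch of the depthwise separable convolution used in lightweight
# detectors such as the modified YOLOv5s (channel sizes illustrative).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 per-channel conv followed by a pointwise 1x1 conv."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # YOLOv5 uses SiLU activations

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

std = nn.Conv2d(128, 128, 3, padding=1)
dws = DepthwiseSeparableConv(128, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(f"standard conv params: {count(std):,}; depthwise separable: {count(dws):,}")
x = torch.randn(1, 128, 40, 40)
print("output shape:", dws(x).shape)
```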
