Journal Description
Computation
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.7 days after submission; acceptance to publication is undertaken in 5.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
Mathematical Optimization in Machine Learning for Computational Chemistry
Computation 2025, 13(7), 169; https://doi.org/10.3390/computation13070169 - 11 Jul 2025
Abstract
Machine learning (ML) is transforming computational chemistry by accelerating molecular simulations, property prediction, and inverse design. Central to this transformation is mathematical optimization, which underpins nearly every stage of model development, from training neural networks and tuning hyperparameters to navigating chemical space for molecular discovery. This review presents a structured overview of optimization techniques used in ML for computational chemistry, including gradient-based methods (e.g., SGD and Adam), probabilistic approaches (e.g., Monte Carlo sampling and Bayesian optimization), and spectral methods. We classify optimization targets into model parameter optimization, hyperparameter selection, and molecular optimization and analyze their application across supervised, unsupervised, and reinforcement learning frameworks. Additionally, we examine key challenges such as data scarcity, limited generalization, and computational cost, outlining how mathematical strategies like active learning, meta-learning, and hybrid physics-informed models can address these issues. By bridging optimization methodology with domain-specific challenges, this review highlights how tailored optimization strategies enhance the accuracy, efficiency, and scalability of ML models in computational chemistry.
Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
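For orientation, the review's gradient-based family is easy to make concrete: below is a minimal NumPy sketch of the Adam update it names (alongside SGD), applied to a toy quadratic loss. The hyperparameter values are the usual defaults, not anything prescribed by the review.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates with bias correction."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2        # second-moment estimate
    m_hat = m / (1 - b1**t)                # bias-corrected moments
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy quadratic loss L(theta) = ||theta||^2, so grad = 2 * theta
theta, m, v = np.array([3.0, -2.0]), np.zeros(2), np.zeros(2)
for t in range(1, 5001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # -> near the minimizer [0, 0]
```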
Open Access Article
Simultaneous Multi-Objective and Topology Optimization: Effect of Mesh Refinement and Number of Iterations on Computational Cost
by Daniel Miler, Matija Hoić, Rudolf Tomić, Andrej Jokić and Robert Mašović
Computation 2025, 13(7), 168; https://doi.org/10.3390/computation13070168 - 11 Jul 2025
Abstract
In this study, a multi-objective optimization procedure with embedded topology optimization is presented. The procedure simultaneously optimizes the spatial arrangement and topology of bodies in a multi-body system. The multi-objective algorithm determines the locations of supports, joints, active loads, reactions, and load magnitudes, which serve as inputs for the topology optimization of each body, and it dynamically adjusts domain size, support locations, and load magnitudes during optimization. Due to repeated topology optimization calls within the genetic algorithm, the computational cost is significant. To address this, two reduction strategies are proposed: (I) using a coarser mesh and (II) reducing the number of iterations during the initial generations. As optimization progresses, Strategy I gradually refines the mesh, while Strategy II increases the maximum allowable iteration count. The effectiveness of both strategies is evaluated against a baseline (Reference) without reductions. By the 25th generation, all approaches achieve similar hypervolume values (Reference: 2.181; I: 2.112; II: 2.133), while the computation time is substantially reduced (Reference: 42,226 s; I: 16,814 s; II: 21,674 s), demonstrating that both strategies effectively accelerate optimization without compromising solution quality.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
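The two cost-reduction strategies amount to generation-dependent schedules. A minimal sketch, with illustrative numbers rather than the paper's actual mesh sizes or iteration caps:

```python
def mesh_size(gen, total_gens=25, coarse=30, fine=120):
    """Strategy I: begin with a coarse mesh, refine as generations progress."""
    return int(coarse + gen / total_gens * (fine - coarse))

def max_iters(gen, total_gens=25, lo=20, hi=200):
    """Strategy II: raise the topology-optimization iteration cap over time."""
    return int(lo + gen / total_gens * (hi - lo))

for gen in (1, 10, 25):
    print(gen, mesh_size(gen), max_iters(gen))  # cheap early, accurate late
```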
Open Access Article
Useful Results for the Qualitative Analysis of Generalized Hattaf Mixed Fractional Differential Equations with Applications to Medicine
by Khalid Hattaf
Computation 2025, 13(7), 167; https://doi.org/10.3390/computation13070167 - 10 Jul 2025
Abstract
Most fractional differential equations (FDEs) that model real-world phenomena in various fields of science, industry, and engineering are complex and cannot be solved analytically. This paper mainly aims to present some useful results for studying the qualitative properties of solutions of FDEs involving the new generalized Hattaf mixed (GHM) fractional derivative, which encompasses many types of fractional operators with both singular and non-singular kernels. In addition, this study aims to unify and generalize existing results under a broader operator. Furthermore, the obtained results are applied to some linear systems arising from medicine.
Full article
(This article belongs to the Section Computational Biology)
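The GHM derivative itself is not defined in this summary; for orientation, the classical Caputo derivative, one of the singular-kernel operators that such generalized families encompass, reads:

```latex
{}^{C}\!D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(s)}{(t-s)^{\alpha}}\, ds, \qquad 0 < \alpha < 1 .
```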
Open Access Article
Some Secret Sharing Based on Hyperplanes
by Guohui Wang and Yucheng Chen
Computation 2025, 13(7), 166; https://doi.org/10.3390/computation13070166 - 10 Jul 2025
Abstract
Secret sharing schemes (SSS) are widely used in secure multi-party computation and distributed computing, and the access structure is the key to constructing a secret sharing scheme. In this paper, we propose a method for constructing access structures based on hyperplane combinatorial structures over finite fields. For a given access structure, a corresponding secret sharing scheme that can identify cheaters is given. This scheme enables the secret to be correctly restored provided the number of cheaters does not exceed the threshold, and the cheating behavior can be detected and located.
Full article
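The abstract does not reproduce the construction, but the classical hyperplane-based scheme (Blakley's) illustrates the underlying idea: the secret is a coordinate of a point in F_p^t, each share is a hyperplane through that point, and any t shares pin the point down. A sketch, not the authors' combinatorial scheme (a full (t, n) variant would deal n such hyperplanes):

```python
from random import randrange
from sympy import Matrix

p = 2**31 - 1   # a prime defining the field F_p (illustrative size)
t = 3           # threshold: t independent hyperplanes pin down the point

def deal(secret):
    """Hide the secret as the first coordinate of a point in F_p^t; each share
    is a hyperplane a . x = b (mod p) through that point. Over a large field,
    random hyperplanes are linearly independent with overwhelming probability."""
    point = [secret] + [randrange(p) for _ in range(t - 1)]
    shares = []
    for _ in range(t):
        a = [randrange(1, p) for _ in range(t)]
        b = sum(ai * xi for ai, xi in zip(a, point)) % p
        shares.append((a, b))
    return shares

def reconstruct(shares):
    """Solve the t x t system A x = b over F_p; the secret is x[0]."""
    A = Matrix([a for a, _ in shares])
    b = Matrix([bv for _, bv in shares])
    x = (A.inv_mod(p) * b).applyfunc(lambda v: v % p)
    return int(x[0])

print(reconstruct(deal(123456789)))  # -> 123456789
```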
Open Access Article
A Segmented Linear Regression Study of Seasonal Profiles of COVID-19 Deaths in Italy: September 2021–September 2024
by Marco Roccetti and Eugenio Maria De Rosa
Computation 2025, 13(7), 165; https://doi.org/10.3390/computation13070165 - 9 Jul 2025
Abstract
Using a segmented linear regression model, we examined the seasonal profiles of weekly COVID-19 deaths in Italy over a three-year period during which the SARS-CoV-2 Omicron and post-Omicron variants were predominant (September 2021–September 2024). Comparing the slopes of the regression segments, we were able to discuss the variation in steepness of the Italian COVID-19 mortality trend, identifying the corresponding growth/decline profile for each season considered. Our findings show that, although weekly COVID-19 mortality followed a declining trend in Italy from the end of 2021 until the end of 2024, COVID-19 deaths rose during all winters and summers of that period. These mortality increases were more pronounced in winters than in summers, with COVID-19 deaths growing on average by 55.75 per week in winters and by 22.90 per week in summers. COVID-19 deaths were, instead, less frequent in the intermediate periods between winters and summers, decreasing on average by 38.01 deaths per week. Our segmented regression model fitted the observed COVID-19 deaths well, as confirmed by the average determination coefficients: 0.74, 0.63 and 0.70 for winters, summers and intermediate periods, respectively. In conclusion, against a generally declining COVID-19 mortality trend in Italy in the period of interest, transient rises in mortality occurred in both winters and summers but received little attention, because they were always compensated by consistent downward drifts during the intermediate periods between winters and summers.
Full article
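Per-season segmented fits of this kind reduce to one ordinary least-squares line per segment. A sketch on synthetic data whose slopes are seeded with the averages reported above (55.75, 22.90, -38.01 deaths per week):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(13.0)  # weeks within one season

# Illustrative weekly death counts (synthetic, not the paper's data), built
# with the average slopes reported in the abstract for each seasonal regime.
series = {
    "winter":       300 + 55.75 * x + rng.normal(0, 40, 13),
    "summer":       200 + 22.90 * x + rng.normal(0, 30, 13),
    "intermediate": 700 - 38.01 * x + rng.normal(0, 35, 13),
}

for season, y in series.items():
    slope, intercept = np.polyfit(x, y, 1)   # one linear segment per season
    r2 = np.corrcoef(x, y)[0, 1] ** 2        # determination coefficient R^2
    print(f"{season:12s} slope={slope:7.2f} R^2={r2:.2f}")
```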
Open Access Article
Lattice Boltzmann Framework for Multiphase Flows by Eulerian–Eulerian Navier–Stokes Equations
by Matteo Maria Piredda and Pietro Asinari
Computation 2025, 13(7), 164; https://doi.org/10.3390/computation13070164 - 9 Jul 2025
Abstract
Although the lattice Boltzmann method (LBM) is relatively straightforward, it demands a well-crafted framework to handle the complex partial differential equations involved in multiphase flow simulations. For the first time to our knowledge, this work proposes a novel LBM framework to solve Eulerian–Eulerian multiphase flow equations without any finite difference correction, covering very large density ratios and a realistic relation for the drag coefficient. The proposed methodology and all reported LBM formulas can be applied in any dimension, which opens a promising avenue for simulating multiphase flows in large High Performance Computing (HPC) facilities and on novel parallel hardware. The framework consists of six coupled LBM schemes, running on the same lattice, ensuring an efficient implementation in large codes with minimum effort. The preliminary numerical results agree excellently with the reference numerical solution obtained by a traditional finite difference solver.
Full article
(This article belongs to the Section Computational Engineering)
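As a point of reference for the method's building block, here is a minimal single-population D2Q9 BGK solver on a periodic lattice; the paper's framework couples six such schemes on one lattice, which this sketch does not attempt:

```python
import numpy as np

# Minimal single-population D2Q9 BGK solver (periodic shear-wave decay).
nx, ny, tau = 64, 64, 0.8
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def feq(rho, u):
    """Second-order equilibrium distribution."""
    cu = np.einsum('qd,dxy->qxy', c, u)
    uu = np.einsum('dxy,dxy->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing='ij')
rho = np.ones((nx, ny))
u = np.zeros((2, nx, ny))
u[0] = 0.05 * np.sin(2 * np.pi * y / ny)       # shear-wave initial condition
f = feq(rho, u)

for step in range(200):
    rho = f.sum(axis=0)                        # density moment
    u = np.einsum('qd,qxy->dxy', c, f) / rho   # momentum moment
    f -= (f - feq(rho, u)) / tau               # BGK collision
    for q in range(9):                         # streaming on the periodic lattice
        f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)

print(u[0].max())  # the wave amplitude decays viscously over time
```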
Open Access Article
Numerical Simulation of Cytokinesis Hydrodynamics
by Andriy A. Avramenko, Igor V. Shevchuk, Andrii I. Tyrinov and Iryna V. Dzevulska
Computation 2025, 13(7), 163; https://doi.org/10.3390/computation13070163 - 8 Jul 2025
Abstract
A hydrodynamic homogeneous model has been developed for the motion of mutually impenetrable viscoelastic non-Newtonian fluids, taking into account surface tension forces. Based on this model, numerical simulations of cytokinesis hydrodynamics were performed. The cytoplasm is treated as a non-Newtonian viscoelastic fluid. The model allows for the calculation of the formation and rupture of the intercellular bridge. Analytical results shed light on the influence of the viscoelastic fluid's relaxation time on cytokinesis dynamics. A comparison of numerical simulation results and experimental data showed satisfactory agreement.
Full article
(This article belongs to the Section Computational Biology)
Open Access Article
Robust Trajectory Tracking Fault-Tolerant Control for Quadrotor UAVs Based on Adaptive Sliding Mode and Fault Estimation
by Yukai Wu, Guobi Ling and Yaoke Shi
Computation 2025, 13(7), 162; https://doi.org/10.3390/computation13070162 - 7 Jul 2025
Abstract
This paper presents a composite disturbance-tolerant control framework for quadrotor unmanned aerial vehicles (UAVs). By constructing an enhanced dynamic model that incorporates parameter uncertainties, external disturbances, and actuator faults, and considering the inherent underactuated and highly coupled characteristics of the UAV, a novel robust adaptive sliding mode controller (RASMC) is designed. The controller adopts a hierarchical adaptive mechanism and utilizes a dual-loop composite adaptive law to achieve the online estimation of system parameters and fault information. Using the Lyapunov method, the asymptotic stability of the closed-loop system is rigorously proven. Simulation results demonstrate that, under the combined effects of external disturbances and actuator faults, the RASMC effectively suppresses position errors (<0.05 m) and attitude errors (<0.02 radians), significantly outperforming traditional ADRC and LQR control methods. Further analysis shows that the proposed adaptive law enables the precise online estimation of aerodynamic coefficients and disturbance boundaries during actual flights, with estimation errors kept within ±10%. Moreover, compared to ADRC and LQR, RASMC reduces the settling time by more than 50% and the tracking overshoot by over 70% while using the ( ) approximation to eliminate chattering. Prototype experiments validate that the method achieves centimeter-level trajectory tracking under real uncertainties, demonstrating the superior performance and robustness of the control framework in complex flight missions.
Full article
(This article belongs to the Section Computational Engineering)
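The RASMC design is not reproduced here, but a generic one-axis adaptive sliding-mode tracker shows the ingredients the abstract names: an adaptive switching gain and a smooth tanh approximation in place of the sign function to suppress chattering. All gains and the disturbance model are illustrative:

```python
import numpy as np

dt, lam, gamma, phi = 1e-3, 4.0, 20.0, 0.05
x, xd, k = 0.0, 0.0, 0.1                        # state and adaptive gain
for i in range(int(10 / dt)):
    t = i * dt
    xr, xrd, xrdd = np.sin(t), np.cos(t), -np.sin(t)  # reference trajectory
    e, ed = x - xr, xd - xrd
    s = ed + lam * e                             # sliding surface
    k += gamma * abs(s) * dt                     # adaptive gain law
    u = xrdd - lam * ed - k * np.tanh(s / phi)   # tanh replaces sign(s)
    d = 0.5 * np.sin(3 * t)                      # unknown bounded disturbance
    xd += (u + d) * dt                           # double-integrator plant x'' = u + d
    x += xd * dt
print(abs(x - np.sin(10.0)))                     # small terminal tracking error
```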
Open Access Article
POTMEC: A Novel Power Optimization Technique for Mobile Edge Computing Networks
by Tamilarasan Ananth Kumar, Rajendirane Rajmohan, Sunday Adeola Ajagbe, Oluwatobi Akinlade and Matthew Olusegun Adigun
Computation 2025, 13(7), 161; https://doi.org/10.3390/computation13070161 - 7 Jul 2025
Abstract
The rapid growth of ultra-dense mobile edge computing (UDEC) in 5G IoT networks has intensified energy inefficiencies and latency bottlenecks, exacerbated by dynamic channel conditions and imperfect CSI in real-world deployments. This paper introduces POTMEC, a power optimization framework that combines a channel-aware adaptive power allocator using real-time SNR measurements, a MATLAB-trained RL model for joint offloading decisions, and a decaying step-size algorithm guaranteeing convergence. Computational offloading is a productive technique for easing mobile battery-life constraints by processing parts of a mobile application in the cloud, and this work investigates how multi-access edge computing can reduce latency and energy usage. The experiments demonstrate that the proposed model reduces transmission energy consumption by 27.5% compared to baseline methods while keeping latency below 15 ms in ultra-dense scenarios. The simulation results confirm 92% accuracy in near-optimal offloading decisions under dynamic channel conditions. This work advances sustainable edge computing by enabling energy-efficient IoT deployments in 5G ultra-dense networks without compromising QoS.
Full article
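A decaying step size is the standard device for making such noisy iterative updates converge. A toy power-control sketch under an assumed random channel (names and model are illustrative, not the POTMEC implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
target_snr_db, p = 10.0, 0.5                # SNR target and initial power (W)
for t in range(1, 201):
    gain_db = rng.normal(5.0, 1.0)          # time-varying channel (toy model)
    snr_db = 10 * np.log10(p) + gain_db     # measured SNR at current power
    step = 0.05 / np.sqrt(t)                # decaying step size ~ 1/sqrt(t)
    p = max(0.01, p - step * (snr_db - target_snr_db))  # shave excess margin
print(p)  # settles near the power that just meets the SNR target
```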
Open Access Article
Successful Management of Public Health Projects Driven by AI in a BANI Environment
by Sergiy Bushuyev, Natalia Bushuyeva, Ivan Nekrasov and Igor Chumachenko
Computation 2025, 13(7), 160; https://doi.org/10.3390/computation13070160 - 4 Jul 2025
Abstract
The management of public health projects in a BANI (brittle, anxious, non-linear, incomprehensible) environment, exemplified by the ongoing war in Ukraine, presents unprecedented challenges due to fragile systems, heightened uncertainty, and complex socio-political dynamics. This study proposes an AI-driven framework to enhance the resilience and effectiveness of public health interventions under such conditions. By integrating a coupled SEIR–Infodemic–Panicdemic Model with war-specific factors, we simulate the interplay of infectious disease spread, misinformation dissemination, and panic dynamics over 1500 days in a Ukrainian city (Kharkiv). The model incorporates time-varying parameters to account for population displacement, healthcare disruptions, and periodic war events, reflecting the evolving conflict context. Sensitivity and risk–opportunity analyses reveal that disease transmission, misinformation, and infrastructure damage significantly exacerbate epidemic peaks, while AI-enabled interventions, such as fact-checking, mental health support, and infrastructure recovery, offer substantial mitigation potential. Qualitative assessments identify technical, organisational, ethical, regulatory, and military risks, alongside opportunities for predictive analytics, automation, and equitable healthcare access. Quantitative simulations demonstrate that risks, like increased displacement, can amplify infectious peaks by up to 28.3%, whereas opportunities, like enhanced fact-checking, can reduce misinformation by 18.2%. These findings provide a roadmap for leveraging AI to navigate BANI environments, offering actionable insights for public health practitioners in Ukraine and other crisis settings. The study underscores AI’s transformative role in fostering adaptive, data-driven strategies to achieve sustainable health outcomes amidst volatility and uncertainty.
Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
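The epidemic core of the coupled model is a SEIR system; a plain sketch with scipy, omitting the infodemic/panicdemic compartments and the war-specific time-varying terms, with all parameter values illustrative:

```python
from scipy.integrate import solve_ivp

beta, sigma, gamma, N = 0.35, 1/5.0, 1/10.0, 1_400_000  # illustrative values

def seir(t, y):
    S, E, I, R = y
    return [-beta*S*I/N, beta*S*I/N - sigma*E, sigma*E - gamma*I, gamma*I]

sol = solve_ivp(seir, (0, 1500), [N - 100, 0, 100, 0], max_step=1.0)
print(sol.y[2].max())  # peak infectious count over the 1500-day horizon
```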
Open Access Article
An Application of Deep Learning Models for the Detection of Cocoa Pods at Different Ripening Stages: An Approach with Faster R-CNN and Mask R-CNN
by Juan Felipe Restrepo-Arias, María José Montoya-Castaño, María Fernanda Moreno-De La Espriella and John W. Branch-Bedoya
Computation 2025, 13(7), 159; https://doi.org/10.3390/computation13070159 - 2 Jul 2025
Abstract
The accurate classification of cocoa pod ripeness is critical for optimizing harvest timing, improving post-harvest processing, and ensuring consistent quality in chocolate production. Traditional ripeness assessment methods are often subjective, labor-intensive, or destructive, highlighting the need for automated, non-invasive solutions. In the context of precision agriculture, this study evaluates the performance of R-CNN-based deep learning models—Faster R-CNN and Mask R-CNN—for the detection and segmentation of cocoa pods across four ripening stages (0–2 months, 2–4 months, 4–6 months, and >6 months) using the publicly accessible RipSetCocoaCNCH12 dataset, which comprises 4116 labeled images collected under real-world field conditions. Initial experiments using pretrained weights and standard configurations on a custom COCO-format dataset yielded promising baseline results. Faster R-CNN achieved a mean average precision (mAP) of 64.15%, while Mask R-CNN reached 60.81%, with the highest per-class precision in mature pods (C4) but weaker detection in early stages (C1). To improve model robustness, the dataset was subsequently augmented and balanced, followed by targeted hyperparameter optimization for both architectures. The refined models were then benchmarked against state-of-the-art YOLOv8 networks (YOLOv8x and YOLOv8l-seg). Results showed that YOLOv8x achieved the highest mAP of 86.36%, outperforming YOLOv8l-seg (83.85%), Mask R-CNN (73.20%), and Faster R-CNN (67.75%) in overall detection accuracy. However, the R-CNN models offered valuable instance-level segmentation insights, particularly in complex backgrounds. Furthermore, a qualitative evaluation using confidence heatmaps and error analysis revealed that the R-CNN architectures occasionally missed small or partially occluded pods. These findings highlight the complementary strengths of region-based and real-time detectors in precision agriculture and emphasize the need for class-specific enhancements and interpretability tools in real-world deployments.
Full article
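Adapting a pretrained Faster R-CNN to a fixed set of ripening-stage classes follows the standard torchvision recipe; a setup sketch (four stage classes plus background), not the authors' exact configuration:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 5  # four ripening-stage classes + background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# Training then follows the usual torchvision loop: model(images, targets)
# returns a dict of losses (classifier, box regression, RPN) to sum and backprop.
```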
Open Access Article
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
by Mahmoud Chaira, Abdelkader Belhenniche and Roman Chertovskih
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments.
Full article
(This article belongs to the Section Computational Engineering)
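The Random Forest baseline is straightforward to reproduce with scikit-learn; in this sketch, X and y stand in for preprocessed CICDDoS2019 flow features and labels, whose loading and cleaning are not shown:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))        # placeholder flow features
y = rng.integers(0, 2, size=1000)      # placeholder benign/attack labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```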
Open Access Article
Numerical Modeling of Electromagnetic Modes in a Planar Stratified Medium with a Graphene Interface
by Eugen Smolkin
Computation 2025, 13(7), 157; https://doi.org/10.3390/computation13070157 - 1 Jul 2025
Abstract
Graphene interfaces in layered dielectrics can support unique electromagnetic modes, but analyzing these modes requires robust computational techniques. This work presents a numerical method for computing TE-polarized eigenmodes in a planar stratified dielectric slab with an infinitesimally thin graphene sheet at its interface. The governing boundary-value problem is reformulated as coupled initial-value problems and solved via a customized shooting method, enabling accurate calculation of complex propagation constants and field profiles despite the discontinuity at the graphene layer. We demonstrate that the graphene significantly alters the modal spectrum, introducing complex leaky and surface waves with attenuation due to graphene’s conductivity. Numerical results illustrate how the layers’ inhomogeneity and the graphene’s surface conductivity influence mode confinement and loss. These findings confirm the robustness of the proposed computational approach and provide insights relevant to the design and analysis of graphene-based waveguiding devices.
Full article
(This article belongs to the Section Computational Engineering)
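A shooting method of this kind integrates the mode equation across the slab and enforces decay on both sides; a simplified scalar sketch for a uniform slab without the graphene sheet (whose conductivity would add a jump condition on E' at the interface), with illustrative parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k0, d, n_core, n_clad = 2*np.pi/1.55, 1.0, 1.5, 1.0   # illustrative parameters

def residual(beta):
    """Integrate E'' = (beta^2 - k0^2 n^2) E across the slab, starting from the
    left decaying solution, and return the mismatch with right-side decay."""
    g = np.sqrt(beta**2 - (k0*n_clad)**2)             # cladding decay rate
    rhs = lambda x, y: [y[1], (beta**2 - (k0*n_core)**2) * y[0]]
    sol = solve_ivp(rhs, (0, d), [1.0, g], rtol=1e-10)
    E, Ep = sol.y[:, -1]
    return Ep + g * E                                 # zero for a guided mode

# bracket roots between the cladding and core light lines, then refine
betas = np.linspace(k0*n_clad*1.001, k0*n_core*0.999, 400)
vals = [residual(b) for b in betas]
for b1, b2, v1, v2 in zip(betas, betas[1:], vals, vals[1:]):
    if v1 * v2 < 0:
        print("TE mode: beta =", brentq(residual, b1, b2))
```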
Open Access Article
Feedback-Based Validation Learning
by Chafik Boulealam, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz and Hamid Tairi
Computation 2025, 13(7), 156; https://doi.org/10.3390/computation13070156 - 1 Jul 2025
Abstract
This paper presents Feedback-Based Validation Learning (FBVL), a novel approach that transforms the role of validation datasets in deep learning. Unlike conventional methods that use validation datasets for post-training performance evaluation, FBVL integrates them into the training process, employing real-time feedback to optimize the model's weight adjustments and thereby enhance prediction accuracy and overall model performance. Importantly, FBVL preserves the integrity of the validation process: prediction outcomes on the validation dataset guide training adjustments without the training step directly accessing the dataset. An empirical study on the Iris dataset, comprising 150 samples from three species of Iris flowers, each characterized by four features, demonstrated FBVL's effectiveness: its implementation surpassed the accuracy of the previous best result by approximately 7.14% and reduced the loss by approximately 49.18% relative to previous methods. Applied to the Multimodal EmotionLines Dataset (MELD), FBVL showcased its wide applicability across datasets and domains, achieving a test-set accuracy of 70.08%, approximately 3.12% above the previous best-reported accuracy, and a test-set micro f1_score of 70.07%, above the previous best-reported value of 67.59%. These results demonstrate that FBVL enhances classification accuracy and model generalization, particularly in scenarios involving small or imbalanced datasets, offering practical benefits for designing more efficient and robust neural network architectures.
Full article
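The abstract leaves the mechanism at a high level; one plausible reading, sketched below and not to be taken as the authors' exact algorithm, is a loop in which the validation score after each epoch rescales the next update, so validation outcomes steer training without validation gradients ever entering the weight update:

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(120, 4)), rng.integers(0, 2, 120)  # stand-in data
Xva, yva = rng.normal(size=(30, 4)), rng.integers(0, 2, 30)
w, lr, prev_val = np.zeros(4), 0.1, np.inf

def loss_grad(X, y, w):
    """Logistic-regression loss and gradient."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss, X.T @ (p - y) / len(y)

for epoch in range(50):
    _, g = loss_grad(Xtr, ytr, w)
    w -= lr * g                                  # ordinary training step
    val, _ = loss_grad(Xva, yva, w)              # validation outcome only
    lr *= 0.5 if val > prev_val else 1.05        # feedback rescales the next step
    prev_val = val
print(prev_val)
```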
Open Access Article
A Mesoscale Particle Method for Simulation of Boundary Slip Phenomena in Fluid Systems
by Alexander E. Filippov, Mikhail Popov and Valentin L. Popov
Computation 2025, 13(7), 155; https://doi.org/10.3390/computation13070155 - 1 Jul 2025
Abstract
The present work aimed to develop a simple simulation tool to support studies of slip and other non-traditional boundary conditions in solid–fluid interactions. A mesoscale particle model (movable automata) was chosen to enable performant simulation of all relevant aspects of the system, including phase changes, plastic deformation and flow, interface phenomena, turbulence, etc. The physical system under study comprised two atomically flat surfaces composed of particles of different sizes and separated by a model fluid formed by moving particles with repulsing cores of different sizes and long-range attraction. The resulting simulation method was tested under a variety of particle densities and conditions. It was shown that the particles can enter different (solid, liquid, and gaseous) states, depending on the effective temperature (kinetic energy caused by surface motion and random noise generated by spatially distributed Langevin sources). The local order parameter and the formation of solid domains were studied for systems with varying density. Heating the region close to one of the plates could change the density of the liquid in its proximity and resulted in chaotization (turbulence); it also dramatically changed the system configuration and the direction of the average flow, and reduced the effective friction force.
Full article
(This article belongs to the Section Computational Engineering)
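The particle update behind such movable-automata models is a Langevin integration over pairwise forces with a repulsive core and longer-range attraction. A compact 2D sketch with illustrative force shape, cutoff, and units:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, dt, gamma, kT = 100, 20.0, 1e-3, 1.0, 0.1
g = int(np.ceil(np.sqrt(n)))                      # start on a loose lattice
pos = np.stack([(np.arange(n) % g + 0.5) * L / g,
                (np.arange(n) // g + 0.5) * L / g], axis=1)
vel = np.zeros((n, 2))

def forces(pos):
    """Pairwise LJ-like force: repulsive core plus longer-range attraction."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                      # periodic box
    r = np.linalg.norm(d, axis=-1) + np.eye(n)    # avoid self-division
    mag = 24 * (2 / r**13 - 1 / r**7)
    mag[r > 3.0] = 0.0                            # interaction cutoff
    np.fill_diagonal(mag, 0.0)
    return (mag[..., None] * d / r[..., None]).sum(axis=1)

for _ in range(1000):                             # Euler-Maruyama Langevin steps
    noise = rng.normal(0, np.sqrt(2 * gamma * kT / dt), (n, 2))
    vel += (forces(pos) - gamma * vel + noise) * dt
    pos = (pos + vel * dt) % L
print(vel.var())  # kinetic temperature proxy at the chosen noise level
```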
Open Access Article
Algorithmic Analysis of Queuing System with Varying Number of Servers, Phase-Type Service Time Distribution, and Changeable Arrival Process Depending on Random Environment
by Alexander Dudin, Olga Dudina and Sergei Dudin
Computation 2025, 13(7), 154; https://doi.org/10.3390/computation13070154 - 29 Jun 2025
Abstract
An -type queuing system functioning within a finite-state Markovian random environment is studied. The random environment's state impacts the number of available servers, the underlying processes of customer arrivals and service, and the impatience rate of customers. The novelty of the model lies in the environment's impact on the state space of the underlying arrival processes and in the service time distribution, which is more general than exponential. The behavior of the system is described by a multidimensional Markov chain that belongs to the class of level-independent quasi-birth-and-death processes or of asymptotically quasi-Toeplitz Markov chains, depending on whether the customers are absolutely patient in all states of the random environment or impatient in at least one state. Using the tools of the corresponding processes or chains, a stationary analysis of the system is implemented. In particular, it is shown that the system is always ergodic if customers are impatient in at least one state of the random environment. Expressions for the computation of the basic performance measures of the system are presented, and examples of their computation for a system with three states of the random environment are presented as 3-D surfaces. The results can be useful for the analysis of a variety of real-world systems with parameters that may randomly change during operation. In particular, they can be used to optimally match the number of active servers and the bandwidth of the transmission channels to the current arrival rate, and vice versa.
Full article
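The workhorse behind such performance measures is Markov-chain steady-state analysis; in miniature, the stationary distribution of a finite CTMC follows from its generator Q by solving pi Q = 0 with the probabilities summing to one. The 3-state environment generator below is illustrative, not the paper's chain:

```python
import numpy as np

Q = np.array([[-0.5,  0.3,  0.2],    # illustrative 3-state environment generator
              [ 0.4, -0.9,  0.5],
              [ 0.1,  0.6, -0.7]])

A = np.vstack([Q.T, np.ones(3)])     # pi Q = 0 plus the normalization row
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                            # stationary environment probabilities
```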
Open Access Article
A New Approach to Topology Optimization with Genetic Algorithm and Parameterization Level Set Function
by Igor Pehnec, Damir Sedlar, Ivo Marinic-Kragic and Damir Vučina
Computation 2025, 13(7), 153; https://doi.org/10.3390/computation13070153 - 26 Jun 2025
Abstract
In this paper, a new approach to topology optimization using the parameterized level set function and genetic algorithm optimization methods is presented. The impact of a number of parameters describing the level set function in the representation of the model was examined. Using the B-spline interpolation function, the number of variables describing the level set function was decreased, enabling the application of evolutionary methods (genetic algorithms) in the topology optimization process. The traditional level set method is performed by using the Hamilton–Jacobi transport equation, which implies the use of gradient optimization methods that are prone to becoming stuck in local minima. Furthermore, the resulting optimal shapes are strongly dependent on the initial solution. The proposed topology optimization procedure, written in MATLAB R2013b, utilizes a genetic algorithm for global optimization, enabling it to locate the global optimum efficiently. To assess the acceleration and convergence capabilities of the proposed topology optimization method, a new genetic algorithm penalty operator was tested. This operator addresses the slow convergence issue typically encountered when the genetic algorithm optimization procedure nears a solution. By penalizing similar individuals within a population, the method aims to enhance convergence speed and overall performance. In complex examples (3D), the method can also function as a generator of good initial solutions for faster topology optimization methods (e.g., level set) that rely on such initial solutions. Both the proposed method and the traditional methods have their own advantages and limitations. The main advantage is that the proposed method is a global search method. This makes it robust against entrapment in local minima and independent of the initial solution. It is important to note that this evolutionary approach does not necessarily perform better in terms of convergence speed compared to gradient-based or other local optimization methods. However, once the global optimum has been found using the genetic algorithm, convergence can be accelerated using a faster local method such as gradient-based optimization. The application and usefulness of the method were tested on typical 2D cantilever beams and Michell beams.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
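Decoding a GA individual into a shape via a B-spline-parameterized level set can be sketched directly: the genes are the control values, the spline gives a smooth phi(x, y), and material is placed where phi > 0. Grid sizes and the control net below are illustrative:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

rng = np.random.default_rng(0)
cx, cy = np.linspace(0, 2, 8), np.linspace(0, 1, 4)
genes = rng.uniform(-1, 1, (8, 4))           # one GA individual: control values

phi = RectBivariateSpline(cx, cy, genes, kx=3, ky=3)  # smooth level set phi(x, y)
X, Y = np.linspace(0, 2, 160), np.linspace(0, 1, 80)
density = (phi(X, Y) > 0).astype(float)      # material where the level set is positive
print(density.mean())                        # volume fraction of the decoded design
```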
Open Access Review
Optimization of Rock-Cutting Tools: Improvements in Structural Design and Process Efficiency
by Yuecao Cao, Qiang Zhang, Shucheng Zhang, Ying Tian, Xiangwei Dong, Xiaojun Song and Dongxiang Wang
Computation 2025, 13(7), 152; https://doi.org/10.3390/computation13070152 - 23 Jun 2025
Abstract
Rock-breaking cutters are critical components in tunneling, mining, and drilling operations, where efficiency, durability, and energy consumption are paramount. Traditional cutter designs and empirical process optimization methods often fail to address the dynamic interaction between heterogeneous rock masses and tool structures, leading to premature wear, high specific energy, and suboptimal performance. Topology optimization, as an advanced computational design method, offers transformative potential for lightweight, high-strength cutter structures and adaptive cutting process control. This review systematically examines recent advancements in topology-optimized cutter design and its integration with rock-cutting mechanics. The structural innovations in cutter geometry and materials are analyzed, emphasizing solutions for stress distribution, wear/fatigue resistance, and dynamic load adaptation. The numerical methods for modeling rock–tool interactions are introduced, including discrete element method (DEM) simulations, smoothed particle hydrodynamics (SPH) methods, and machine learning (ML)-enhanced predictive models. The cutting process optimization strategies that leverage topology optimization to balance objectives such as energy efficiency, chip formation control, and tool lifespan are evaluated.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Open Access Article
English-Arabic Hybrid Semantic Text Chunking Based on Fine-Tuning BERT
by Mai Alammar, Khalil El Hindi and Hend Al-Khalifa
Computation 2025, 13(6), 151; https://doi.org/10.3390/computation13060151 - 16 Jun 2025
Abstract
Semantic text chunking refers to segmenting text into coherent semantic chunks, i.e., into sets of statements that are semantically related. Semantic chunking is an essential pre-processing step in various NLP tasks, e.g., document summarization, sentiment analysis, and question answering. In this paper, we propose a hybrid, two-step semantic text chunking method that combines the effectiveness of unsupervised semantic chunking based on similarities between sentence embeddings with pre-trained language models (PLMs), especially BERT, by fine-tuning BERT on the semantic textual similarity (STS) task to provide flexible and effective semantic text chunking. We evaluated the proposed method in English and Arabic. To the best of our knowledge, no Arabic dataset exists for assessing semantic text chunking at this level; we therefore created AraWiki50k, inspired by an existing English dataset, to evaluate our proposed method. Our experiments showed that exploiting the fine-tuned pre-trained BERT on STS enhances results over unsupervised semantic chunking by an average of 7.4 in the PK metric and 11.19 in the WindowDiff metric on four English evaluation datasets, and by 0.12 in PK and 2.29 in WindowDiff on the Arabic dataset.
Full article
(This article belongs to the Section Computational Social Science)
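The first, unsupervised stage as described splits text where the similarity of adjacent sentence embeddings drops below a threshold. A sketch, with an assumed encoder and threshold (the paper's second stage swaps in BERT fine-tuned on STS):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")        # assumed encoder
sentences = ["Cats are small felines.", "They purr when content.",
             "GDP measures economic output.", "It is reported quarterly."]
emb = model.encode(sentences, normalize_embeddings=True)
sims = (emb[:-1] * emb[1:]).sum(axis=1)                # cosine of adjacent pairs

chunks, start = [], 0
for i, s in enumerate(sims, start=1):
    if s < 0.3:                                        # boundary where cohesion drops
        chunks.append(sentences[start:i])
        start = i
chunks.append(sentences[start:])
print(chunks)                                          # two topical chunks expected
```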
Open Access Article
Computation of Transient and Steady-State Characteristics of Queueing Systems with Different Types of Customer
by Alexander Zeifman, Yacov Satin, Ilia Usov and Janos Sztrik
Computation 2025, 13(6), 150; https://doi.org/10.3390/computation13060150 - 14 Jun 2025
Abstract
This paper deals with queueing models in which the number of customers is described by an (in general, inhomogeneous) birth–death process. Depending on the choice of the intensities of customer arrival and service, the system exhibits either impatience (as the queue length increases, arrival intensities decrease and service intensities increase) or attraction (on the contrary, as the queue length increases, arrival intensities increase and service intensities decrease). In this article, various types of such models are considered, and their transient and limiting characteristics are computed. Furthermore, the rate of convergence and related bounds are also dealt with. Several numerical examples illustrate the proposed procedures.
Full article
(This article belongs to the Section Computational Engineering)
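Transient characteristics of such birth–death models come from the Kolmogorov forward equations, p'(t) = p(t)Q. A sketch for a small chain with impatience-style rates (arrivals slow and service speeds up with queue length); the rate formulas are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 20
lam = 2.0 / (1 + np.arange(N))           # arrival intensity falls with queue length
mu = 1.0 + 0.2 * np.arange(1, N + 1)     # service intensity grows with queue length
Q = np.zeros((N + 1, N + 1))
for k in range(N):
    Q[k, k + 1] = lam[k]
    Q[k + 1, k] = mu[k]
np.fill_diagonal(Q, -Q.sum(axis=1))      # generator rows sum to zero

p0 = np.zeros(N + 1)
p0[0] = 1.0                              # start with an empty system
sol = solve_ivp(lambda t, p: p @ Q, (0, 50), p0, rtol=1e-8)
print(sol.y[:, -1] @ np.arange(N + 1))   # mean queue length near steady state
```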
Topics
Topic in Applied Sciences, Computation, Entropy, J. Imaging, Optics
Color Image Processing: Models and Methods (CIP: MM)
Topic Editors: Giuliana Ramella, Isabella Torcicollo
Deadline: 30 July 2025
Topic in Algorithms, Computation, Mathematics, Molecules, Symmetry, Nanomaterials, Materials
Advances in Computational Materials Sciences
Topic Editors: Cuiying Jian, Aleksander Czekanski
Deadline: 30 September 2025
Topic in AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran
Deadline: 28 February 2026
Topic in Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 30 May 2026
Special Issues
Special Issue in Computation
Application of Biomechanical Modeling and Simulation
Guest Editor: Luis Pastor Sánchez-Fernández
Deadline: 31 July 2025
Special Issue in Computation
Computational Approaches for Manufacturing
Guest Editor: Lichao Fang
Deadline: 30 September 2025
Special Issue in Computation
Applications of Intelligent Computing and Modeling in Construction Engineering
Guest Editors: Jerzy Rosłon, Michał Podolski, Bartłomiej Sroka
Deadline: 30 September 2025
Special Issue in Computation
Feature Papers in Computational Chemistry
Guest Editors: Alexander Novikov, Felipe Fantuzzi
Deadline: 30 September 2025