Journal Description
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.7 days after submission; acceptance to publication takes 5.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments.
Full article
(This article belongs to the Section Computational Engineering)
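Among the evaluated detectors, the simplest to illustrate is K-Nearest Neighbors. The sketch below is a hedged, stdlib-only toy: the flow features, numbers, and labels are invented for illustration and bear no relation to CICDDoS2019 or the paper's actual pipeline.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(x, p), label)
                   for p, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Invented toy flow features: (packets/s, mean packet size); 1 = DDoS-like.
train_X = [(10, 500), (12, 480), (900, 60), (950, 64), (15, 520), (880, 58)]
train_y = [0, 0, 1, 1, 0, 1]
print(knn_predict(train_X, train_y, (905, 62)))  # → 1
```

A high-packet-rate, small-packet query lands among the attack-like points and is voted class 1.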
Open Access Article
Numerical Modeling of Electromagnetic Modes in a Planar Stratified Medium with a Graphene Interface
by
Eugen Smolkin
Computation 2025, 13(7), 157; https://doi.org/10.3390/computation13070157 - 1 Jul 2025
Abstract
Graphene interfaces in layered dielectrics can support unique electromagnetic modes, but analyzing these modes requires robust computational techniques. This work presents a numerical method for computing TE-polarized eigenmodes in a planar stratified dielectric slab with an infinitesimally thin graphene sheet at its interface. The governing boundary-value problem is reformulated as coupled initial-value problems and solved via a customized shooting method, enabling accurate calculation of complex propagation constants and field profiles despite the discontinuity at the graphene layer. We demonstrate that the graphene significantly alters the modal spectrum, introducing complex leaky and surface waves with attenuation due to graphene’s conductivity. Numerical results illustrate how the layers’ inhomogeneity and the graphene’s surface conductivity influence mode confinement and loss. These findings confirm the robustness of the proposed computational approach and provide insights relevant to the design and analysis of graphene-based waveguiding devices.
Full article
(This article belongs to the Section Computational Engineering)
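The shooting method described above (recasting the boundary-value problem as initial-value integrations and searching for a propagation parameter that zeroes the far boundary residual) can be sketched on a toy scalar problem. The homogeneous slab, real-valued k, and absence of the graphene conductivity jump are all simplifications for illustration; the exact fundamental value for this toy case is k = π.

```python
def shoot(k, n=1000):
    """Integrate y'' = -k^2 y across a unit-thickness slab with
    y(0) = 0, y'(0) = 1 using semi-implicit Euler."""
    h = 1.0 / n
    y, yp = 0.0, 1.0
    for _ in range(n):
        yp -= k * k * y * h
        y += yp * h
    return y  # boundary residual; an eigenmode satisfies y(1) = 0

def find_mode(k_lo, k_hi, tol=1e-10):
    """Bisect on k until the boundary residual changes sign within tol."""
    f_lo = shoot(k_lo)
    while k_hi - k_lo > tol:
        mid = 0.5 * (k_lo + k_hi)
        if f_lo * shoot(mid) <= 0:
            k_hi = mid
        else:
            k_lo, f_lo = mid, shoot(mid)
    return 0.5 * (k_lo + k_hi)

k1 = find_mode(3.0, 3.3)
print(k1)  # fundamental mode; exact value is pi for this toy problem
```

The paper's actual method must additionally impose the conductivity-induced jump in y' at the graphene interface and track complex-valued propagation constants, which this real-valued bisection cannot do.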
Open Access Article
Feedback-Based Validation Learning
by
Chafik Boulealam, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz and Hamid Tairi
Computation 2025, 13(7), 156; https://doi.org/10.3390/computation13070156 - 1 Jul 2025
Abstract
This paper presents Feedback-Based Validation Learning (FBVL), a novel approach that transforms the role of validation datasets in deep learning. Unlike conventional methods that utilize validation datasets for performance evaluation post-training, FBVL integrates these datasets into the training process. It employs real-time feedback to optimize the model’s weight adjustments, enhancing prediction accuracy and overall model performance. Importantly, FBVL preserves the integrity of the validation process by using prediction outcomes on the validation dataset to guide training adjustments, without directly accessing the dataset. Our empirical study conducted using the Iris dataset demonstrated the effectiveness of FBVL. The Iris dataset, comprising 150 samples from three species of Iris flowers, each characterized by four features, served as an ideal testbed for demonstrating FBVL’s effectiveness. The implementation of FBVL led to substantial performance improvements, surpassing the accuracy of the previous best result by approximately 7.14% and achieving a loss reduction greater than the previous methods by approximately 49.18%. When FBVL was applied to the Multimodal EmotionLines Dataset (MELD), it showcased its wide applicability across various datasets and domains. The model achieved a test-set accuracy of 70.08%, surpassing the previous best-reported accuracy by approximately 3.12%. These remarkable results underscore FBVL’s ability to optimize performance on established datasets and its capacity to minimize loss. Using our FBVL method, we achieved a test set f1_score micro of 70.07%, which is higher than the previous best-reported value for f1_score micro of 67.59%. These results demonstrate that FBVL enhances classification accuracy and model generalization, particularly in scenarios involving small or imbalanced datasets, offering practical benefits for designing more efficient and robust neural network architectures.
Full article
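The core FBVL idea, letting the prediction outcome on the validation set steer training without training on validation labels directly, can be loosely illustrated with a stdlib sketch. The task, model, and the specific learning-rate feedback rule below are invented stand-ins, not the authors' architecture.

```python
import random

random.seed(0)

# Synthetic 1-D regression task: y = 2x + 1 plus small noise.
data = [(i / 100, 2 * i / 100 + 1 + random.gauss(0, 0.05))
        for i in range(100)]
train, val = data[::2], data[1::2]

def mse(w, b, pts):
    return sum((w * x + b - y) ** 2 for x, y in pts) / len(pts)

w, b, lr = 0.0, 0.0, 0.1
prev_val = mse(w, b, val)
for epoch in range(200):
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb
    # Feedback step: only the prediction *outcome* on the validation set
    # (its loss) steers the optimizer; validation labels are never
    # trained on directly.
    cur_val = mse(w, b, val)
    lr = lr * 0.5 if cur_val > prev_val else min(lr * 1.05, 0.5)
    prev_val = cur_val

print(round(w, 2), round(b, 2))  # approaches the true 2.0 and 1.0
```

Here worsening validation loss halves the step size while improvement cautiously enlarges it; FBVL's actual weight-adjustment mechanism is richer than this scalar rule.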

Open Access Article
A Mesoscale Particle Method for Simulation of Boundary Slip Phenomena in Fluid Systems
by
Alexander E. Filippov, Mikhail Popov and Valentin L. Popov
Computation 2025, 13(7), 155; https://doi.org/10.3390/computation13070155 - 1 Jul 2025
Abstract
The present work aimed to develop a simple simulation tool to support studies of slip and other non-traditional boundary conditions in solid–fluid interactions. A mesoscale particle model (movable automata) was chosen to enable performant simulation of all relevant aspects of the system, including phase changes, plastic deformation and flow, interface phenomena, turbulence, etc. The physical system under study comprised two atomically flat surfaces composed of particles of different sizes and separated by a model fluid formed by moving particles with repulsing cores of different sizes and long-range attraction. The resulting simulation method was tested under a variety of particle densities and conditions. It was shown that the particles can enter different (solid, liquid, and gaseous) states, depending on the effective temperature (kinetic energy caused by surface motion and random noise generated by spatially distributed Langevin sources). The local order parameter and formation of solid domains were studied for systems with varying density. Heating of the region close to one of the plates could change the density of the liquid in its proximity and resulted in chaotization (turbulence); it also dramatically changed the system configuration and the direction of the average flow, and reduced the effective friction force.
Full article
(This article belongs to the Section Computational Engineering)
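The "spatially distributed Langevin sources" mentioned above set an effective temperature. A minimal stdlib sketch of the principle, for a single overdamped particle in a harmonic trap rather than the paper's interacting automata, shows how the noise strength controls the equilibrium spread (equipartition):

```python
import math
import random

random.seed(1)

def langevin_trajectory(steps, dt, gamma, temp):
    """Overdamped Langevin dynamics in a unit harmonic trap:
    dx = -(x/gamma) dt + sqrt(2*temp/gamma) dW  (spring k = k_B = 1)."""
    x, xs = 0.0, []
    for _ in range(steps):
        x += -(x / gamma) * dt \
             + math.sqrt(2 * temp * dt / gamma) * random.gauss(0, 1)
        xs.append(x)
    return xs

cold = langevin_trajectory(20000, 0.01, 1.0, temp=0.1)
hot = langevin_trajectory(20000, 0.01, 1.0, temp=1.0)
var_cold = sum(v * v for v in cold) / len(cold)
var_hot = sum(v * v for v in hot) / len(hot)
print(round(var_cold, 3), round(var_hot, 3))  # equipartition: var ≈ temp
```

The hotter source produces roughly ten times the positional variance, the same mechanism by which local heating near a plate changes the fluid state in the simulations above.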
Open Access Article
Algorithmic Analysis of Queuing System with Varying Number of Servers, Phase-Type Service Time Distribution, and Changeable Arrival Process Depending on Random Environment
by
Alexander Dudin, Olga Dudina and Sergei Dudin
Computation 2025, 13(7), 154; https://doi.org/10.3390/computation13070154 - 29 Jun 2025
Abstract
An -type queuing system functioning within a finite-state Markovian random environment is studied. The random environment’s state impacts the number of available servers, the underlying processes of customer arrivals and service, and the impatience rate of customers. The impact on the state space of the underlying processes of customer arrivals and of the more general, as compared to exponential, service time distribution defines the novelty of the model. The behavior of the system is described by a multidimensional Markov chain that belongs to the classes of the level-independent quasi-birth-and-death processes or asymptotically quasi-Toeplitz Markov chains, depending on whether the customers are absolutely patient in all states of the random environment or impatient in at least one state of the random environment. Using the tools of the corresponding processes or chains, a stationary analysis of the system is implemented. In particular, it is shown that the system is always ergodic if customers are impatient in at least one state of the random environment. Expressions for the computation of the basic performance measures of the system are presented. Examples of their computation for the system with three states of the random environment are presented as 3-D surfaces. The results can be useful for the analysis of a variety of real-world systems with parameters that may randomly change during system operation. In particular, they can be used for optimally matching the number of active servers and the bandwidth used by the transmission channels to the current rate of arrivals, and vice versa.
Full article
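The stabilizing role of impatience noted above is easy to see in a far simpler, single-environment relative of the model: a textbook M/M/c queue with exponentially impatient waiting customers (Erlang-A). This stdlib sketch is not the paper's MAP-arrival, phase-type model; all rates are invented for illustration.

```python
def stationary_dist(lam, mu, c, theta, n_max=200):
    """Stationary distribution of an M/M/c queue with exponentially
    impatient waiting customers (Erlang-A), truncated at n_max.
    Death rate in state n: min(n, c)*mu + max(0, n - c)*theta."""
    pi = [1.0]
    for n in range(n_max):
        death = min(n + 1, c) * mu + max(0, n + 1 - c) * theta
        pi.append(pi[-1] * lam / death)
    z = sum(pi)
    return [p / z for p in pi]

# Three servers, overloaded arrivals (lam > c*mu); impatience keeps
# the chain ergodic anyway, mirroring the paper's ergodicity result.
pi = stationary_dist(lam=4.0, mu=1.0, c=3, theta=0.5)
mean_queue = sum(n * p for n, p in enumerate(pi))
print(round(pi[0], 4), round(mean_queue, 3))
```

Even though the arrival rate exceeds the total service capacity, abandonment at rate theta per waiting customer bounds the queue, so a proper stationary distribution exists.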

Open Access Article
A New Approach to Topology Optimization with Genetic Algorithm and Parameterization Level Set Function
by
Igor Pehnec, Damir Sedlar, Ivo Marinic-Kragic and Damir Vučina
Computation 2025, 13(7), 153; https://doi.org/10.3390/computation13070153 - 26 Jun 2025
Abstract
In this paper, a new approach to topology optimization using the parameterized level set function and genetic algorithm optimization methods is presented. The impact of a number of parameters describing the level set function in the representation of the model was examined. Using the B-spline interpolation function, the number of variables describing the level set function was decreased, enabling the application of evolutionary methods (genetic algorithms) in the topology optimization process. The traditional level set method is performed by using the Hamilton–Jacobi transport equation, which implies the use of gradient optimization methods that are prone to becoming stuck in local minima. Furthermore, the resulting optimal shapes are strongly dependent on the initial solution. The proposed topology optimization procedure, written in MATLAB R2013b, utilizes a genetic algorithm for global optimization, enabling it to locate the global optimum efficiently. To assess the acceleration and convergence capabilities of the proposed topology optimization method, a new genetic algorithm penalty operator was tested. This operator addresses the slow convergence issue typically encountered when the genetic algorithm optimization procedure nears a solution. By penalizing similar individuals within a population, the method aims to enhance convergence speed and overall performance. In complex examples (3D), the method can also function as a generator of good initial solutions for faster topology optimization methods (e.g., level set) that rely on such initial solutions. Both the proposed method and the traditional methods have their own advantages and limitations. The main advantage is that the proposed method is a global search method. This makes it robust against entrapment in local minima and independent of the initial solution. 
It is important to note that this evolutionary approach does not necessarily perform better in terms of convergence speed compared to gradient-based or other local optimization methods. However, once the global optimum has been found using the genetic algorithm, convergence can be accelerated using a faster local method such as gradient-based optimization. The application and usefulness of the method were tested on typical 2D cantilever beams and Michell beams.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
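The similarity-penalty operator tested above can be sketched in miniature on a 1-D multimodal function. This is a toy GA, not the paper's level-set parameterization: the penalty radius, mutation scales, and test function are all invented for illustration.

```python
import math
import random

random.seed(42)

def fitness(x):
    # Multimodal test function; global maximum 1.0 at x = 0, with
    # lower local maxima near x = ±2.09, ±4.19, ...
    return math.cos(3 * x) * math.exp(-0.1 * x * x)

def penalized(pop, radius=0.05, factor=0.5):
    """Similarity penalty operator: an individual close to a fitter
    one has its fitness scaled down, discouraging crowding so the
    population keeps exploring instead of stalling near convergence."""
    scored = sorted(((fitness(x), x) for x in pop), reverse=True)
    out = []
    for i, (f, x) in enumerate(scored):
        if any(abs(x - other) < radius for _, other in scored[:i]):
            f *= factor
        out.append((f, x))
    out.sort(reverse=True)
    return out

pop = [random.uniform(-4, 4) for _ in range(30)]
for _ in range(60):
    scored = penalized(pop)
    parents = [x for _, x in scored[:10]]          # truncation selection
    pop = parents + [
        random.choice(parents)
        + random.gauss(0, 0.3 if random.random() < 0.8 else 2.0)
        for _ in range(20)                          # two-scale mutation
    ]

best = max(pop, key=fitness)
print(round(best, 3), round(fitness(best), 3))
```

Because the fittest individual is never penalized it always survives selection, while near-duplicates of it are discounted, which is the diversity-preserving effect the operator is meant to provide.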
Open Access Review
Optimization of Rock-Cutting Tools: Improvements in Structural Design and Process Efficiency
by
Yuecao Cao, Qiang Zhang, Shucheng Zhang, Ying Tian, Xiangwei Dong, Xiaojun Song and Dongxiang Wang
Computation 2025, 13(7), 152; https://doi.org/10.3390/computation13070152 - 23 Jun 2025
Abstract
Rock-breaking cutters are critical components in tunneling, mining, and drilling operations, where efficiency, durability, and energy consumption are paramount. Traditional cutter designs and empirical process optimization methods often fail to address the dynamic interaction between heterogeneous rock masses and tool structures, leading to premature wear, high specific energy, and suboptimal performance. Topology optimization, as an advanced computational design method, offers transformative potential for lightweight, high-strength cutter structures and adaptive cutting process control. This review systematically examines recent advancements in topology-optimized cutter design and its integration with rock-cutting mechanics. The structural innovations in cutter geometry and materials are analyzed, emphasizing solutions for stress distribution, wear/fatigue resistance, and dynamic load adaptation. The numerical methods for modeling rock–tool interactions are introduced, including discrete element method (DEM) simulations, smoothed particle hydrodynamics (SPH) methods, and machine learning (ML)-enhanced predictive models. The cutting process optimization strategies that leverage topology optimization to balance objectives such as energy efficiency, chip formation control, and tool lifespan are evaluated.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Open Access Article
English-Arabic Hybrid Semantic Text Chunking Based on Fine-Tuning BERT
by
Mai Alammar, Khalil El Hindi and Hend Al-Khalifa
Computation 2025, 13(6), 151; https://doi.org/10.3390/computation13060151 - 16 Jun 2025
Abstract
Semantic text chunking refers to segmenting text into coherent semantic chunks, i.e., into sets of statements that are semantically related. Semantic chunking is an essential pre-processing step in various NLP tasks, e.g., document summarization, sentiment analysis, and question answering. In this paper, we propose a hybrid, two-step semantic text chunking method that combines the effectiveness of unsupervised semantic chunking, based on the similarities between sentence embeddings, with pre-trained language models (PLMs), especially BERT, fine-tuned on the semantic textual similarity (STS) task, to provide flexible and effective semantic text chunking. We evaluated the proposed method in English and Arabic. To the best of our knowledge, no Arabic dataset exists for assessing semantic text chunking at this level; therefore, inspired by an existing English dataset, we created AraWiki50k to evaluate our proposed text chunking method. Our experiments showed that exploiting the fine-tuned pre-trained BERT on STS enhances results over unsupervised semantic chunking by an average of 7.4 in the PK metric and 11.19 in the WindowDiff metric on four English evaluation datasets, and by 0.12 in PK and 2.29 in WindowDiff for the Arabic dataset.
Full article
(This article belongs to the Section Computational Social Science)
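The unsupervised first step, splitting wherever adjacent-sentence similarity drops, can be sketched with stdlib code. The bag-of-words "embedding" below is a deliberate stand-in for the sentence embeddings and fine-tuned BERT scorer the paper actually uses; sentences and threshold are invented.

```python
import math
from collections import Counter

def embed(sentence):
    """Toy bag-of-words vector; a stand-in for real sentence embeddings."""
    return Counter(sentence.lower().replace(".", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(sentences, threshold=0.2):
    """Start a new chunk wherever the similarity between adjacent
    sentences drops below threshold."""
    chunks = [[sentences[0]]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(cur)) < threshold:
            chunks.append([])
        chunks[-1].append(cur)
    return chunks

doc = [
    "The cat sat on the mat.",
    "The cat chased the mouse.",
    "Stock markets fell sharply today.",
    "Stock investors sold amid the falling markets.",
]
print([len(c) for c in chunk(doc)])  # → [2, 2]
```

The topic shift between sentences two and three produces a near-zero similarity and hence a boundary; the paper's second step would then re-score such candidate boundaries with BERT fine-tuned on STS.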
Open Access Article
Computation of Transient and Steady-State Characteristics of Queueing Systems with Different Types of Customer
by
Alexander Zeifman, Yacov Satin, Ilia Usov and Janos Sztrik
Computation 2025, 13(6), 150; https://doi.org/10.3390/computation13060150 - 14 Jun 2025
Abstract
This paper deals with queueing models in which the number of customers is described by an (in general, inhomogeneous) birth–death process. Depending on the choice of intensities for the arrival and service of customers, the system can exhibit either impatience (in which, as the queue length increases, the arrival intensities decrease and the service intensities increase) or attraction (in which, on the contrary, as the queue length increases, the arrival intensities increase and the service intensities decrease). In this article, various types of such models are considered, and their transient and limiting characteristics are computed. Furthermore, the rate of convergence and related bounds are also dealt with. Several numerical examples illustrate the proposed procedures.
Full article
(This article belongs to the Section Computational Engineering)
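Transient characteristics of a finite birth–death chain can be sketched by direct Euler integration of the forward Kolmogorov equations. The "impatience-style" rates below (arrival intensity decaying with queue length) are chosen for illustration because the resulting stationary law is known exactly, Poisson with mean lam/mu; this is not one of the paper's specific models.

```python
def transient_probs(birth, death, p0, t_end, dt=1e-3):
    """Euler integration of the forward Kolmogorov equations
    p_n'(t) = birth(n-1)*p_{n-1} + death(n+1)*p_{n+1}
              - (birth(n) + death(n))*p_n
    on the finite state space {0, ..., N}."""
    p = list(p0)
    last = len(p) - 1
    for _ in range(int(t_end / dt)):
        q = p[:]
        for n in range(last + 1):
            flow = 0.0
            if n > 0:
                flow += birth(n - 1) * q[n - 1] - death(n) * q[n]
            if n < last:
                flow += death(n + 1) * q[n + 1] - birth(n) * q[n]
            p[n] += dt * flow
    return p

# Impatience-style rates: arrivals are discouraged as the queue grows,
# so the stationary distribution is Poisson(lam/mu), mean 1 here.
lam, mu, N = 1.0, 1.0, 30
p = transient_probs(lambda n: lam / (n + 1), lambda n: mu,
                    [1.0] + [0.0] * N, t_end=20.0)
mean_len = sum(n * pn for n, pn in enumerate(p))
print(round(sum(p), 6), round(mean_len, 3))  # mass stays 1, mean near 1
```

By t = 20 the transient solution has essentially reached the known stationary mean, which is one simple way to cross-check rate-of-convergence bounds numerically.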
Open Access Article
Application of AI in Date Fruit Detection—Performance Analysis of YOLO and Faster R-CNN Models
by
Seweryn Lipiński, Szymon Sadkowski and Paweł Chwietczuk
Computation 2025, 13(6), 149; https://doi.org/10.3390/computation13060149 - 13 Jun 2025
Abstract
The presented study evaluates and compares two deep learning models, YOLOv8n and Faster R-CNN, for automated detection of date fruits in natural orchard environments. Both models were trained and tested using a publicly available annotated dataset. YOLO, a single-stage detector, achieved a mAP@0.5 of 0.942 with a training time of approximately 2 h. It demonstrated strong generalization, especially in simpler conditions, and is well suited for real-time applications due to its speed and lower computational requirements. Faster R-CNN, a two-stage detector using a ResNet-50 backbone, reached comparable accuracy (mAP@0.5 = 0.94) with slightly higher precision and recall. However, its training required significantly more time (approximately 19 h) and resources. Analysis of deep learning metrics confirmed that both models performed reliably, with YOLO favoring inference speed and Faster R-CNN offering improved robustness under occlusion and variable lighting. Practical recommendations are provided for model selection based on application needs: YOLO for mobile or field robotics and Faster R-CNN for high-accuracy offline tasks. Additional conclusions highlight the benefits of GPU acceleration and high-resolution inputs. The study contributes to the growing body of research on AI deployment in precision agriculture and provides insights into the development of intelligent harvesting and crop monitoring systems.
Full article
(This article belongs to the Section Computational Engineering)
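The mAP@0.5 figures quoted above rest on an IoU-thresholded match between predicted and ground-truth boxes. A stdlib sketch of that core step, using greedy confidence-ordered matching with invented box coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def matches_at_05(predictions, ground_truth):
    """Greedy matching: a prediction is a true positive if it overlaps
    a still-unmatched ground-truth box with IoU >= 0.5."""
    unmatched = list(ground_truth)
    tp = 0
    for pred in predictions:  # assumed sorted by confidence, highest first
        best = max(unmatched, key=lambda g: iou(pred, g), default=None)
        if best is not None and iou(pred, best) >= 0.5:
            tp += 1
            unmatched.remove(best)
    return tp

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (40, 40, 50, 50)]
print(matches_at_05(preds, gts))  # → 1
```

Full mAP additionally sweeps confidence thresholds and averages precision over recall levels; this sketch shows only the IoU-at-0.5 matching that defines a true positive.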
Open Access Article
Effective Heuristics for Solving the Multi-Item Uncapacitated Lot-Sizing Problem Under Near-Minimal Storage Capacities
by
Warut Boonphakdee, Duangrat Hirunyasiri and Peerayuth Charnsethikul
Computation 2025, 13(6), 148; https://doi.org/10.3390/computation13060148 - 13 Jun 2025
Abstract
In inventory management, storage capacity constraints complicate multi-item lot-sizing decisions. As the number of items increases, deciding how much of each item to order without exceeding capacity becomes more difficult. Dynamic programming works efficiently for a single item, but when capacity constraints are nearly minimal across multiple items, novel heuristics are required. However, previous heuristics have mainly focused on inventory bound constraints. Therefore, this paper introduces push and pull heuristics to solve the multi-item uncapacitated lot-sizing problem under near-minimal capacities. First, a dynamic programming approach based on a network flow model was used to generate the initial replenishment plan for the single-item lot-sizing problem. Next, under storage capacity constraints, the push operation moved the selected replenishment quantities from the current period to subsequent periods to meet all demand requirements. Finally, the pull operation shifted the selected replenishment quantities from the current period into earlier periods, ensuring that all demand requirements were satisfied. The results of the random experiment showed that the proposed heuristic generated solutions whose performance compared well with the optimal solution. This heuristic effectively solves all randomly generated instances representing worst-case conditions, ensuring robust operation under near-minimal storage. For large-scale problems under near-minimal storage capacity constraints, the proposed heuristic achieved only small optimality gaps while requiring less running time. However, small- and medium-scale problems can be solved optimally by a Mixed-Integer Programming (MIP) solver with minimal running time.
Full article
(This article belongs to the Special Issue Modern Applications for Computational Methods in Applied Economics and Business Engineering)
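The dynamic-programming step that generates the initial single-item replenishment plan can be sketched with the classic Wagner–Whitin recursion (the push and pull capacity-repair operations are not shown; demand and cost figures are invented).

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    """Single-item uncapacitated lot sizing by dynamic programming:
    f[t] = min over j < t of f[j] + setup + holding cost of serving
    periods j+1..t from one order placed in period j+1."""
    T = len(demand)
    f = [0.0] + [float("inf")] * T
    decision = [0] * (T + 1)
    for t in range(1, T + 1):
        for j in range(t):
            hold = sum(hold_cost * (k - j - 1) * demand[k - 1]
                       for k in range(j + 1, t + 1))
            cost = f[j] + setup_cost + hold
            if cost < f[t]:
                f[t], decision[t] = cost, j
    # Recover the replenishment quantity placed in each period.
    plan = [0] * T
    t = T
    while t > 0:
        j = decision[t]
        plan[j] = sum(demand[j:t])
        t = j
    return f[T], plan

cost, plan = wagner_whitin([20, 50, 10, 50, 50],
                           setup_cost=100, hold_cost=1)
print(cost, plan)  # → 320.0 [80, 0, 0, 100, 0]
```

In the heuristic above, such a capacity-ignoring plan is the starting point; the push operation then defers quantities forward and the pull operation advances them backward until the near-minimal storage limits are respected.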
Open Access Article
Advanced Deep Learning Framework for Predicting the Remaining Useful Life of Nissan Leaf Generation 01 Lithium-Ion Battery Modules
by
Shamaltha M. Wickramaarachchi, S. A. Dewmini Suraweera, D. M. Pasindu Akalanka, V. Logeeshan and Chathura Wanigasekara
Computation 2025, 13(6), 147; https://doi.org/10.3390/computation13060147 - 12 Jun 2025
Abstract
The accurate estimation of the remaining useful life (RUL) of lithium-ion batteries (LIBs) is essential for ensuring safety and enabling effective battery health management systems. To address this challenge, data-driven solutions leveraging advanced machine learning and deep learning techniques have been developed. This study introduces a novel framework, Deep Neural Networks with Memory Features (DNNwMF), for predicting the RUL of LIBs. The integration of memory features significantly enhances the model’s accuracy, and an autoencoder is incorporated to optimize the feature representation. The focus of this work is on feature engineering and uncovering hidden patterns in the data. The proposed model was trained and tested using lithium-ion battery cycle life datasets from NASA’s Prognostic Centre of Excellence and CALCE Lab. The optimized framework achieved an impressive RMSE of 6.61%, and with suitable modifications, the DNN model demonstrated a prediction accuracy of 92.11% for test data, which was used to estimate the RUL of Nissan Leaf Gen 01 battery modules.
Full article
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
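The "memory features" idea, feeding the model a window of past capacities rather than a single reading, can be sketched with stdlib code. The capacity series, window length, and 80% end-of-life threshold below are invented illustrations, not the NASA/CALCE data or the paper's feature set.

```python
def memory_features(capacity_series, window=3, eol_frac=0.8):
    """Build lagged ("memory") feature vectors from a capacity-fade
    series: each sample holds the last `window` capacities, and the
    target is the number of cycles remaining until end of life
    (the first cycle where capacity drops below eol_frac of nominal)."""
    eol = next(i for i, c in enumerate(capacity_series) if c < eol_frac)
    X, y = [], []
    for t in range(window, eol):
        X.append(capacity_series[t - window:t])
        y.append(eol - t)  # remaining useful life in cycles
    return X, y

# Synthetic linear capacity fade from 1.0 over about 100 cycles.
caps = [1.0 - 0.003 * t for t in range(101)]
X, y = memory_features(caps)
print(len(X), y[0], y[-1])
```

Each (X, y) pair is what a DNN like the one above would be trained on; the autoencoder stage in the paper would further compress such windows into a learned feature representation.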
Open Access Article
Integration of Distributed Energy Resources in Unbalanced Networks Using a Generalized Normal Distribution Optimizer
by
Laura Sofía Avellaneda-Gómez, Brandon Cortés-Caicedo, Oscar Danilo Montoya and Jesús M. López-Lezama
Computation 2025, 13(6), 146; https://doi.org/10.3390/computation13060146 - 12 Jun 2025
Abstract
This article proposes an optimization methodology to address the joint placement as well as the capacity design of PV units and D-STATCOMs within unbalanced three-phase distribution systems. The proposed model adopts a mixed-integer nonlinear programming structure using complex-valued variables, with the objective of minimizing the total annual cost—including investment, maintenance, and energy purchases. A leader–follower optimization framework is adopted, where the leader stage utilizes the Generalized Normal Distribution Optimization (GNDO) algorithm to generate candidate solutions, while the follower stage conducts power flow calculations through successive approximation to assess the objective value. The proposed approach is tested on 25- and 37-node feeders and benchmarked against three widely used metaheuristic algorithms: the Chu and Beasley Genetic Algorithm, Particle Swarm Optimization, and Vortex Search Algorithm. The results indicate that the proposed strategy consistently achieves highly cost-efficient outcomes. For the 25-node system, the cost is reduced from USD 2,715,619.98 to USD 2,221,831.66 (18.18%), and for the 37-node system, from USD 2,927,715.61 to USD 2,385,465.29 (18.52%). GNDO also surpassed the alternative algorithms in terms of solution precision, robustness, and statistical dispersion across 100 runs. All numerical simulations were executed using MATLAB R2024a. These findings confirm the scalability and reliability of the proposed method, positioning it as an effective tool for planning distributed energy integration in practical unbalanced networks.
Full article
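The leader–follower split described above can be sketched in miniature: a leader stage proposes candidates from a normal distribution shaped by the population (GNDO-flavored; the real GNDO update rules differ), and a follower stage evaluates each candidate. The quadratic follower cost below is an invented stand-in for the successive-approximation power flow.

```python
import random

random.seed(7)

def follower_cost(x):
    # Stand-in for the follower stage's power-flow evaluation: a smooth
    # cost with a single optimum at (3, -2). Invented for illustration.
    return (x[0] - 3) ** 2 + (x[1] + 2) ** 2

def leader_search(dim=2, pop_size=40, gens=80):
    """Leader stage, GNDO-flavored: candidates are drawn from a normal
    distribution centered between the population mean and the best
    solution, with a spread tied to each individual's dispersion."""
    pop = [[random.uniform(-10, 10) for _ in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=follower_cost)
    for _ in range(gens):
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            cand = [0.5 * (mean[d] + best[d])
                    + random.gauss(0, abs(x[d] - mean[d]) + 0.05)
                    for d in range(dim)]
            if follower_cost(cand) < follower_cost(x):  # greedy acceptance
                pop[i] = cand
        best = min(pop + [best], key=follower_cost)
    return best

b = leader_search()
print([round(v, 2) for v in b])  # converges toward (3, -2)
```

The key structural point carried over from the paper is the division of labor: the metaheuristic never inspects the network equations directly, it only queries the follower's scalar objective.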

Open Access Article
Implementing Astronomical Potential and Wavelet Analysis to Improve Regional Tide Modeling
by
Jihene Abdennadher and Moncef Boukthir
Computation 2025, 13(6), 145; https://doi.org/10.3390/computation13060145 - 11 Jun 2025
Abstract
This study aimed to accurately simulate the main tidal characteristics in a regional domain featuring four open boundaries, with a primary focus on baroclinic tides. Such understanding is crucial for improving the representation of oceanic energy transfer and mixing processes in numerical models. To this end, the astronomical potential, load tide effects, and a wavelet-based analysis method were implemented in the three-dimensional ROMS model. The inclusion of the astronomical potential and load tide aimed to enhance the accuracy of tidal simulations, while the wavelet method was employed to analyze the generation and propagation of internal tides from their source regions and to characterize their main features. Twin simulations with and without astronomical potential forcing were conducted to evaluate its influence on tidal elevations and currents. Model performance was assessed through comparison with tide gauge observations. Incorporating the potential forcing improves simulation accuracy, as the model fields successfully reproduced the main features of the barotropic tide and showed good agreement with observed amplitude and phase data. A complex principal component analysis was then applied to a matrix of normalized wavelet coefficients derived from the enhanced model outputs, enabling the characterization of horizontal modal propagation and vertical mode decomposition of both linear and nonlinear internal tides.
Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
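The amplitude-and-phase comparison against tide gauges mentioned above rests on harmonic analysis of a sea-level record. A stdlib sketch for a single constituent (plain least squares rather than the paper's wavelet machinery; the synthetic M2 series is invented):

```python
import math

def fit_constituent(times, elevations, omega):
    """Least-squares fit of eta(t) ~ A cos(omega t) + B sin(omega t);
    returns the amplitude and phase of one tidal constituent."""
    scc = sum(math.cos(omega * t) ** 2 for t in times)
    sss = sum(math.sin(omega * t) ** 2 for t in times)
    scs = sum(math.sin(omega * t) * math.cos(omega * t) for t in times)
    syc = sum(e * math.cos(omega * t) for t, e in zip(times, elevations))
    sys_ = sum(e * math.sin(omega * t) for t, e in zip(times, elevations))
    det = scc * sss - scs * scs
    a = (syc * sss - sys_ * scs) / det
    b = (sys_ * scc - syc * scs) / det
    return math.hypot(a, b), math.atan2(b, a)

# Synthetic M2 tide: 0.8 m amplitude, 12.42 h period, 1.0 rad phase lag.
omega_m2 = 2 * math.pi / 12.42
times = [0.5 * k for k in range(500)]  # half-hourly samples, in hours
eta = [0.8 * math.cos(omega_m2 * t - 1.0) for t in times]
amp, phase = fit_constituent(times, eta, omega_m2)
print(round(amp, 3), round(phase, 3))  # recovers 0.8 and 1.0
```

Unlike this stationary fit, the wavelet analysis used in the paper resolves how such amplitudes vary in time and space, which is what makes it suitable for tracking internal tide generation and propagation.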
Open Access Article
Scalable Clustering of Complex ECG Health Data: Big Data Clustering Analysis with UMAP and HDBSCAN
by
Vladislav Kaverinskiy, Illya Chaikovsky, Anton Mnevets, Tatiana Ryzhenko, Mykhailo Bocharov and Kyrylo Malakhov
Computation 2025, 13(6), 144; https://doi.org/10.3390/computation13060144 - 10 Jun 2025
Abstract
This study explores the potential of unsupervised machine learning algorithms to identify latent cardiac risk profiles by analyzing ECG-derived parameters from two general groups: clinically healthy individuals (Norm dataset, n = 14,863) and patients hospitalized with heart failure (patients’ dataset, n = 8220). Each dataset includes 153 ECG and heart rate variability (HRV) features, including both conventional and novel diagnostic parameters obtained using a Universal Scoring System. The study aims to apply unsupervised clustering algorithms to ECG data to detect latent risk profiles related to heart failure, based on distinctive ECG features. The focus is on identifying patterns that correlate with cardiac health risks, potentially aiding in early detection and personalized care. We applied a combination of Uniform Manifold Approximation and Projection (UMAP) for dimensionality reduction and Hierarchical Density-Based Spatial Clustering (HDBSCAN) for unsupervised clustering. Models trained on one dataset were applied to the other to explore structural differences and detect latent predispositions to cardiac disorders. Both Euclidean and Manhattan distance metrics were evaluated. Features such as the QRS angle in the frontal plane, Detrended Fluctuation Analysis (DFA), High-Frequency power (HF), and others were analyzed for their ability to distinguish different patient clusters. In the Norm dataset, Euclidean distance clustering identified two main clusters, with Cluster 0 indicating a lower risk of heart failure. Key discriminative features included the “ALPHA QRS ANGLE IN THE FRONTAL PLANE” and DFA. In the patients’ dataset, three clusters emerged, with Cluster 1 identified as potentially high-risk. Manhattan distance clustering provided additional insights, highlighting features like “ST DISLOCATION” and “T AMP NORMALIZED” as significant for distinguishing between clusters. 
The analysis revealed distinct clusters that correspond to varying levels of heart failure risk. In the Norm dataset, two main clusters were identified, with one associated with a lower risk profile. In the patients’ dataset, a three-cluster structure emerged, with one subgroup displaying markedly elevated risk indicators such as high-frequency power (HF) and altered QRS angle values. Cross-dataset clustering confirmed consistent feature shifts between groups. These findings demonstrate the feasibility of ECG-based unsupervised clustering for early risk stratification. The results offer a non-invasive tool for personalized cardiac monitoring and merit further clinical validation. These findings emphasize the potential for clustering techniques to contribute to early heart failure detection and personalized monitoring. Future research should aim to validate these results in other populations and integrate these methods into clinical decision-making frameworks.
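The pipeline described above (dimensionality reduction followed by density-based clustering) can be sketched as follows. This is a minimal stand-in, assuming synthetic data in place of the 153 ECG/HRV features, PCA in place of UMAP, and scikit-learn's DBSCAN in place of HDBSCAN; it illustrates the workflow, not the study's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

# Synthetic stand-in for the ECG/HRV feature matrix (hypothetical data).
X, _ = make_blobs(n_samples=300, n_features=10, centers=2,
                  cluster_std=0.5, random_state=0)

# Standardize features, reduce dimensionality, then density-cluster.
X_scaled = StandardScaler().fit_transform(X)
embedding = PCA(n_components=2, random_state=0).fit_transform(X_scaled)
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(embedding)

n_clusters = len(set(labels) - {-1})  # -1 marks noise points
print(n_clusters)
```

In the study, the model fitted on one dataset is then applied to the other to compare cluster structure across groups; with UMAP and HDBSCAN the same two-stage pattern applies, with the distance metric (Euclidean or Manhattan) chosen at the reduction step.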
Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
Open Access Article
Precision-Driven Semantic Segmentation of Pipe Gallery Diseases Using PipeU-NetX: A Depthwise Separable Convolution Approach
by
Wenbin Song, Hanqian Wu and Chunlin Pu
Computation 2025, 13(6), 143; https://doi.org/10.3390/computation13060143 - 10 Jun 2025
Abstract
To address the high labor cost, low detection efficiency, and insufficient accuracy of traditional pipe gallery disease detection methods, this paper proposes a deep-learning-based pipe gallery disease segmentation model, PipeU-NetX. By introducing an innovative down-sampling module (MD-U), up-sampling module (SC-U), and feature fusion module (FFM), the model optimizes feature extraction and fusion, reduces the loss of feature information, and achieves accurate segmentation of pipe gallery disease images. Compared with the U-Net, FCN, and Deeplabv3+ models, PipeU-NetX achieved the best PA, MPA, FWIoU, and MIoU, at 99.15%, 92.66%, 98.34%, and 87.63%, respectively. Relative to the benchmark U-Net, the PipeU-NetX model increased MIoU and MPA by 4.64% and 3.92%, respectively, reduced the parameter count by 23.71%, and increased detection speed by 22.1%. The proposed PipeU-NetX model shows a powerful capability for multi-scale feature extraction and adaptive recognition of defect areas, providing an effective solution for intelligent monitoring of the pipe gallery environment and accurate disease segmentation.
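To illustrate why depthwise separable convolutions shrink a model (the abstract reports a 23.71% parameter reduction relative to U-Net), the following sketch compares parameter counts for a standard convolution and its depthwise separable factorization; the layer sizes are illustrative assumptions, not the paper's architecture.

```python
def conv_params(k, c_in, c_out):
    # Standard 2D convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel,
    # followed by a 1 x 1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)                  # 73,728 parameters
separable = depthwise_separable_params(3, 64, 128)  # 576 + 8,192 = 8,768
ratio = separable / standard
print(standard, separable, round(ratio, 3))
```

For a 3x3 layer with 64 input and 128 output channels, the separable form needs roughly 12% of the parameters of the standard convolution, which is the mechanism behind the lighter model.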
Full article
Open Access Article
Stability Analysis and Local Convergence of a New Fourth-Order Optimal Jarratt-Type Iterative Scheme
by
Eulalia Martínez, José A. Reyes, Alicia Cordero and Juan R. Torregrosa
Computation 2025, 13(6), 142; https://doi.org/10.3390/computation13060142 - 9 Jun 2025
Abstract
In this work, using the weight function technique, we introduce a new family of fourth-order iterative methods optimal in the sense of Kung and Traub for scalar equations, generalizing Jarratt’s method. Through Taylor series expansions, we confirm that all members of this family achieve fourth-order convergence when derivatives up to the fourth order are bounded. Additionally, a stability analysis is performed on quadratic polynomials using complex discrete dynamics, enabling differentiation among the methods based on their stability. To demonstrate practical applicability, a numerical example illustrates the effectiveness of the proposed family. Extending our findings to Banach spaces, we conduct local convergence analyses on a specific subfamily containing Jarratt’s method, requiring only boundedness of the first derivative. This significantly broadens the method’s applicability to more general spaces and reduces constraints on higher-order derivatives. Finally, additional examples validate the existence and uniqueness of approximate solutions in Banach spaces, provided the initial estimate lies within the locally determined convergence radius obtained using majorizing functions.
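For reference, the classical fourth-order Jarratt scheme that this family generalizes can be sketched as below; the test function x^2 - 2 and the starting point are assumptions for illustration, not examples from the paper.

```python
def jarratt(f, df, x0, tol=1e-14, max_iter=20):
    # Classical fourth-order Jarratt iteration:
    #   y_n     = x_n - (2/3) f(x_n)/f'(x_n)
    #   x_{n+1} = x_n - (1/2) * (3 f'(y_n) + f'(x_n)) / (3 f'(y_n) - f'(x_n))
    #                 * f(x_n)/f'(x_n)
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        y = x - (2.0 / 3.0) * fx / dfx
        dfy = df(y)
        x = x - 0.5 * (3 * dfy + dfx) / (3 * dfy - dfx) * fx / dfx
    return x

# Example: root of x^2 - 2, converging to sqrt(2) from x0 = 1.5.
root = jarratt(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Note that each step uses one function evaluation and two derivative evaluations, which is what makes the fourth-order scheme optimal in the Kung–Traub sense.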
Full article
Open Access Article
Sliding Window-Based Randomized K-Fold Dynamic ANN for Next-Day Stock Trend Forecasting
by
Jaykumar Ishvarbhai Prajapati and Raja Das
Computation 2025, 13(6), 141; https://doi.org/10.3390/computation13060141 - 8 Jun 2025
Abstract
The integration of machine learning and stock forecasting is attracting increasing interest owing to its growing significance. This paper presents two main areas of study: predicting next-day pattern trends and forecasting opening and closing prices, using a new method that adds a dynamic hidden layer to artificial neural networks and employs a unique random k-fold cross-validation to enhance prediction accuracy and improve training. To validate the model, we consider APPLE, GOOGLE, and AMAZON stock data. The low root mean squared error (1.7208) and mean absolute error (0.9892) in both training and validation phases demonstrate the robust predictive performance of the dynamic ANN model. Furthermore, high R-values indicate a strong correlation between the experimental data and the proposed model estimates.
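A minimal sketch of the sliding-window sample construction with shuffled (randomized) k-fold splitting follows; the window length, fold count, and synthetic price series are assumptions, and the dynamic-hidden-layer ANN itself is omitted.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical daily closing prices; in practice these would be real stock data.
rng = np.random.default_rng(42)
prices = rng.normal(loc=0.0, scale=1.0, size=200).cumsum() + 100.0

window = 10  # look-back length (an assumption, not the paper's setting)
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]  # next-day target for each window

# Randomized (shuffled) k-fold split over the windowed samples.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_sizes = [len(test_idx) for _, test_idx in kf.split(X)]
print(X.shape, y.shape, fold_sizes)
```

Each row of `X` is one sliding window and `y` holds the price immediately after it; the shuffled folds then give each sample a chance to appear in validation, which is the cross-validation idea the abstract describes.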
Full article
(This article belongs to the Special Issue Modern Applications for Computational Methods in Applied Economics and Business Engineering)
Open Access Article
Early Detection of Inter-Turn Short Circuits in Induction Motors Using the Derivative of Stator Current and a Lightweight 1D-ResNet
by
Carlos Javier Morales-Perez, David Camarena-Martinez, Juan Pablo Amezquita-Sanchez, Jose de Jesus Rangel-Magdaleno, Edwards Ernesto Sánchez Ramírez and Martin Valtierra-Rodriguez
Computation 2025, 13(6), 140; https://doi.org/10.3390/computation13060140 - 4 Jun 2025
Abstract
This work presents a lightweight and practical methodology for detecting inter-turn short-circuit faults in squirrel-cage induction motors under different mechanical load conditions. The proposed approach utilizes a one-dimensional convolutional neural network (1D-CNN) enhanced with residual blocks and trained on differentiated stator current signals obtained under varying mechanical load conditions. This preprocessing step enhances fault-related features, enabling improved learning while maintaining the simplicity of a lightweight CNN. The model achieved classification accuracies above 99.16% across all folds in five-fold cross-validation and demonstrated the ability to detect faults involving as few as three short-circuited turns. Comparative experiments with the Multi-Scale 1D-ResNet demonstrate that the proposed method achieves similar or superior performance while significantly reducing training time. These results highlight the model’s suitability for real-time fault detection in embedded and resource-constrained industrial environments.
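The derivative preprocessing step can be sketched as follows: differencing the stator current scales each spectral component roughly in proportion to its frequency, which emphasizes the high-frequency content associated with faults. The signal, sampling rate, and fault frequency here are illustrative assumptions, not the paper's data.

```python
import numpy as np

fs = 5000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 0.2, 1 / fs)

# Synthetic stator current: 60 Hz fundamental plus a small high-frequency
# component standing in for a fault signature (purely illustrative).
current = np.sin(2 * np.pi * 60 * t) + 0.02 * np.sin(2 * np.pi * 900 * t)

# First difference approximates the derivative; a component at frequency f
# is scaled by ~2*pi*f/fs, so the 900 Hz term gains 15x relative to 60 Hz.
d_current = np.diff(current)

print(current.shape, d_current.shape)
```

The differentiated signal (one sample shorter than the input) is what would then be windowed and fed to the 1D-CNN.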
Full article
(This article belongs to the Special Issue Diagnosing Faults with Machine Learning)
Open Access Article
Risk Assessment of Mud Cake on Shield Cutter Head Based on Modified Analytic Hierarchy Process
by
Wen Cao, Shoubao Xue, Yujia Xu, Huanyu Lin, Hui Li, Shengjun Deng, Lin Li and Yun Bai
Computation 2025, 13(6), 139; https://doi.org/10.3390/computation13060139 - 4 Jun 2025
Abstract
When shield machines tunnel through soft soil, excavation may be impeded by mud cake accumulating on the cutter head. Based on an analysis of its formation mechanism, geological conditions and shield construction are identified as the main factors behind cutter head mud cake. In addition to traditional metrics, the imperforation area at the cutter head center is incorporated into the analysis of construction factors. The Analytic Hierarchy Process (AHP) is used to establish a risk assessment model for shield cutter head mud cake, determining the weight of each sub-factor and enabling a preliminary risk assessment of mud cake occurrence. The model is applied to the Mawan cross-sea channel construction project (Moon Bay Avenue along the Yangtze River) as a case study: each factor is scored and weighted according to established scoring criteria and evaluation formulas, yielding the mud cake risk for the Mawan tunnel. Moreover, field observations validate the proposed model, with the derived risk index showing strong agreement with the actual data.
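A minimal sketch of the AHP weighting step: priority weights are taken from the principal eigenvector of a pairwise comparison matrix, and Saaty's consistency ratio checks that the judgments are acceptably coherent. The matrix values below are illustrative assumptions, not the study's scores.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix for three mud-cake factors
# (reciprocal matrix: A[j][i] = 1 / A[i][j]).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency check: CR = CI / RI, conventionally acceptable when CR < 0.1.
n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)
RI = 0.58  # random index for n = 3 (Saaty's table)
CR = CI / RI
print(np.round(weights, 3), round(CR, 3))
```

In the study's modified AHP the same weighting machinery would be applied over the geological and construction sub-factors, with the resulting weighted scores combined into the overall mud cake risk index.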
Full article
(This article belongs to the Section Computational Engineering)
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Applied Sciences, Computation, Entropy, J. Imaging, Optics
Color Image Processing: Models and Methods (CIP: MM)
Topic Editors: Giuliana Ramella, Isabella Torcicollo
Deadline: 30 July 2025
Topic in
Algorithms, Computation, Mathematics, Molecules, Symmetry, Nanomaterials, Materials
Advances in Computational Materials Sciences
Topic Editors: Cuiying Jian, Aleksander Czekanski
Deadline: 30 September 2025
Topic in
AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran
Deadline: 28 February 2026
Topic in
Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 30 May 2026

Conferences
Special Issues
Special Issue in
Computation
Computational Advances in Aerospace Engineering: Modeling, Simulation and Aerospace Systems Testing
Guest Editor: Andry Sedelnikov
Deadline: 1 July 2025
Special Issue in
Computation
Application of Biomechanical Modeling and Simulation
Guest Editor: Luis Pastor Sánchez-Fernández
Deadline: 31 July 2025
Special Issue in
Computation
Feature Papers in Computational Chemistry
Guest Editors: Alexander Novikov, Felipe Fantuzzi
Deadline: 30 September 2025
Special Issue in
Computation
Applications of Intelligent Computing and Modeling in Construction Engineering
Guest Editors: Jerzy Rosłon, Michał Podolski, Bartłomiej Sroka
Deadline: 30 September 2025