Journal Description
Computation
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q2 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.6 days after submission; the time from acceptance to publication is 4.2 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.9 (2023); 5-Year Impact Factor: 2.0 (2023)
Latest Articles
A Simplified Fish School Search Algorithm for Continuous Single-Objective Optimization
Computation 2025, 13(5), 102; https://doi.org/10.3390/computation13050102 - 25 Apr 2025
Abstract
The Fish School Search (FSS) algorithm is a metaheuristic known for its distinctive exploration and exploitation operators and cumulative success representation approach. Despite its success across various problem domains, the FSS presents issues due to its high number of parameters, making its performance susceptible to improper parameterization. Additionally, the interplay between its operators requires a sequential execution in a specific order, requiring two fitness evaluations per iteration for each individual. This operator’s intricacy and the number of fitness evaluations pose the issue of costly fitness functions and inhibit parallelization. To address these challenges, this paper proposes a Simplified Fish School Search (SFSS) algorithm that preserves the core features of the original FSS while redesigning the fish movement operators and introducing a new turbulence mechanism to enhance population diversity and robustness against stagnation. The SFSS also reduces the number of fitness evaluations per iteration and minimizes the algorithm’s parameter set. Computational experiments were conducted using a benchmark suite from the CEC 2017 competition to compare the SFSS with the traditional FSS and five other well-known metaheuristics. The SFSS outperformed the FSS in 84% of the problems and achieved the best results among all algorithms in 10 of the 26 problems.
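To make the redesign concrete, here is a minimal, illustrative fish-school-style search loop in Python. It assumes a single fitness evaluation per individual per iteration, a weight-based barycenter drift, and a simple turbulence kick against stagnation; it is a sketch of the general scheme, not the authors' exact SFSS operators.

```python
import numpy as np

def sphere(x):
    """Example objective (minimization)."""
    return float(np.sum(x**2))

def simplified_fish_school(f, dim=10, pop=30, iters=200,
                           step=0.1, turbulence=0.05, seed=0):
    """Illustrative fish-school-style search (not the published SFSS)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(pop, dim))       # positions of the fish
    w = np.ones(pop)                              # "weights" accumulating success
    fit = np.array([f(xi) for xi in x])
    best_i = fit.argmin()
    best, best_f = x[best_i].copy(), fit[best_i]

    for _ in range(iters):
        bary = (w[:, None] * x).sum(axis=0) / w.sum()            # weighted barycenter
        kick = turbulence * (rng.random(pop) < 0.1)[:, None] * rng.normal(size=x.shape)
        cand = (x
                + step * rng.uniform(-1, 1, size=x.shape)        # individual component
                + 0.1 * step * (bary - x)                        # collective drift
                + kick)                                          # turbulence vs. stagnation
        cand_fit = np.array([f(ci) for ci in cand])              # one evaluation per fish

        improved = cand_fit < fit
        w = np.where(improved, w + (fit - cand_fit), np.maximum(0.99 * w, 1e-3))
        x[improved], fit[improved] = cand[improved], cand_fit[improved]

        if fit.min() < best_f:
            best, best_f = x[fit.argmin()].copy(), fit.min()
    return best, best_f

best_x, best_f = simplified_fish_school(sphere)
print(best_f)
```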
Full article
Open Access Article
Computational Analysis of Tandem Micro-Vortex Generators for Supersonic Boundary Layer Flow Control
by Caixia Chen, Yong Yang and Yonghua Yan
Computation 2025, 13(4), 101; https://doi.org/10.3390/computation13040101 - 19 Apr 2025
Abstract
Micro-vortex generators (MVGs) are widely utilized as passive devices to control flow separation in supersonic boundary layers by generating ring-like vortices that mitigate shock-induced effects. This study employs large eddy simulation (LES) to investigate the flow structures in a supersonic boundary layer (Mach 2.5, Re = 5760) controlled by two MVGs installed in tandem, with spacings varying from 11.75 h to 18.75 h (h = MVG height), alongside a single-MVG reference case. A fifth-order WENO scheme and third-order TVD Runge–Kutta method were used to solve the unfiltered Navier–Stokes equations, with the Liutex method applied to visualize vortex structures. Results reveal that tandem MVGs produce complex vortex interactions, with spanwise and streamwise vortices merging extensively, leading to a significant reduction in vortex intensity due to mutual cancellation. A momentum deficit forms behind the second MVG, weakening that from the first, while the boundary layer energy thickness doubles compared to the single-MVG case, indicating increased energy loss. Streamwise vorticity distributions and instantaneous streamlines highlight intensified interactions with closer spacings, yet this complexity diminishes overall flow control effectiveness. Contrary to expectations, the tandem configuration does not enhance boundary layer control but instead weakens it, as evidenced by reduced vortex strength and amplified energy dissipation. These findings underscore a critical trade-off in tandem MVG deployment, suggesting that while vortex interactions enrich flow complexity, they may compromise the intended control benefits in supersonic flows, with implications for optimizing MVG arrangements in practical applications.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
On the Generalized Inverse Gaussian Volatility in the Continuous Ho–Lee Model
by Roman V. Ivanov
Computation 2025, 13(4), 100; https://doi.org/10.3390/computation13040100 - 19 Apr 2025
Abstract
This paper presents a new model of the term structure of interest rates based on the continuous Ho–Lee model. In this model, we suggest that the drift and volatility coefficients depend additionally on a generalized inverse Gaussian (GIG) distribution. Analytical expressions for the bond price and its moments are found in the new GIG continuous Ho–Lee model. We also compute in this model the prices of European call and put options written on a bond. The obtained formulas are expressed in terms of the Humbert confluent hypergeometric function of two variables. A numerical experiment shows that the third and fourth moments of the bond prices differ substantially between the continuous Ho–Lee and GIG continuous Ho–Lee models.
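For reference, the baseline continuous Ho–Lee dynamics that the paper builds on can be written as follows; in the GIG variant described above, the drift and volatility coefficients additionally depend on a generalized inverse Gaussian factor, with the exact dependence specified in the paper.

```latex
% Baseline continuous Ho--Lee short-rate dynamics (affine term structure),
% where W is a Brownian motion and A(t,T) is determined by \theta and \sigma:
\[
\begin{aligned}
  dr(t) &= \theta(t)\,dt + \sigma\,dW(t), \\
  P(t,T) &= \exp\!\bigl(A(t,T) - (T-t)\,r(t)\bigr).
\end{aligned}
\]
```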
Full article
(This article belongs to the Section Computational Social Science)
Open Access Article
Enhanced Efficient 3D Poisson Solver Supporting Dirichlet, Neumann, and Periodic Boundary Conditions
by Chieh-Hsun Wu
Computation 2025, 13(4), 99; https://doi.org/10.3390/computation13040099 - 18 Apr 2025
Abstract
This paper generalizes the efficient matrix decomposition method for solving the finite-difference (FD) discretized three-dimensional (3D) Poisson’s equation using symmetric 27-point, 4th-order accurate stencils to accommodate more boundary conditions (BCs), i.e., Dirichlet, Neumann, and Periodic BCs. It employs equivalent Dirichlet nodes to streamline source term computation due to BCs. A generalized eigenvalue formulation is presented to accommodate the flexible 4th-order stencil weights. The proposed method significantly enhances computational speed by reducing the 3D problem to a set of independent 1D problems. As compared to the typical matrix inversion technique, it results in a speed-up ratio proportional to a power of N, where N is the number of nodes along one side of the cubic domain. Accuracy is validated using Gaussian and sinusoidal source fields, showing 4th-order convergence for Dirichlet and Periodic boundaries, and 2nd-order convergence for Neumann boundaries due to extrapolation limitations, though with lower errors than traditional 2nd-order schemes. The method is also applied to vortex-in-cell flow simulations, demonstrating its capability to handle outer boundaries efficiently and its compatibility with immersed boundary techniques for internal solid obstacles.
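The source of the speed-up, decomposing the discretized 3D problem into many independent small problems, can be illustrated with a plain spectral Poisson solve on a periodic cube. This is a sketch of the general principle only (a standard FFT-based periodic solve, not the paper's 4th-order 27-point scheme or its Dirichlet/Neumann treatment), and the function name is illustrative.

```python
import numpy as np

def poisson_periodic_fft(f, L=1.0):
    """Solve -lap(u) = f on a periodic cube [0, L)^3 with zero-mean data.

    The FFT diagonalizes the Laplacian, so the coupled 3D system collapses
    into independent scalar equations per Fourier mode -- the same
    "decompose into independent problems" principle exploited by
    matrix-decomposition Poisson solvers.
    """
    n = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    f_hat = np.fft.fftn(f)
    u_hat = np.zeros_like(f_hat)
    nonzero = k2 > 0
    u_hat[nonzero] = f_hat[nonzero] / k2[nonzero]          # -lap -> multiply by k^2
    return np.real(np.fft.ifftn(u_hat))

# Quick check against u = sin(2*pi*x) * sin(2*pi*y) * sin(2*pi*z)
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y) * np.sin(2 * np.pi * Z)
f = 3 * (2 * np.pi) ** 2 * u_exact                          # -lap(u_exact)
print(np.max(np.abs(poisson_periodic_fft(f) - u_exact)))    # near machine precision
```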
Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
Open Access Article
Blockchain-Enhanced Security for 5G Edge Computing in IoT
by Manuel J. C. S. Reis
Computation 2025, 13(4), 98; https://doi.org/10.3390/computation13040098 - 18 Apr 2025
Abstract
The rapid expansion of 5G networks and edge computing has amplified security challenges in Internet of Things (IoT) environments, including unauthorized access, data tampering, and DDoS attacks. This paper introduces EdgeChainGuard, a hybrid blockchain-based authentication framework designed to secure 5G-enabled IoT systems through decentralized identity management, smart contract-based access control, and AI-driven anomaly detection. By combining permissioned and permissionless blockchain layers with Layer-2 scaling solutions and adaptive consensus mechanisms, the framework enhances both security and scalability while maintaining computational efficiency. Using synthetic datasets that simulate real-world adversarial behaviour, our evaluation shows an average authentication latency of 172.50 s and a 50% reduction in gas fees compared to traditional Ethereum-based implementations. The results demonstrate that EdgeChainGuard effectively enforces tamper-resistant authentication, reduces unauthorized access, and adapts to dynamic network conditions. Future research will focus on integrating zero-knowledge proofs (ZKPs) for privacy preservation, federated learning for decentralized AI retraining, and lightweight anomaly detection models to enable secure, low-latency authentication in resource-constrained IoT deployments.
Full article
Open Access Communication
Pareto Efficiency in Euclidean Spaces and Its Applications in Economics
by Christos Kountzakis and Vasileia Tsachouridou-Papadatou
Computation 2025, 13(4), 97; https://doi.org/10.3390/computation13040097 - 14 Apr 2025
Abstract
The aim of the first part of this paper is to show whether a set of Proper Efficient Points and a set of Pareto Efficient Points coincide in Euclidean spaces. In the second part of the paper, we show that supporting prices, which are actually strictly positive, do exist for a large class of exchange economies. A consequence of this result is a generalized form of the Second Welfare theorem. The properties of the cones’ bases are significant for this purpose.
Full article
Open Access Article
Deep Learning-Based Short Text Summarization: An Integrated BERT and Transformer Encoder–Decoder Approach
by Fahd A. Ghanem, M. C. Padma, Hudhaifa M. Abdulwahab and Ramez Alkhatib
Computation 2025, 13(4), 96; https://doi.org/10.3390/computation13040096 - 12 Apr 2025
Abstract
The field of text summarization has evolved from basic extractive methods that identify key sentences to sophisticated abstractive techniques that generate contextually meaningful summaries. In today’s digital landscape, where an immense volume of textual data is produced every day, the need for concise and coherent summaries is more crucial than ever. However, summarizing short texts, particularly from platforms like Twitter, presents unique challenges due to character constraints, informal language, and noise from elements such as hashtags, mentions, and URLs. To overcome these challenges, this paper introduces a deep learning framework for automated short text summarization on Twitter. The proposed approach combines bidirectional encoder representations from transformers (BERT) with a transformer-based encoder–decoder architecture (TEDA), incorporating an attention mechanism to improve contextual understanding. Additionally, long short-term memory (LSTM) networks are integrated within BERT to effectively capture long-range dependencies in tweets and their summaries. This hybrid model ensures that generated summaries remain informative, concise, and contextually relevant while minimizing redundancy. The performance of the proposed framework was assessed using three benchmark Twitter datasets—Hagupit, SHShoot, and Hyderabad Blast—with ROUGE scores serving as the evaluation metric. Experimental results demonstrate that the model surpasses existing approaches in accurately capturing key information from tweets. These findings underscore the framework’s effectiveness in automated short text summarization, offering a robust solution for efficiently processing and summarizing large-scale social media content.
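As a rough illustration of the BERT-based encoder–decoder pattern (not the authors' exact TEDA-plus-LSTM architecture), a warm-started BERT2BERT model can be assembled with the Hugging Face transformers library as below; the checkpoint names and generation settings are illustrative assumptions.

```python
# Illustrative BERT encoder-decoder for abstractive summarization.
# Assumes the Hugging Face `transformers` library; not the paper's exact model.
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"   # warm-start encoder and decoder from BERT
)

# Generation configuration required for a BERT-based decoder.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

tweet = "Heavy flooding reported downtown after the storm, rescue teams deployed #Hagupit"
inputs = tokenizer(tweet, return_tensors="pt", truncation=True, max_length=128)

# Without task-specific fine-tuning the output is not meaningful; the call only
# shows how a summary would be generated after training on tweet/summary pairs.
summary_ids = model.generate(inputs.input_ids, max_length=32, num_beams=4,
                             early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```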
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Non-Iterative Recovery Information Procedure with Database Inspired in Hopfield Neural Networks
by Cesar U. Solis, Jorge Morales and Carlos M. Montelongo
Computation 2025, 13(4), 95; https://doi.org/10.3390/computation13040095 - 10 Apr 2025
Abstract
This work establishes a simple algorithm to recover an information vector from a predefined database that is available at all times. The information analyzed may be incomplete, damaged, or corrupted. The algorithm is inspired by Hopfield Neural Networks (HNN), which allow the recursive reconstruction of an information vector through an energy-minimizing process, but this paper presents a procedure that produces results in a single iteration. Images were chosen as the information vectors for the recovery application. In addition, a filter is added to the algorithm to focus on the most important information when reconstructing data, allowing it to work with damaged or incomplete vectors without losing its non-iterative character. A brief theoretical introduction and a numerical validation of the recovery procedure are presented using an example database containing 40 images.
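As a hedged illustration of one-shot recall from a stored database, the sketch below recovers a corrupted vector in a single linear step via a least-squares projection onto the stored vectors, with an optional mask standing in for the filter. It is an assumption-laden stand-in that conveys the flavor of non-iterative recovery, not the authors' HNN-inspired procedure.

```python
import numpy as np

def build_memory(database):
    """Stack database vectors as columns and precompute the pseudo-inverse."""
    D = np.asarray(database, dtype=float).T        # shape: (vector_dim, n_items)
    return D, np.linalg.pinv(D)

def recover(D, D_pinv, probe, mask=None):
    """One-shot recovery of a damaged or incomplete probe vector.

    If `mask` marks the trusted entries, the projection uses those entries
    only (a crude stand-in for the filtering step described in the abstract).
    """
    if mask is not None:
        coeffs = np.linalg.lstsq(D[mask], probe[mask], rcond=None)[0]
    else:
        coeffs = D_pinv @ probe
    best = np.argmax(coeffs)          # snap to the single closest stored item
    return D[:, best], best

# Toy example: 40 random "images" of 64 pixels; recover item 7 from a damaged copy.
rng = np.random.default_rng(1)
database = rng.random((40, 64))
D, D_pinv = build_memory(database)

probe = database[7].copy()
mask = np.ones(64, dtype=bool)
mask[:20] = False                     # first 20 pixels missing
probe[~mask] = 0.0                    # damaged entries zeroed out

recovered, idx = recover(D, D_pinv, probe, mask)
print(idx)                            # expected: 7
```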
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Fault Diagnosis in Analog Circuits Using a Multi-Input Convolutional Neural Network with Feature Attention
by Hui Yuan, Yaoke Shi, Long Li, Guobi Ling, Jingxiao Zeng and Zhiwen Wang
Computation 2025, 13(4), 94; https://doi.org/10.3390/computation13040094 - 9 Apr 2025
Abstract
Accurate fault diagnosis in analog circuits faces significant challenges owing to the inherent complexity of fault data patterns and the limited feature representation capabilities of conventional methodologies. Addressing the limitations of current convolutional neural networks (CNN) in handling heterogeneous fault characteristics, this study presents an efficient channel attention-enhanced multi-input CNN framework (ECA-MI-CNN) with dual-domain feature fusion, demonstrating three key innovations. First, the proposed framework addresses multi-domain feature extraction through parallel CNN branches specifically designed for processing time-domain and frequency-domain features, effectively preserving their distinct characteristic information. Second, the incorporation of an efficient channel attention (ECA) module between convolutional layers enables adaptive feature response recalibration, significantly enhancing discriminative feature learning while maintaining computational efficiency. Third, a hierarchical fusion strategy systematically integrates time-frequency domain features through concatenation and fully connected layer transformations prior to classification. Comprehensive simulation experiments conducted on Butterworth low-pass filters and two-stage quad op-amp dual second-order low-pass filters demonstrate the framework’s superior diagnostic capabilities. Real-world validation on Butterworth low-pass filters further reveals substantial performance advantages over existing methods, establishing an effective solution for complex fault pattern recognition in electronic systems.
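The ECA block and the two parallel input branches are easy to express in code; below is a minimal PyTorch sketch of an ECA-style multi-input CNN in which the layer sizes, kernel widths, and number of fault classes are illustrative choices rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: 1D conv over channel-wise descriptors."""
    def __init__(self, channels, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                       # x: (batch, channels, length)
        y = self.pool(x)                        # (batch, channels, 1)
        y = self.conv(y.transpose(1, 2))        # conv across the channel dimension
        y = torch.sigmoid(y.transpose(1, 2))    # per-channel gates in (0, 1)
        return x * y                            # recalibrated features

def branch(in_ch=1, out_ch=16):
    """One CNN branch (used for the time-domain and frequency-domain inputs)."""
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2), nn.ReLU(),
        ECA(out_ch),
        nn.Conv1d(out_ch, out_ch, kernel_size=5, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(8), nn.Flatten(),
    )

class ECAMICNN(nn.Module):
    """Two-branch CNN with ECA attention and concatenation-based fusion."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.time_branch = branch()
        self.freq_branch = branch()
        self.head = nn.Sequential(nn.Linear(2 * 16 * 8, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, x_time, x_freq):
        fused = torch.cat([self.time_branch(x_time), self.freq_branch(x_freq)], dim=1)
        return self.head(fused)

model = ECAMICNN()
logits = model(torch.randn(4, 1, 1024), torch.randn(4, 1, 512))
print(logits.shape)                             # torch.Size([4, 8])
```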
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Deep Multi-Component Neural Network Architecture
by Chafik Boulealam, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz and Hamid Tairi
Computation 2025, 13(4), 93; https://doi.org/10.3390/computation13040093 - 8 Apr 2025
Abstract
Existing neural network architectures often struggle with two critical limitations: (1) information loss during dataset length standardization, where variable-length samples are forced into fixed dimensions, and (2) inefficient feature selection in single-modal systems, which treats all features equally regardless of relevance. To address these issues, this paper introduces the Deep Multi-Components Neural Network (DMCNN), a novel architecture that processes variable-length data by regrouping samples into components of similar lengths, thereby preserving information that traditional methods discard. DMCNN dynamically prioritizes task-relevant features through a component-weighting mechanism, which calculates the importance of each component via loss functions and adjusts weights using a SoftMax function. This approach eliminates the need for dataset standardization while enhancing meaningful features and suppressing irrelevant ones. Additionally, DMCNN seamlessly integrates multimodal data (e.g., text, speech, and signals) as separate components, leveraging complementary information to improve accuracy without requiring dimension alignment. Evaluated on the Multimodal EmotionLines Dataset (MELD) and CIFAR-10, DMCNN achieves state-of-the-art accuracy of 99.22% on MELD and 97.78% on CIFAR-10, outperforming existing methods like MNN and McDFR. The architecture’s efficiency is further demonstrated by its reduced trainable parameters and robust handling of multimodal and variable-length inputs, making it a versatile solution for classification tasks.
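The component-weighting step, which maps per-component losses to SoftMax weights so that more informative components contribute more, can be sketched as follows. This is a minimal illustration under assumptions (in particular, that lower loss should translate into higher weight); the full DMCNN couples it with the component regrouping scheme described above.

```python
import torch

def component_weights(component_losses, temperature=1.0):
    """Map per-component losses to SoftMax weights.

    Components with lower loss receive higher weight here, because the
    SoftMax is taken over the negated losses (an assumption of this sketch).
    """
    losses = torch.stack(component_losses)
    return torch.softmax(-losses / temperature, dim=0)

# Example: three components (e.g., text, speech, signal) with different losses.
losses = [torch.tensor(0.3), torch.tensor(1.2), torch.tensor(0.7)]
w = component_weights(losses)
total_loss = (w * torch.stack(losses)).sum()     # weighted combination for backprop
print(w, total_loss)
```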
Full article
Open Access Article
Predicting Urban Traffic Congestion with VANET Data
by Wilson Chango, Pamela Buñay, Juan Erazo, Pedro Aguilar, Jaime Sayago, Angel Flores and Geovanny Silva
Computation 2025, 13(4), 92; https://doi.org/10.3390/computation13040092 - 7 Apr 2025
Abstract
The purpose of this study is to compare neural network-based models for vehicular congestion prediction, with the aim of improving urban mobility and mitigating the negative effects associated with traffic, such as accidents and congestion. This research focuses on evaluating the effectiveness of different neural network architectures, specifically Transformer and LSTM, in order to achieve accurate and reliable predictions of vehicular congestion. To carry out this research, a rigorous methodology was employed that included a systematic literature review based on the PRISMA methodology, which allowed for the identification and synthesis of the most relevant advances in the field. Likewise, the Design Science Research (DSR) methodology was applied to guide the development and validation of the models, and the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology was used to structure the process, from understanding the problem to implementing the solutions. The dataset used in this study included key variables related to traffic, such as vehicle speed, vehicular flow, and weather conditions. These variables were processed and normalized to train and evaluate various neural network architectures, highlighting LSTM and Transformer networks. The results obtained demonstrated that the LSTM-based model outperformed the Transformer model in the task of congestion prediction. Specifically, the LSTM model achieved an accuracy of 0.9463, with additional metrics such as a loss of 0.21, an accuracy of 0.93, a precision of 0.29, a recall of 0.71, an F1-score of 0.42, an MSE of 0.07, and an RMSE of 0.26. In conclusion, this study demonstrates that the LSTM-based model is highly effective for predicting vehicular congestion, surpassing other architectures such as Transformer. The integration of this model into a simulation environment showed that real-time traffic information can significantly improve urban mobility management. These findings support the utility of neural network architectures in sustainable urban planning and intelligent traffic management, opening new perspectives for future research in this field.
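A minimal PyTorch sketch of the LSTM side of such a comparison, mapping windows of traffic features to a congestion label, is given below; the feature set, window length, and layer sizes are illustrative, not those used in the study.

```python
import torch
import torch.nn as nn

class CongestionLSTM(nn.Module):
    """LSTM classifier: a window of traffic features -> congested / not congested."""
    def __init__(self, n_features=3, hidden=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # logit from the last time step

model = CongestionLSTM()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: 32 windows of 12 time steps with (speed, flow, weather index).
x = torch.randn(32, 12, 3)
y = torch.randint(0, 2, (32, 1)).float()

for _ in range(5):                        # a few illustrative training steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item())
```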
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Eye Care: Predicting Eye Diseases Using Deep Learning Based on Retinal Images
by Araek Tashkandi
Computation 2025, 13(4), 91; https://doi.org/10.3390/computation13040091 - 3 Apr 2025
Abstract
Eye illness detection is important, yet it can be difficult and error-prone. In order to effectively and promptly diagnose eye problems, doctors must use cutting-edge technologies. The goal of this research paper is to develop a sophisticated model that will help physicians detect different eye conditions early on. These conditions include age-related macular degeneration (AMD), diabetic retinopathy, cataracts, myopia, and glaucoma. Common eye conditions include cataracts, which cloud the lens and cause blurred vision, and glaucoma, which can cause vision loss due to damage to the optic nerve. The two conditions that could cause blindness if treatment is not received are age-related macular degeneration (AMD) and diabetic retinopathy, a side effect of diabetes that destroys the blood vessels in the retina. Problems such as myopic macular degeneration, glaucoma, and retinal detachment are also more likely to occur in people with high myopia, a severe form of nearsightedness typically defined as a refractive error of –5 diopters or higher. We intend to apply a user-friendly approach that will allow for faster and more efficient examinations. Our research attempts to streamline the eye examination procedure, making it simpler and more accessible than traditional hospital approaches. Our goal is to use deep learning and machine learning to develop an extremely accurate model that can assess medical images, such as eye retinal scans. This was accomplished by using a huge dataset to train the machine learning and deep learning model, as well as sophisticated image processing techniques to assist the algorithm in identifying patterns of various eye illnesses. Following training, we discovered that the CNN, VggNet, MobileNet, and hybrid Deep Learning models outperformed the SVM and Random Forest machine learning models in terms of accuracy, achieving above 98%. Therefore, our model could assist physicians in enhancing patient outcomes, raising survival rates, and creating more effective treatment plans for patients with these illnesses.
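As a hedged sketch of the kind of transfer-learning setup such models typically use, the snippet below attaches a new classification head for five disease categories to a pretrained MobileNetV2 backbone; the input size, frozen backbone, and class count are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

NUM_CLASSES = 5   # e.g., AMD, diabetic retinopathy, cataract, myopia, glaucoma (illustrative)

# Pretrained ImageNet backbone with a new classification head for retinal scans.
model = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                       # freeze the backbone initially
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# Dummy batch standing in for preprocessed 224x224 retinal images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```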
Full article
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
Open Access Article
Oscillation Flow of Viscous Electron Fluids in Conductors of Rectangular Cross-Section
by Andriy A. Avramenko, Igor V. Shevchuk, Nataliia P. Dmitrenko, Andriy I. Tyrinov, Yiliia Y. Kovetska and Andriy S. Kobzar
Computation 2025, 13(4), 90; https://doi.org/10.3390/computation13040090 - 1 Apr 2025
Abstract
The article presents results of analytical and numerical modeling of electron fluid motion and heat generation in a rectangular conductor at an alternating electric potential. The analytical solution is based on the series expansion solution (Fourier method) and double series solution (method of eigenfunction decomposition). The numerical solution is based on the lattice Boltzmann method (LBM). An analytical solution for the electric current was obtained. This enables estimating the heat generation in the conductor and determining the influence of the parameters characterizing the conductor dimensions, the parameter M (phenomenological transport time describing momentum-nonconserving collisions), the Knudsen number (mean free path for momentum-nonconserving collisions) and the Sh number (frequency) on the heat generation rate as an electron flow passes through a conductor.
Full article
Open Access Article
Invariance of Stationary Distributions of Exponential Networks with Prohibitions and Determination of Maximum Prohibitions
by Gurami Tsitsiashvili and Marina Osipova
Computation 2025, 13(4), 89; https://doi.org/10.3390/computation13040089 - 1 Apr 2025
Abstract
The paper considers queuing networks with prohibitions on transitions between network nodes that determine the protocol of their operation. In the graph of transient network intensities, a set of base vertices is allocated (proportional to the number of edges), and we raise the question of whether some subset of it can be deleted such that the stationary distribution of the Markov process describing the functioning of the network is preserved. In order for this condition to be fulfilled, it is sufficient that the set of vertices of the graph of transient intensities, after the removal of a subset of the base vertices, coincide with the set of states of the Markov process and that this graph be connected. It is proved that the ratio of the number of remaining base vertices to their total number n converges to one-half as n → ∞. In this paper, we look for graphs of transient intensities with a minimum (in some sense) set of edges for open and closed service networks.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
MedMAE: A Self-Supervised Backbone for Medical Imaging Tasks
by Anubhav Gupta, Islam Osman, Mohamed S. Shehata, W. John Braun and Rebecca E. Feldman
Computation 2025, 13(4), 88; https://doi.org/10.3390/computation13040088 - 1 Apr 2025
Abstract
Medical imaging tasks are very challenging due to the lack of publicly available labeled datasets. Hence, it is difficult to achieve high performance with existing deep learning models as they require a massive labeled dataset to be trained effectively. An alternative solution is to use pre-trained models and fine-tune them using a medical imaging dataset. However, all existing models are pre-trained using natural images, which represent a different domain from that of medical imaging; this leads to poor performance due to domain shift. To overcome these problems, we propose a pre-trained backbone using a collected medical imaging dataset with a self-supervised learning tool called a masked autoencoder. This backbone can be used as a pre-trained model for any medical imaging task, as it is trained to learn a visual representation of different types of medical images. To evaluate the performance of the proposed backbone, we use four different medical imaging tasks. The results are compared with existing pre-trained models. These experiments show the superiority of our proposed backbone in medical imaging tasks.
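The masked-autoencoder pretraining idea, hiding most image patches and reconstructing them from the visible ones, can be sketched compactly. The toy model below uses an illustrative patch size, masking ratio, and tiny transformer; it conveys the mechanism rather than the MedMAE configuration.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Toy masked autoencoder: embed patches, drop most of them, reconstruct all."""
    def __init__(self, img=64, patch=8, dim=128, mask_ratio=0.75):
        super().__init__()
        self.patch, self.mask_ratio = patch, mask_ratio
        self.n_patches = (img // patch) ** 2
        self.embed = nn.Linear(patch * patch, dim)
        self.pos = nn.Parameter(torch.zeros(1, self.n_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Linear(dim, patch * patch)

    def patchify(self, x):                       # x: (B, 1, H, W) grayscale scans
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)    # (B, 1, H/p, W/p, p, p)
        return x.reshape(x.size(0), self.n_patches, p * p)

    def forward(self, x):
        patches = self.patchify(x)                            # (B, N, p*p)
        tokens = self.embed(patches) + self.pos
        B, N, D = tokens.shape
        keep = int(N * (1 - self.mask_ratio))
        idx = torch.rand(B, N, device=x.device).argsort(dim=1)  # random patch order
        visible = torch.gather(tokens, 1, idx[:, :keep, None].expand(-1, -1, D))
        enc = self.encoder(visible)                           # encode visible patches only
        # Scatter encoded tokens back; masked slots keep a learned mask token.
        full = self.mask_token.expand(B, N, D).clone()
        full.scatter_(1, idx[:, :keep, None].expand(-1, -1, D), enc)
        recon = self.decoder(full + self.pos)
        masked = idx.argsort(dim=1) >= keep                   # True where a patch was hidden
        return ((recon - patches) ** 2)[masked].mean()        # loss on masked patches only

model = TinyMAE()
loss = model(torch.randn(4, 1, 64, 64))                       # dummy batch of scans
loss.backward()
print(loss.item())
```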
Full article
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
Open Access Article
Subsequential Continuity in Neutrosophic Metric Space with Applications
by Vishal Gupta, Nitika Garg and Rahul Shukla
Computation 2025, 13(4), 87; https://doi.org/10.3390/computation13040087 - 25 Mar 2025
Abstract
This paper introduces two concepts, subcompatibility and subsequential continuity, which are, respectively, weaker than the existing concepts of occasionally weak compatibility and reciprocal continuity. These concepts are studied within the framework of neutrosophic metric spaces. Using these ideas, a common fixed point theorem is developed for a system involving four maps. Furthermore, the results are applied to solve the Volterra integral equation, demonstrating the practical use of these findings in neutrosophic metric spaces.
Full article
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
Open Access Article
A Machine Learning-Based Computational Methodology for Predicting Acute Respiratory Infections Using Social Media Data
by Jose Manuel Ramos-Varela, Juan C. Cuevas-Tello and Daniel E. Noyola
Computation 2025, 13(4), 86; https://doi.org/10.3390/computation13040086 - 25 Mar 2025
Abstract
We study the relationship between tweets referencing Acute Respiratory Infections (ARI) or COVID-19 symptoms and confirmed cases of these diseases. Additionally, we propose a computational methodology for selecting and applying Machine Learning (ML) algorithms to predict public health indicators using social media data. To achieve this, a novel pipeline was developed, integrating three distinct models to predict confirmed cases of ARI and COVID-19. The dataset contains tweets related to respiratory diseases, published between 2020 and 2022 in the state of San Luis Potosí, Mexico, obtained via the Twitter API (now X). The methodology is composed of three stages, and it involves tools such as Dataiku and Python with ML libraries. The first two stages focus on identifying the best-performing predictive models, while the third stage includes Natural Language Processing (NLP) algorithms for tweet selection. One of our key findings is that tweets contributed to improved predictions of ARI confirmed cases but did not enhance COVID-19 time series predictions. The best-performing NLP approach is the combination of the Word2Vec algorithm with the KMeans model for tweet selection. Furthermore, predictions for both time series improved by 3% in the second half of 2020 when tweets were included as a feature, with DeepAR being the best-performing prediction algorithm.
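The tweet-selection step, Word2Vec embeddings clustered with KMeans, can be sketched with gensim and scikit-learn; the example tweets, vector size, and number of clusters below are illustrative choices, not the study's settings.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

tweets = [
    "fever and a bad cough today",                 # illustrative ARI-related tweets
    "sore throat and flu symptoms again",
    "big football match downtown tonight",
    "dry cough and headache since yesterday",
    "terrible traffic on the main avenue",
]
tokenized = [t.lower().split() for t in tweets]

# Train a small Word2Vec model and represent each tweet as the mean word vector.
w2v = Word2Vec(sentences=tokenized, vector_size=50, window=3, min_count=1, seed=7)
tweet_vecs = np.array([
    np.mean([w2v.wv[w] for w in toks], axis=0) for toks in tokenized
])

# Cluster tweets; the symptom-related cluster(s) would then feed the
# case-count prediction models as an additional feature.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=7).fit(tweet_vecs)
print(kmeans.labels_)
```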
Full article
(This article belongs to the Special Issue Feature Papers in Computational Biology)
Open Access Article
Numerical Analysis of the Impact of Variable Borer Miner Operating Modes on the Microclimate in Potash Mine Working Areas
by Lev Levin, Mikhail Semin, Stanislav Maltsev, Roman Luzin and Andrey Sukhanov
Computation 2025, 13(4), 85; https://doi.org/10.3390/computation13040085 - 24 Mar 2025
Abstract
This paper addresses the numerical simulation of unsteady, non-isothermal ventilation in a dead-end mine working of a potash mine excavated using a borer miner. During its operations, airflow can become unsteady due to the variable operating modes of the borer miner, the switching on and off of its motor cooling fans, and the movement of a shuttle car transporting ore. While steady ventilation in a dead-end working with a borer miner has been previously studied, the specific features of air microclimate parameter distribution in more complex and realistic unsteady scenarios remain unexplored. Our experimental studies reveal that over time, air velocity and, particularly, air temperature experience significant fluctuations. In this study, we develop and parameterize a mathematical model and perform a series of numerical simulations of unsteady heat and mass transfer in a dead-end working. These simulations account for the switching on and off of the borer miner’s fans and the movement of the shuttle car. The numerical model is calibrated using data from our experiments conducted in a potash mine. The analysis of the first factor is carried out by examining two extreme scenarios under steady-state ventilation conditions, while the second factor is analyzed within a fully unsteady framework using a dynamic mesh approach in the ANSYS Fluent 2021 R2. The numerical results demonstrate that the borer miner’s operating mode notably impacts the velocity and temperature fields, with a twofold decrease in maximum velocity near the cabin after the shuttle car departed and a temperature difference of about 1–1.5 °C between extreme scenarios in the case of forcing ventilation. The unsteady simulations using the dynamic mesh approach revealed that temperature variations were primarily caused by the borer miner’s cooling system, while the moving shuttle car generated short-term aerodynamic oscillations.
Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
Open Access Article
Tree-Based Methods of Volatility Prediction for the S&P 500 Index
by Marin Lolic
Computation 2025, 13(4), 84; https://doi.org/10.3390/computation13040084 - 24 Mar 2025
Abstract
Predicting asset return volatility is one of the central problems in quantitative finance. These predictions are used for portfolio construction, calculation of value at risk (VaR), and pricing of derivatives such as options. Classical methods of volatility prediction utilize historical returns data and include the exponentially weighted moving average (EWMA) and generalized autoregressive conditional heteroskedasticity (GARCH). These approaches have shown significantly higher rates of predictive accuracy than corresponding methods of return forecasting, but they still have vast room for improvement. In this paper, we propose and test several methods of volatility forecasting on the S&P 500 Index using tree ensembles from machine learning, namely random forest and gradient boosting. We show that these methods generally outperform the classical approaches across a variety of metrics on out-of-sample data. Finally, we use the unique properties of tree-based ensembles to assess what data can be particularly useful in predicting asset return volatility.
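A hedged sketch of the general approach, predicting next-period realized volatility from lagged volatility features with a random forest, is shown below using scikit-learn on simulated returns; the features, horizon, and hyperparameters are illustrative, not those of the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Simulated daily returns standing in for S&P 500 data.
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 0.01, 2500))

# Target: realized volatility over the next 21 days; features: lagged realized vols.
realized_vol = returns.rolling(21).std()
features = pd.DataFrame({
    "vol_5d": returns.rolling(5).std(),
    "vol_21d": realized_vol,
    "vol_63d": returns.rolling(63).std(),
    "abs_ret": returns.abs(),
})
target = realized_vol.shift(-21)                 # forward-looking volatility

data = pd.concat([features, target.rename("target")], axis=1).dropna()
split = int(len(data) * 0.8)                     # simple chronological split
train, test = data.iloc[:split], data.iloc[split:]

model = RandomForestRegressor(n_estimators=300, min_samples_leaf=20, random_state=0)
model.fit(train.drop(columns="target"), train["target"])

pred = model.predict(test.drop(columns="target"))
rmse = np.sqrt(np.mean((pred - test["target"].to_numpy()) ** 2))
print("out-of-sample RMSE:", rmse)
print(dict(zip(features.columns, model.feature_importances_)))  # which inputs matter
```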
Full article
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research: 2nd Edition)
Open Access Article
A High-Order Hybrid Approach Integrating Neural Networks and Fast Poisson Solvers for Elliptic Interface Problems
by Yiming Ren and Shan Zhao
Computation 2025, 13(4), 83; https://doi.org/10.3390/computation13040083 - 23 Mar 2025
Cited by 1
Abstract
A new high-order hybrid method integrating neural networks and corrected finite differences is developed for solving elliptic equations with irregular interfaces and discontinuous solutions. Standard fourth-order finite difference discretization becomes invalid near such interfaces due to the discontinuities and requires corrections based on Cartesian derivative jumps. In traditional numerical methods, such as the augmented matched interface and boundary (AMIB) method, these derivative jumps can be reconstructed via additional approximations and are solved together with the unknown solution in an iterative procedure. Nontrivial developments have been carried out in the AMIB method in treating sharply curved interfaces, which, however, may not work for interfaces with geometric singularities. In this work, machine learning techniques are utilized to directly predict these Cartesian derivative jumps without involving the unknown solution. To this end, physics-informed neural networks (PINNs) are trained to satisfy the jump conditions for both closed and open interfaces with possible geometric singularities. The predicted Cartesian derivative jumps can then be integrated in the corrected finite differences. The resulting discrete Laplacian can be efficiently solved by fast Poisson solvers, such as fast Fourier transform (FFT) and geometric multigrid methods, over a rectangular domain with Dirichlet boundary conditions. This hybrid method is both easy to implement and efficient. Numerical experiments in two and three dimensions demonstrate that the method achieves fourth-order accuracy for the solution and its derivatives.
Full article
Topics
Topic in
Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 31 May 2025
Topic in
Axioms, Computation, Entropy, MCA, Mathematics, Symmetry
Numerical Methods for Partial Differential Equations
Topic Editors: Pengzhan Huang, Yinnian He
Deadline: 30 June 2025
Topic in
Applied Sciences, Computation, Entropy, J. Imaging, Optics
Color Image Processing: Models and Methods (CIP: MM)
Topic Editors: Giuliana Ramella, Isabella Torcicollo
Deadline: 30 July 2025
Topic in
Algorithms, Computation, Mathematics, Molecules, Symmetry, Nanomaterials, Materials
Advances in Computational Materials Sciences
Topic Editors: Cuiying Jian, Aleksander Czekanski
Deadline: 30 September 2025

Special Issues
Special Issue in
Computation
Computational Social Science and Complex Systems—2nd Edition
Guest Editors: Minzhang Zheng, Pedro Manrique
Deadline: 30 April 2025
Special Issue in
Computation
Advances in Crash Simulations: Modeling, Analysis, and Applications
Guest Editors: Andrés Amador Garcia-Granada, Hirpa G. Lemu
Deadline: 30 April 2025
Special Issue in
Computation
Mathematical Modeling and Study of Nonlinear Dynamic Processes
Guest Editor: Alexander Pchelintsev
Deadline: 30 April 2025
Special Issue in
Computation
Computational Methods in Structural Engineering
Guest Editors: Manolis Georgioudakis, Vagelis Plevris, Mahdi Kioumarsi
Deadline: 31 May 2025