Journal Description
Computation
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.8 days after submission; acceptance to publication is undertaken in 5.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Mathematics and Its Applications: AppliedMath, Axioms, Computation, Fractal and Fractional, Geometry, International Journal of Topology, Logics, Mathematics and Symmetry.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
A Physics-Informed Neural Network Aided Venturi–Microwave Co-Sensing Method for Three-Phase Metering
Computation 2026, 14(1), 12; https://doi.org/10.3390/computation14010012 - 5 Jan 2026
Abstract
Addressing the challenges of online measurement of oil-gas-water three-phase flow under high gas–liquid ratio (GVF > 90%) conditions (fire-driven mining, gas injection mining, natural gas mining), which relies heavily on radioactive sources, this study proposes an integrated, radiation-source-free three-phase measurement scheme utilizing a “Venturi tube-microwave resonator”. Additionally, a physics-informed neural network (PINN) is introduced to predict the volumetric flow rate of oil-gas-water three-phase flow. Methodologically, the main features are the Venturi differential pressure signal and the microwave resonance amplitude. A PINN model is constructed by embedding an improved L-M model, a cross-sectional water content model, and physical constraint equations into the loss function, thereby maintaining physical consistency and generalization ability under small sample sizes and across different operating conditions. Through experiments on oil-gas-water three-phase flow, the PINN model is compared with an artificial neural network (ANN) and a support vector machine (SVM). The results showed that under high gas–liquid ratio conditions (GVF > 90%), the relative errors (REL) of the PINN in predicting the volumetric flow rates of oil, gas, and water were 0.1865, 0.0397, and 0.0619, respectively, better than those of the ANN and SVM, and the output met physical constraints. The results indicate that under current laboratory and working conditions, the PINN model performs well in predicting the flow rate of oil-gas-water three-phase flow. However, to apply it in the field in the future, experiments over a wider range of working conditions and long-term stability testing should be conducted. This study provides a new technological solution for developing three-phase measurement and machine learning models that are radiation-free, real-time, and engineering-feasible.
Full article
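The core idea the abstract describes, embedding physical constraints into the training loss, can be sketched in a few lines. This is a minimal sketch only: the network shape and the mass-balance constraint (component rates summing to the total rate) are illustrative assumptions, not the paper's actual L-M or water-content models.

```python
# Minimal PyTorch sketch of a physics-informed loss: a data-fit term
# plus a penalty on a physical-constraint residual. The constraint used
# here (oil + gas + water rates sum to the total rate) is an
# illustrative assumption, not the paper's actual constraint equations.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 3))

def pinn_loss(x, q_measured, q_total, lam=1.0):
    """x: (N, 2) features, e.g. Venturi dP and microwave amplitude.
    q_measured: (N, 3) reference oil/gas/water flow rates.
    q_total: (N,) total volumetric flow rate used as a constraint."""
    q_pred = net(x)
    data_term = nn.functional.mse_loss(q_pred, q_measured)
    # Physics residual: component rates should sum to the total rate.
    residual = q_pred.sum(dim=1) - q_total
    physics_term = (residual ** 2).mean()
    return data_term + lam * physics_term

# One illustrative optimisation step on random stand-in data.
x = torch.randn(64, 2)
q = torch.rand(64, 3)
loss = pinn_loss(x, q, q.sum(dim=1))
loss.backward()
```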
Open Access Article
A Hybrid Gradient-Based Optimiser for Solving Complex Engineering Design Problems
by
Jamal Zraqou, Riyad Alrousan, Zaid Khrisat, Faten Hamad, Niveen Halalsheh and Hussam Fakhouri
Computation 2026, 14(1), 11; https://doi.org/10.3390/computation14010011 - 4 Jan 2026
Abstract
This paper proposes JADEGBO, a hybrid gradient-based metaheuristic for solving complex single- and multi-constraint engineering design problems as well as cost-sensitive security optimisation tasks. The method combines Adaptive Differential Evolution with Optional External Archive (JADE), which provides self-adaptive exploration through p-best mutation, an external archive, and success-based parameter learning, with the Gradient-Based Optimiser (GBO), which contributes Newton-inspired gradient search rules and a local escaping operator. In the proposed scheme, JADE is first employed to discover promising regions of the search space, after which GBO performs an intensified local refinement of the best individuals inherited from JADE. The performance of JADEGBO is assessed on the CEC2017 single-objective benchmark suite and compared against a broad set of classical and recent metaheuristics. Statistical indicators, convergence curves, box plots, histograms, sensitivity analyses, and scatter plots show that the hybrid typically attains the best or near-best mean fitness, exhibits low run-to-run variance, and maintains a favourable balance between exploration and exploitation across rotated, shifted, and composite landscapes. To demonstrate practical relevance, JADEGBO is further applied to the following four well-known constrained engineering design problems: welded beam, pressure vessel, speed reducer, and three-bar truss design. The algorithm consistently produces feasible high-quality designs and closely matches or improves upon the best reported results while keeping computation time competitive.
Full article
(This article belongs to the Section Computational Engineering)
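The JADE stage's self-adaptive exploration hinges on the DE/current-to-pbest/1 mutation named in the abstract. A minimal numpy sketch of that mutation follows; the external archive and success-based parameter learning are omitted, and the p and F values are illustrative.

```python
# Sketch of the DE/current-to-pbest/1 mutation used by JADE. Population
# handling, the archive, and parameter self-adaptation are simplified;
# F and p below are illustrative values.
import numpy as np

rng = np.random.default_rng(0)

def current_to_pbest_1(pop, fitness, i, F=0.5, p=0.1):
    """Mutant for individual i: x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2)."""
    n = len(pop)
    k = max(1, int(p * n))
    pbest_idx = rng.choice(np.argsort(fitness)[:k])   # one of the top-p individuals
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    return pop[i] + F * (pop[pbest_idx] - pop[i]) + F * (pop[r1] - pop[r2])

pop = rng.standard_normal((20, 5))
fitness = (pop ** 2).sum(axis=1)                      # sphere function as a stand-in
mutant = current_to_pbest_1(pop, fitness, i=0)
```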
Open Access Article
Do LLMs Speak BPMN? An Evaluation of Their Process Modeling Capabilities Based on Quality Measures
by
Panagiotis Drakopoulos, Panagiotis Malousoudis, Nikolaos Nousias, George Tsakalidis and Kostas Vergidis
Computation 2026, 14(1), 10; https://doi.org/10.3390/computation14010010 - 4 Jan 2026
Abstract
Large Language Models (LLMs) are emerging as powerful tools for automating business process modeling, promising to streamline the translation of textual process descriptions into Business Process Model and Notation (BPMN) diagrams. However, the extent to which these AI systems can produce high-quality BPMN models has not yet been rigorously evaluated. This paper presents an early evaluation of five LLM-powered BPMN generation tools that automatically convert textual process descriptions into BPMN models. To assess the external quality of these AI-generated models, we introduce a novel structured evaluation framework that scores each BPMN diagram across three key process model quality dimensions: clarity, correctness, and completeness, covering both accuracy and diagram understandability. Using this framework, we conducted experiments where each tool was tasked with modeling the same set of textual process scenarios, and the resulting diagrams were systematically scored against the criteria. This approach provides a consistent and repeatable evaluation procedure and offers a new lens for comparing LLM-based modeling capabilities. Given the focused scope of the study, the results should be interpreted as an exploratory benchmark that surfaces initial observations about tool performance rather than definitive conclusions. Our findings reveal that while current LLM-based tools can produce BPMN diagrams that capture the main elements of a process description, they often exhibit errors such as missing steps, inconsistent logic, or modeling rule violations, highlighting limitations in achieving fully correct and complete models. The clarity and readability of the generated diagrams also vary, indicating that these AI models are still maturing in generating easily interpretable process flows. We conclude that although LLMs show promise in automating BPMN modeling, significant improvements are needed for them to consistently generate both syntactically and semantically valid process models.
Full article
(This article belongs to the Special Issue Applied Large Language Models for Science, Engineering, and Mathematics: Reasoning, Reliability and Efficient Systems)
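A structured evaluation framework of this kind ultimately reduces to weighting per-dimension scores into an overall rank. The sketch below assumes a 0–5 scale and equal weights; these are illustrative choices, not the paper's actual rubric.

```python
# Illustrative aggregation of the three quality dimensions the paper
# scores (clarity, correctness, completeness). The 0-5 scale and equal
# weights are assumptions for this sketch, not the paper's rubric.
from dataclasses import dataclass

@dataclass
class BPMNScore:
    clarity: float        # diagram understandability, 0-5
    correctness: float    # syntactic/semantic validity, 0-5
    completeness: float   # coverage of the textual description, 0-5

    def overall(self, weights=(1.0, 1.0, 1.0)) -> float:
        parts = (self.clarity, self.correctness, self.completeness)
        return sum(w * s for w, s in zip(weights, parts)) / sum(weights)

tools = {"tool_a": BPMNScore(4.0, 3.0, 3.5), "tool_b": BPMNScore(3.5, 4.5, 4.0)}
ranking = sorted(tools, key=lambda t: tools[t].overall(), reverse=True)
```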
Open Access Communication
A Consumer Digital Twin for Energy Demand Prediction: Development and Implementation Under the SENDER Project (HORIZON 2020)
by
Dimitra Douvi, Eleni Douvi, Jason Tsahalis and Haralabos-Theodoros Tsahalis
Computation 2026, 14(1), 9; https://doi.org/10.3390/computation14010009 - 3 Jan 2026
Abstract
This paper presents the development and implementation of a consumer Digital Twin (DT) for energy demand prediction under the SENDER (Sustainable Consumer Engagement and Demand Response) project, funded by HORIZON 2020. This project aims to engage consumers in the energy sector with innovative energy service applications to achieve proactive Demand Response (DR) and optimized usage of Renewable Energy Sources (RES). The proposed DT model is designed to digitally represent occupant behaviors and energy consumption patterns using Artificial Neural Networks (ANN), which enable continuous learning by processing real-time and historical data in different pilot sites and seasons. The DT development incorporates the International Energy Agency (IEA)—Energy in Buildings and Communities (EBC) Annex 66 and Drivers-Needs-Actions-Systems (DNAS) framework to standardize occupant behavior modeling. The research methodology consists of the following steps: (i) a mock-up simulation environment for three pilot sites was created, (ii) the DT was trained and calibrated using the artificial data from the previous step, and (iii) the DT model was validated with real data from the Alginet pilot site in Spain. Results showed a strong correlation between DT predictions and mock-up data, with a maximum deviation of ±2%. Finally, a set of selected Key Performance Indicators (KPIs) was defined and categorized in order to evaluate the system’s technical effectiveness.
Full article
(This article belongs to the Special Issue Experiments/Process/System Modeling/Simulation/Optimization (IC-EPSMSO 2025))
Open Access Article
Attention Bidirectional Recurrent Neural Zero-Shot Semantic Classifier for Emotional Footprint Identification
by
Karthikeyan Jagadeesan and Annapurani Kumarappan
Computation 2026, 14(1), 8; https://doi.org/10.3390/computation14010008 - 2 Jan 2026
Abstract
Exploring emotions in organizational settings, particularly in feedback on organizational welfare programs, is critical for understanding employee experiences and enhancing organizational policies. Recognizing emotions from a conversation (i.e., its emotional footprint) is a predominant task for a machine to comprehend the full context of the conversation. While fine-tuning of pre-trained models has invariably provided state-of-the-art results in emotion footprint recognition tasks, the prospect of a zero-shot learned model in this sphere is, on the whole, unexplored. The objective here is to identify the emotional footprint of the members participating in the conversation after the conversation is over, with improved accuracy, reduced time, and a minimal error rate. To address these gaps, this work proposes a method called Attention Bidirectional Recurrent Neural Zero-Shot Semantic Classifier (ABRN-ZSSC) for emotional footprint identification. The ABRN-ZSSC is split into two sections. First, the raw data from a Two-Party Conversation with Emotional Footprint and Emotional Intensity are subjected to the Attention Bidirectional Recurrent Neural Network model to identify the emotional footprint for each party near the conclusion of the conversation. Second, with the identified emotional footprint in a conversation, the Zero-Shot Learning-based classifier is applied to train on and classify emotions both accurately and precisely. We verify the utility of these approaches (i.e., emotional footprint identification and classification) through an extensive experimental evaluation on two corpora along four aspects: training time, accuracy, precision, and error rate for varying sample sizes. Experimental results demonstrate that the ABRN-ZSSC method outperforms two existing baseline models in emotion inference tasks across the datasets, with gains of 10% in precision, 17% in accuracy, and 8% in recall, as well as 19% in training time and 18% in error rate compared to the conventional methods.
Full article
(This article belongs to the Section Computational Social Science)
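Zero-shot classification of this kind typically matches an input representation against embeddings of candidate label descriptions, so unseen emotion labels can be scored without retraining. The sketch below uses a bag-of-words stand-in for the paper's attention BiRNN representation; the vocabulary and label descriptions are illustrative.

```python
# Generic zero-shot classification sketch: embed the input and each
# candidate emotion-label description, then pick the label with the
# highest cosine similarity. The bag-of-words "encoder" is a stand-in
# for the paper's attention BiRNN representation.
import numpy as np

VOCAB = {w: i for i, w in enumerate(
    "happy sad angry calm thank great terrible delay sorry good".split())}

def embed(text):
    v = np.zeros(len(VOCAB))
    for w in text.lower().split():
        if w in VOCAB:
            v[VOCAB[w]] += 1.0
    return v

def zero_shot(text, label_descriptions):
    x = embed(text)
    scores = {}
    for label, desc in label_descriptions.items():
        y = embed(desc)
        denom = np.linalg.norm(x) * np.linalg.norm(y) or 1.0
        scores[label] = float(x @ y) / denom
    return max(scores, key=scores.get)

labels = {"positive": "happy calm thank great good",
          "negative": "sad angry terrible sorry delay"}
print(zero_shot("thank you the program was great", labels))   # -> positive
```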
Open Access Article
Multiphysics Modelling and Experimental Validation of Road Tanker Dynamics: Stress Analysis and Material Characterization
by
Conor Robb, Gasser Abdelal, Pearse McKeefry and Conor Quinn
Computation 2026, 14(1), 7; https://doi.org/10.3390/computation14010007 - 2 Jan 2026
Abstract
Crossland Tankers is a leading manufacturer of bulk-load road tankers in Northern Ireland. These tankers transport up to forty thousand litres of liquid over long distances across diverse road conditions. Liquid sloshing within the tank has a significant impact on driveability and the tanker’s lifespan. This study introduces a novel Multiphysics model combining Smooth Particle Hydrodynamics (SPH) and Finite Element Analysis (FEA) to simulate fluid–structure interactions in a full-scale road tanker, validated with real-world road test data. The model reveals high-stress zones under braking and turning, with peak stresses at critical chassis locations, offering design insights for weight reduction and enhanced safety. Results demonstrate the approach’s effectiveness in optimising tanker design, reducing prototyping costs, and improving longevity, providing a valuable computational tool for industry applications.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Review
Advances in Single-Cell Sequencing for Understanding and Treating Kidney Disease
by
Jose L. Agraz, Amit Verma and Claudia M. Agraz
Computation 2026, 14(1), 6; https://doi.org/10.3390/computation14010006 - 2 Jan 2026
Abstract
The fields of medical diagnostics, nephrology, and the sequencing of cellular genetic material are pivotal for precise quantification of kidney diseases. Single-cell sequencing, enhanced by automation and software tools, enables efficient examination of biopsies at the individual cell level. This approach shows the complex cellular mosaic that shapes organ function. By quantifying gene expression following injury, single-cell analysis provides insight into disease progression. In this review, new developments in single-cell analysis methods, spatial integration of single-cell analysis, single-nucleus RNA sequencing, and emerging methods, including expression quantitative trait loci, whole-genome sequencing, and whole-exome sequencing in nephrology, are discussed. These advancements are poised to enhance kidney disease diagnostic processes, therapeutic strategies, and patient prognosis.
Full article
(This article belongs to the Special Issue Integrative Computational Methods for Second-and Third-Generation Sequencing Data)
Open Access Article
SARIMA vs. Prophet: Comparative Efficacy in Forecasting Traffic Accidents Across Ecuadorian Provinces
by
Wilson Chango, Ana Salguero, Tatiana Landivar, Roberto Vásconez, Geovanny Silva, Pedro Peñafiel-Arcos, Lucía Núñez and Homero Velasteguí-Izurieta
Computation 2026, 14(1), 5; https://doi.org/10.3390/computation14010005 - 31 Dec 2025
Abstract
This study aimed to evaluate the comparative predictive efficacy of the SARIMA statistical model and the Prophet machine learning model for forecasting monthly traffic accidents across the 24 provinces of Ecuador, addressing a critical research gap in model selection for geographically and socioeconomically heterogeneous regions. By integrating classical time series modeling with algorithmic decomposition techniques, the research sought to determine whether a universally superior model exists or if predictive performance is inherently context-dependent. Monthly accident data from January 2013 to June 2025 were analyzed using a rolling-window evaluation framework. Model accuracy was assessed through Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) metrics to ensure consistency and comparability across provinces. The results revealed a global tie, with 12 provinces favoring SARIMA and 12 favoring Prophet, indicating the absence of a single dominant model. However, regional patterns of superiority emerged: Prophet achieved exceptional precision in coastal and urban provinces with stationary and high-volume time series—such as Guayas, which recorded the lowest MAPE (4.91%)—while SARIMA outperformed Prophet in the Andean highlands, particularly in non-stationary, medium-to-high-volume provinces such as Tungurahua (MAPE 6.07%) and Pichincha (MAPE 13.38%). Computational instability in MAPE was noted for provinces with extremely low accident counts (e.g., Galápagos, Carchi), though RMSE values remained low, indicating a metric rather than model limitation. Overall, the findings invalidate the notion of a universally optimal model and underscore the necessity of adopting adaptive, region-specific modeling frameworks that account for local geographic, demographic, and structural factors in predictive road safety analytics.
Full article
(This article belongs to the Topic Intelligent Optimization Algorithm: Theory and Applications)
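The rolling-window evaluation the study applies can be expressed model-agnostically. In the sketch below, fit_forecast is an assumed interface (history in, point forecasts out) and the seasonal-naive baseline is a stand-in for SARIMA or Prophet, not either of the study's models.

```python
# Generic rolling-origin evaluation with MAPE and RMSE, the scheme the
# study uses to compare models per province. `fit_forecast` is an
# assumed interface: it takes the history so far and a horizon and
# returns point forecasts.
import numpy as np

def rolling_eval(series, fit_forecast, initial=60, horizon=1):
    actuals, preds = [], []
    for t in range(initial, len(series) - horizon + 1):
        preds.extend(fit_forecast(series[:t], horizon))
        actuals.extend(series[t:t + horizon])
    a, p = np.asarray(actuals, float), np.asarray(preds, float)
    mape = float(np.mean(np.abs((a - p) / a))) * 100   # unstable when counts are near zero
    rmse = float(np.sqrt(np.mean((a - p) ** 2)))
    return mape, rmse

# Seasonal-naive stand-in model: repeat the value from 12 months back.
naive = lambda hist, h: [hist[-12]] * h
series = np.sin(np.arange(200) * 2 * np.pi / 12) + 5   # synthetic monthly counts
mape, rmse = rolling_eval(series, naive)
```

The division by the actual value in MAPE is why the abstract flags instability for provinces with very low accident counts: near-zero actuals inflate the metric even when absolute errors (RMSE) stay small.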
Open Access Article
Experimental and Numerical Investigation of Hydrodynamic Characteristics of Aquaculture Nets: The Critical Role of Solidity Ratio in Biofouling Assessment
by
Wei Liu, Lei Wang, Yongli Liu, Yuyan Li, Guangrui Qi and Dawen Mao
Computation 2026, 14(1), 4; https://doi.org/10.3390/computation14010004 - 30 Dec 2025
Abstract
Biofouling on aquaculture netting increases hydrodynamic drag and restricts water exchange across net cages. The solidity ratio is introduced as a quantitative parameter to characterize fouling severity. Towing tank experiments and computational fluid dynamics (CFD) simulations were used to assess the hydrodynamic behavior of netting under different fouling conditions. Experimental results indicated a nonlinear increase in drag force with increasing solidity. At a flow velocity of 0.90 m/s, the drag force increased by 112.2%, 195.1%, and 295.7% for netting with solidity ratios of 0.445, 0.733, and 0.787, respectively, compared to clean netting (Sn = 0.211). The drag coefficient remained stable within 1.445–1.573 across Re of 995–2189. Numerical simulations demonstrated the evolution of flow fields around netting, including jet flow formation in mesh openings and reverse flow regions and vortex structures behind knots. Under high solidity (Sn = 0.733–0.787), complex wake patterns such as dual-peak vortex streets appeared. Therefore, this study confirmed that the solidity ratio is an effective comprehensive parameter for evaluating biofouling effects, providing a theoretical basis for antifouling design and cleaning strategy development for aquaculture cages.
Full article
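The reduction from a measured towing-tank force to the drag coefficient follows the standard relation Cd = F / (0.5 ρ A v²). The sketch below applies it with illustrative numbers, not the study's data.

```python
# Standard reduction of a measured towing-tank drag force to a drag
# coefficient, Cd = F / (0.5 * rho * A * v^2). The projected net area
# and force value below are illustrative, not the study's data.
RHO_WATER = 1000.0          # kg/m^3, fresh water

def drag_coefficient(force_n, velocity_ms, area_m2, rho=RHO_WATER):
    return force_n / (0.5 * rho * area_m2 * velocity_ms ** 2)

# Example: 60 N measured on a 0.15 m^2 net panel towed at 0.90 m/s.
cd = drag_coefficient(60.0, 0.90, 0.15)   # ~0.99
```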
Open Access Article
Reaction-Diffusion Model of CAR-T Cell Therapy in Solid Tumours with Antigen Escape
by
Maxim V. Polyakov and Elena I. Tuchina
Computation 2026, 14(1), 3; https://doi.org/10.3390/computation14010003 - 30 Dec 2025
Abstract
Developing effective CAR-T cell therapy for solid tumours remains challenging because of biological barriers such as antigen escape and an immunosuppressive microenvironment. The aim of this study is to develop a mathematical model of the spatio-temporal dynamics of tumour processes in order to assess key factors that limit treatment efficacy. We propose a reaction–diffusion model described by a system of partial differential equations for the densities of tumour cells and CAR-T cells, the concentration of immune inhibitors, and the degree of antigen escape. The methods of investigation include stability analysis and numerical solution of the model using a finite-difference scheme. The simulations show that antigen escape produces a resistant tumour core and relapse after an initial regression; increasing the escape rate raises the final tumour volume from approximately 35.3 a.u. to 36.2 a.u. Parameter mapping further indicates that at low escape rates tumour control can be achieved at moderate killing rates, whereas at higher escape rates comparable control requires substantially larger killing rates. Repeated CAR-T administration improves durability: the residual normalised tumour volume decreases from approximately 4.5 after a single infusion to approximately 0.9 (double) and approximately 0.5 (triple), with a saturating benefit for further intensification. We conclude that the proposed model is a valuable tool for analysing and optimising CAR-T therapy protocols, and that our results highlight the need for combined strategies aimed at overcoming antigen escape.
Full article
(This article belongs to the Section Computational Biology)
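A finite-difference treatment of a single reaction-diffusion equation of this type can be sketched in a few lines. The logistic reaction term and all coefficients below are stand-ins for the paper's coupled multi-species tumour/CAR-T system, not its actual equations.

```python
# Minimal explicit finite-difference update for one reaction-diffusion
# equation, u_t = D * u_xx + f(u). The logistic reaction term and the
# coefficients are illustrative stand-ins for the paper's multi-species
# tumour/CAR-T system.
import numpy as np

def step(u, D=0.1, dx=0.1, dt=0.01, r=0.5):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic boundary
    return u + dt * (D * lap + r * u * (1 - u))             # logistic growth

u = np.exp(-np.linspace(-5, 5, 101) ** 2)  # initial density bump
for _ in range(1000):
    u = step(u)
```

The explicit scheme is stable here because dt * D / dx² = 0.1 stays below the usual 0.5 bound; implicit schemes relax that restriction at the cost of a linear solve per step.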
Open Access Article
An Interpretable Artificial Intelligence Approach for Reliability and Regulation-Aware Decision Support in Power Systems
by
Diego Armando Pérez-Rosero, Santiago Pineda-Quintero, Juan Carlos Álvarez-Barreto, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Computation 2026, 14(1), 2; https://doi.org/10.3390/computation14010002 - 21 Dec 2025
Abstract
Modern medium-voltage (MV) distribution networks face increasing reliability challenges driven by aging assets, climate variability, and evolving operational demands. In Colombia and across Latin America, reliability metrics, such as the System Average Interruption Frequency Index (SAIFI), standardized under IEEE 1366, serve as key indicators for regulatory compliance and service quality. However, existing analytical approaches struggle to jointly deliver predictive accuracy, interpretability, and traceability required for regulated environments. Here, we introduce CRITAIR (Criticality Analysis through Interpretable Artificial Intelligence-based Recommendations), an integrated framework that combines predictive modeling, explainable analytics, and regulation-aware reasoning to enhance reliability management in MV networks. CRITAIR unifies three components: (i) a TabNet-based predictive module that estimates SAIFI using outage, asset, and meteorological data while producing global and local attributions; (ii) an agentic retrieval-and-reasoning stage that grounds recommendations in regulatory evidence from RETIE and NTC 2050; and (iii) interpretable reasoning graphs that map decision pathways. Evaluations conducted on real operational data demonstrate that CRITAIR achieves competitive predictive performance—comparable to Random Forest and XGBoost—while maintaining transparency through sparse attention and sequential feature explainability. Also, our regulation-aware reasoning module exhibits coherent and verifiable recommendations, achieving high semantic alignment scores (BERTScore) and expert-rated interpretability. Overall, CRITAIR bridges the gap between predictive analytics and regulatory governance, offering a transparent, auditable, and deployment-ready solution for digital transformation in electric distribution systems.
Full article
(This article belongs to the Special Issue Smart Analytics for Future Energy Systems)
Open Access Article
Enhanced Chimp Algorithm and Its Application in Optimizing Real-World Data and Engineering Design Problems
by
Hussam N. Fakhouri, Riyad Alrousan, Hasan Rashaideh, Faten Hamad and Zaid Khrisat
Computation 2026, 14(1), 1; https://doi.org/10.3390/computation14010001 - 20 Dec 2025
Abstract
This work proposes an Enhanced Chimp Optimization Algorithm (EChOA) for solving continuous and constrained data science and engineering optimization problems. The EChOA integrates a self-adaptive DE/current-to-pbest/1 (with jDE-style parameter control) variation stage with the canonical four-leader ChOA guidance and augments the search with three lightweight modules: (i) Lévy flight refinement around the incumbent best, (ii) periodic elite opposition-based learning, and (iii) stagnation-aware partial restarts. The EChOA is compared with more than 35 optimizers on the CEC2022 single-objective suite (12 functions). The results show that the EChOA attains state-of-the-art results at both tested dimensions. At the lower dimension, it ranks first on all functions, with the lowest mean objective and the smallest dispersion relative to the strongest competitor (OMA). At the higher dimension, the EChOA retains the best overall rank and achieves top scores on most functions, indicating stable scalability with problem dimension. Pairwise Wilcoxon signed-rank tests against the full competitor set corroborate statistical superiority on the majority of functions at both dimensions, aligning with the aggregate rank outcomes. Population size studies indicate that larger populations primarily enhance reliability and time to improvement while yielding similar terminal accuracy under a fixed iteration budget. Four constrained engineering case studies (including welded beam, helical spring, pressure vessel, and cantilever stepped beam) further confirm practical effectiveness, with consistently low cost/weight/volume and tight dispersion.
Full article
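Lévy flight refinement steps are conventionally drawn with Mantegna's algorithm. The sketch below assumes that choice; beta and the step scale are illustrative, not the paper's settings.

```python
# Levy-flight step via Mantegna's algorithm, the usual way such
# refinement moves are generated; beta and the step scale below are
# illustrative choices.
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)

def levy_step(dim, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed step lengths

best = np.zeros(5)
candidate = best + 0.01 * levy_step(5)   # refinement around the incumbent best
```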
Open Access Article
A Generalizable Agentic AI Pipeline for Developing Chatbots Using Small Language Models: A Case Study on Thai Student Loan Fund Services
by
Jakkaphong Inpun, Watcharaporn Cholamjiak, Piyada Phrueksawatnon and Kanokwatt Shiangjen
Computation 2025, 13(12), 297; https://doi.org/10.3390/computation13120297 - 18 Dec 2025
Abstract
The rising deployment of artificial intelligence in public services is constrained by computational costs and limited domain-specific data, particularly in multilingual contexts. This study proposes a generalizable Agentic AI pipeline for developing question–answer chatbot systems using small language models (SLMs), demonstrated through a case study on the Thai Student Loan Fund (TSLF). The pipeline integrates four stages: OCR-based document digitization using Typhoon2-3B, agentic question–answer dataset construction via a clean–check–plan–generate (CCPG) workflow, parameter-efficient fine-tuning with QLoRA on Typhoon2-1B and Typhoon2-3B models, and retrieval-augmented generation (RAG) for source-grounded responses. Evaluation using BERTScore and CondBERT confirmed high semantic consistency (FBERT = 0.9807) and stylistic reliability (FBERT = 0.9839) of the generated QA corpus. Fine-tuning improved the 1B model’s domain alignment (FBERT: 0.8593 → 0.8641), while RAG integration further enhanced factual grounding (FBERT = 0.8707) and citation transparency. Cross-validation with GPT-5 and Gemini 2.5 Pro demonstrated dataset transferability and reliability. The results establish that Agentic AI combined with SLMs offers a cost-effective, interpretable, and scalable framework for automating bilingual advisory services in resource-constrained government and educational institutions.
Full article
(This article belongs to the Special Issue Generative AI in Action: Trends, Applications, and Implications)
Open Access Article
Ship Model Identification Using Interpretable 4-DOF Maneuverability Models for River Combat Boat
by
Juan Contreras Montes, Aldo Lovo Ayala, Daniela Ospino-Balcázar, Kevin Velasquez Gutierrez, Carlos Soto Montaño, Roosvel Soto-Diaz, Javier Jiménez-Cabas, José Oñate López and José Escorcia-Gutierrez
Computation 2025, 13(12), 296; https://doi.org/10.3390/computation13120296 - 18 Dec 2025
Abstract
Ship maneuverability models are typically defined by three degrees of freedom: surge, sway, and yaw. However, patrol vessels operating in riverine environments often exhibit significant roll motion during course changes, necessitating the inclusion of this dynamic. This study develops interpretable machine learning models capable of predicting vessel behavior in four degrees of freedom (4-DoF): surge, sway, yaw, and roll. A dataset of 125 h of simulated maneuvers was employed, including 29 h of out-of-distribution (OOD) conditions to test model generalization. Four models were implemented and compared over a 15-step prediction horizon: linear regression, third-order polynomial regression, a state-space model obtained via the N4SID algorithm, and an AutoRegressive model with eXogenous inputs (ARX). Results demonstrate that all models captured the essential vessel dynamics, with the state-space model achieving the best overall performance (e.g., NMSE = 0.0246 for surge velocity on test data and 0.0499 under OOD conditions). Variable-wise, surge and sway showed the lowest errors, roll rate remained stable, and yaw rate was the most sensitive to distribution shifts. Model-wise, the ARX model achieved the lowest NMSE for surge prediction (0.0149), while regression-based models provided interpretable yet less accurate alternatives. Multi-horizon evaluation (1-, 5-, 15-, and 30-step) under OOD conditions confirmed a consistent monotonic degradation across models. These findings validate the feasibility of using interpretable machine learning models for predictive control, autonomous navigation, and combat scenario simulation in riverine operations.
Full article
(This article belongs to the Section Computational Engineering)
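The NMSE figures quoted above normalise mean squared error so that 0 is perfect and 1 matches simply predicting the mean. The sketch below assumes normalisation by the target variance, one common definition; the paper may use a variant.

```python
# NMSE as quoted in the abstract: mean squared error normalised so that
# 0 is perfect and 1 matches predicting the mean. Normalising by the
# target variance is an assumption about the exact definition used.
import numpy as np

def nmse(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))
```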
Open Access Article
Shared Nodes of Overlapping Communities in Complex Networks
by
Vesa Kuikka, Kosti Koistinen and Kimmo K. Kaski
Computation 2025, 13(12), 295; https://doi.org/10.3390/computation13120295 - 17 Dec 2025
Abstract
Overlapping communities are a key characteristic in the analysis of the structure and function of complex networks. Shared or overlapping nodes within overlapping communities can either form subcommunities or act as intersections between larger communities. Nodes at the intersections that do not form subcommunities can be identified as overlapping nodes or as part of an internal structure of nested communities. To identify overlapping nodes, we apply a threshold rule based on the number of nodes in the nested structure. As the threshold value increases, the number of selected overlapping nodes decreases. This approach allows us to analyse the roles of nodes considered overlapping according to selection criteria, for example, to reduce the effect of noise. We illustrate our method by using three small and two larger real-world network structures. In larger networks, minor disturbances can produce a multitude of slightly different solutions, but the core communities remain robust, allowing other variations to be treated as noise. While this study employs our own method for community detection, other approaches can also be applied. Exploring the properties of shared nodes in overlapping communities of complex networks is a novel area of research with diverse applications in social network analysis, cybersecurity, and other fields in network science.
Full article
(This article belongs to the Special Issue Computational Social Science and Complex Systems—2nd Edition)
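One reading of the threshold rule is that shared node sets are reported as overlapping only when they reach the threshold size, so raising the threshold selects fewer nodes, consistent with the abstract. The pairwise-intersection criterion in the sketch is an assumption for illustration, not the authors' exact definition over nested structures.

```python
# Illustrative reading of the threshold rule: report shared nodes only
# when the shared structure reaches the threshold size, so that raising
# the threshold selects fewer overlapping nodes, as the abstract notes.
# The pairwise-intersection criterion is an assumption for this sketch.
def overlapping_nodes(communities, threshold):
    """communities: list of sets of node ids."""
    result = set()
    for i, a in enumerate(communities):
        for b in communities[i + 1:]:
            shared = a & b
            if len(shared) >= threshold:
                result |= shared
    return result

comms = [{1, 2, 3, 4}, {4, 5, 6}, {6, 7, 8, 9}]
print(overlapping_nodes(comms, threshold=1))   # {4, 6}; threshold=2 -> set()
```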
Open Access Article
A Fast Distributed Algorithm for Uniform Price Auction with Bidding Information Protection
by
John Sum, Chi-Sing Leung and Janet C. C. Chang
Computation 2025, 13(12), 294; https://doi.org/10.3390/computation13120294 - 17 Dec 2025
Abstract
In this paper, a fast distributed algorithm is proposed for solving the winner and price determination problems in a uniform price auction in which each bidder bids for multiple units out of a lot of k identical items with a per-unit price. In a conventional setting, all bidders disclose their bidding information to an auctioneer and let the auctioneer allocate the items and determine the uniform price, i.e., the least winning price. In our setting, bidders do not need to disclose their bidding information to the auctioneer. The bidders and the auctioneer collaboratively run the distributed algorithm to determine, in a small number of steps, the allocated units and the uniform price. The number of steps is independent of the number of bidders. At the end of the computing process, each bidder knows only the units allocated to him/her and the uniform price. The auctioneer knows only the units allocated to the bidders and the uniform price. Therefore, neither the bidders nor the auctioneer can learn the per-unit bidding prices of the bidders beyond the uniform price. Moreover, the auctioneer cannot learn the bidding units of the losing bidders. Bidders’ per-unit bidding prices and the bidding units of the losing bidders are protected; bidding information privacy is preserved.
Full article
(This article belongs to the Section Computational Social Science)
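For reference, the outcome the distributed protocol computes without disclosure, awarding k identical units to the highest per-unit bids and setting the uniform price to the least winning per-unit price, can be written centrally in a few lines. The bid data below are illustrative.

```python
# Centralised reference computation for the outcome the distributed
# protocol reproduces without disclosure: award k identical units to
# the highest per-unit bids and set the uniform price to the least
# winning per-unit price.
def uniform_price_auction(bids, k):
    """bids: list of (bidder, units, per_unit_price) tuples."""
    ranked = sorted(bids, key=lambda b: b[2], reverse=True)
    allocation, remaining, price = {}, k, None
    for bidder, units, p in ranked:
        if remaining == 0:
            break
        won = min(units, remaining)
        allocation[bidder] = allocation.get(bidder, 0) + won
        remaining -= won
        price = p                      # last (lowest) winning per-unit price
    return allocation, price

alloc, price = uniform_price_auction(
    [("a", 3, 10.0), ("b", 2, 9.0), ("c", 4, 8.0)], k=6)
# alloc == {'a': 3, 'b': 2, 'c': 1}, price == 8.0
```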
Open Access Article
DPCA-GCN: Dual-Path Cross-Attention Graph Convolutional Networks for Skeleton-Based Action Recognition
by
Khadija Lasri, Khalid El Fazazy, Adnane Mohamed Mahraz, Hamid Tairi and Jamal Riffi
Computation 2025, 13(12), 293; https://doi.org/10.3390/computation13120293 - 15 Dec 2025
Abstract
Skeleton-based action recognition has achieved remarkable advances with graph convolutional networks (GCNs). However, most existing models process spatial and temporal information within a single coupled stream, which often obscures the distinct patterns of joint configuration and motion dynamics. This paper introduces the Dual-Path Cross-Attention Graph Convolutional Network (DPCA-GCN), an architecture that explicitly separates spatial and temporal modeling into two specialized pathways while maintaining rich bidirectional interaction between them. The spatial branch integrates graph convolution and spatial transformers to capture intra-frame joint relationships, whereas the temporal branch combines temporal convolution and temporal transformers to model inter-frame dependencies. A bidirectional cross-attention mechanism facilitates explicit information exchange between both paths, and an adaptive gating module balances their respective contributions according to the action context. Unlike traditional approaches that process spatial–temporal information sequentially, our dual-path design enables specialized processing while maintaining cross-modal coherence through memory-efficient chunked attention mechanisms. Extensive experiments on the NTU RGB+D 60 and NTU RGB+D 120 datasets demonstrate that DPCA-GCN achieves competitive joint-only accuracies of 88.72%/94.31% and 82.85%/83.65%, respectively, with exceptional top-5 scores of 96.97%/99.14% and 95.59%/95.96%, while maintaining significantly lower computational complexity compared to multi-modal approaches.
Full article
(This article belongs to the Section Computational Engineering)
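Bidirectional cross-attention between two token streams can be sketched with torch.nn.MultiheadAttention. The dimensions, head count, and scalar gate below are illustrative stand-ins, not the DPCA-GCN architecture or its adaptive gating module.

```python
# Sketch of bidirectional cross-attention between a spatial and a
# temporal token stream; dimensions, head count, and the gating scalar
# are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

d = 64
s2t = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
t2s = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
gate = nn.Parameter(torch.tensor(0.5))

spatial = torch.randn(2, 25, d)    # (batch, joints, dim)
temporal = torch.randn(2, 32, d)   # (batch, frames, dim)

spatial_upd, _ = t2s(spatial, temporal, temporal)    # temporal -> spatial
temporal_upd, _ = s2t(temporal, spatial, spatial)    # spatial -> temporal
spatial = gate * spatial + (1 - gate) * spatial_upd  # adaptive-gate-style mix
```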
Open Access Article
mDA: Evolutionary Machine Learning Algorithm for Feature Selection in Medical Domain
by
Ibrahim Aljarah, Abdullah Alzaqebah, Nailah Al-Madi, Ala’ M. Al-Zoubi and Amro Saleh
Computation 2025, 13(12), 292; https://doi.org/10.3390/computation13120292 - 13 Dec 2025
Abstract
The rapid expansion of medical data, characterized by its complex high-dimensional attributes, presents numerous promising opportunities and substantial challenges in healthcare analytics. Adopting effective feature selection techniques is essential to take advantage of the potential of such data. This research presents a modified algorithm, mDA, a hybrid of Evolutionary Population Dynamics and the Dragonfly Algorithm. The method combines the strength of Evolutionary Population Dynamics with the flexible capabilities of the Dragonfly Algorithm, offering a robust evolutionary machine learning approach specifically designed for medical data analysis. By integrating the dynamic population modeling of Evolutionary Population Dynamics with the adaptive search techniques of the Dragonfly Algorithm, the proposed mDA significantly improves accuracy, reduces the number of selected features, and obtains the minimum average fitness scores. Comparative experiments conducted on seven diverse medical datasets against other established algorithms confirm the superior performance of the proposed mDA, establishing it as a valuable approach for examining complex medical data.
Full article
(This article belongs to the Topic Intelligent Optimization Algorithm: Theory and Applications)
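Wrapper-style evolutionary feature selection commonly scores a candidate feature mask by a weighted sum of classification error and the selected-feature ratio, which is one way accuracy, feature count, and fitness can improve together. The weights and the stand-in centroid classifier below are conventional illustrative choices, not the paper's exact fitness or learner.

```python
# Conventional wrapper fitness for evolutionary feature selection: a
# weighted sum of classification error and selected-feature ratio. The
# weights and the centroid classifier are illustrative choices.
import numpy as np

def fitness(mask, X, y, classify_error, alpha=0.99, beta=0.01):
    """mask: boolean vector over features."""
    if not mask.any():
        return 1.0                                  # empty selection: worst case
    err = classify_error(X[:, mask], y)
    return alpha * err + beta * mask.sum() / mask.size

def centroid_error(Xs, y):
    """Stand-in learner: nearest class centroid, returns error rate."""
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float((pred != y.astype(bool)).mean())

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 8))
y = (X[:, 0] > 0).astype(int)
mask = np.array([True] * 4 + [False] * 4)
print(fitness(mask, X, y, centroid_error))
```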
Open Access Article
Top-Down Optimization of a Multi-Physics TBS Model via Design-Change Propagation Network for Acoustic Levitation Devices
by
Yuchao Liu, Yi Gan, Fujia Sun and Yuping Long
Computation 2025, 13(12), 291; https://doi.org/10.3390/computation13120291 - 10 Dec 2025
Abstract
To address the challenges of interdependent design parameters and reliance on empirical trial-and-error in ultrasonic cell levitation culture devices, this study proposes a top-down design framework integrating multi-physics modeling with complex network analysis. First, acoustic field simulations optimize transducer arrangement and define the cell manipulation field, establishing the Top-level Basic Structure (TBS). A skeleton model of the acoustofluidic coupled field is constructed based on the TBS. Core parameters are then determined by refining the TBS through multi-physics analysis. Second, a 24-node design change propagation network is constructed. Leveraging the TBS model coupled with multi-physics fields, a directed network model analyzes parameter interactions. The HITS algorithm is applied to prioritize the design sequence based on authority and hub scores, resolving parameter conflicts. Experimental validation demonstrates a device acoustic pressure of 1.3 × 10⁴ Pa, stable cell levitation within the focused acoustic field, and a 40% reduction in design cycle time compared to traditional methods. This framework systematically sequences parameters, effectively determines the design order, enhances design efficiency, and significantly reduces dependence on empirical trial-and-error. It provides a novel approach for developing high-throughput organoid culture equipment.
Full article
(This article belongs to the Section Computational Engineering)
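The authority and hub scores used to sequence parameters come from the standard HITS power iteration over the directed propagation network. The 4-node adjacency matrix below is a toy example, not the paper's 24-node network.

```python
# Standard HITS power iteration over a directed dependency graph,
# producing the authority and hub scores used to order design
# parameters. The 4-node adjacency matrix is a toy example.
import numpy as np

def hits(A, iters=100):
    """A[i, j] = 1 if a change in node i propagates to node j."""
    n = A.shape[0]
    hub = np.ones(n)
    for _ in range(iters):
        auth = A.T @ hub
        auth /= np.linalg.norm(auth)
        hub = A @ auth
        hub /= np.linalg.norm(hub)
    return auth, hub

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], float)
authority, hubs = hits(A)
design_order = np.argsort(-authority)    # most-influenced parameters first
```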
Open Access Article
Objective over Architecture: Fraud Detection Under Extreme Imbalance in Bank Account Opening
by
Wenxi Sun, Qiannan Shen, Yijun Gao, Qinkai Mao, Tongsong Qi and Shuo Xu
Computation 2025, 13(12), 290; https://doi.org/10.3390/computation13120290 - 9 Dec 2025
Abstract
Fraud in financial services—especially account opening fraud—poses major operational and reputational risks. Static rules struggle to adapt to evolving tactics, missing novel patterns and generating excessive false positives. Machine learning promises adaptive detection, but deployment faces severe class imbalance: in the NeurIPS 2022 BAF Base benchmark used here, fraud prevalence is 1.10%. Standard metrics (accuracy, f1_weighted) can look strong while doing little for the minority class. We compare Logistic Regression, SVM (RBF), Random Forest, LightGBM, and a GRU model on N = 1,000,000 accounts under a unified preprocessing pipeline. All models are trained to minimize their loss function, while configurations are selected on a stratified development set using the validation-weighted F1-score (f1_weighted). For the four classical models, class weighting in the loss (class_weight) is treated as a hyperparameter and tuned. Similarly, the GRU is trained with a fixed class-weighted CrossEntropy loss that up-weights fraud cases. This ensures that both model families leverage weighted training objectives, while their final hyperparameters are consistently selected by the f1_weighted metric. Despite similar AUCs and aligned feature importance across families, the classical models converge to high-precision, low-recall solutions (1–6% fraud recall), whereas the GRU recovers 78% recall at 5% precision at a comparable AUC. Under extreme imbalance, objective choice and operating point matter at least as much as architecture.
Full article
(This article belongs to the Special Issue Applications of Machine Learning and Data Science Methods in Social Sciences)
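The class-weighted CrossEntropy objective the abstract describes for the GRU up-weights the rare fraud class so its errors dominate the gradient. The inverse-prevalence weighting below is an illustrative choice, not the paper's tuned values.

```python
# Class-weighted CrossEntropy of the kind the abstract describes for
# the GRU: the rare fraud class is up-weighted in the training
# objective. The inverse-prevalence weights are an illustrative choice.
import torch
import torch.nn as nn

prevalence = 0.011                         # fraud rate in the BAF Base benchmark
weights = torch.tensor([1.0, (1 - prevalence) / prevalence])  # ~[1, 90]
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2, requires_grad=True)
labels = torch.tensor([0, 0, 0, 0, 0, 0, 0, 1])
loss = criterion(logits, labels)           # fraud errors dominate the gradient
loss.backward()
```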
Topics
Topic in
AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran
Deadline: 28 February 2026
Topic in
Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 30 May 2026
Topic in
Brain Sciences, NeuroSci, Applied Sciences, Mathematics, Computation
The Computational Brain
Topic Editors: William Winlow, Andrew Johnson
Deadline: 31 July 2026
Topic in
Sustainability, Remote Sensing, Forests, Applied Sciences, Computation
Artificial Intelligence, Remote Sensing and Digital Twin Driving Innovation in Sustainable Natural Resources and Ecology
Topic Editors: Huaiqing Zhang, Ting Yun
Deadline: 31 January 2027
Special Issues
Special Issue in
Computation
Computational Social Science and Complex Systems—2nd Edition
Guest Editors: Minzhang Zheng, Pedro Manrique
Deadline: 31 January 2026
Special Issue in
Computation
Evolutionary Computation for Smart Grid and Energy Systems
Guest Editors: Jesús María López-Lezama, Oscar Danilo Montoya
Deadline: 31 January 2026
Special Issue in
Computation
Integrated Computer Technologies in Mechanical Engineering—Synergetic Engineering IV
Guest Editors: Oleksii Lytvynov, Volodymyr Pavlikov, Dmytro Krytskyi
Deadline: 31 January 2026
Special Issue in
Computation
Applications of Machine Learning and Data Science Methods in Social Sciences
Guest Editors: Zhiyong Zhang, Xin (Cynthia) Tong, Jiashang Tang
Deadline: 1 February 2026