Computation, Volume 13, Issue 7 (July 2025) – 24 articles

Cover Story: This paper presents a novel topology optimization method that combines a parameterized level set function with genetic algorithms. B-spline interpolation reduces the number of design variables, enabling a global search that avoids the local minima common with traditional gradient-based methods. Implemented in MATLAB, the approach employs a penalty operator to improve convergence speed. It not only finds the global optimum effectively but can also generate good initial solutions for faster local methods. Tested on 2D structures, this robust technique offers a promising alternative for complex topology design.
  • Papers are published in both HTML and PDF forms; the PDF is the official version of record. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 7123 KiB  
Article
Ionic and Electrotonic Contributions to Short-Term Ventricular Action Potential Memory: An In Silico Study
by Massimiliano Zaniboni
Computation 2025, 13(7), 175; https://doi.org/10.3390/computation13070175 - 20 Jul 2025
Viewed by 246
Abstract
Electrical restitution (ER) is a determinant of cardiac repolarization stability and can be measured as steady action potential (AP) duration (APD) at different pacing rates—the so-called dynamic restitution (ERdyn) curve—or as APD changes after pre- or post-mature stimulations—the so-called standard restitution (ERs1s2) curve. Short-term AP memory (Ms) has been described as the slope difference between the ERdyn and ERs1s2 curves, and represents the information stored in repolarization dynamics due to previous pacing conditions. Although previous studies have shown its dependence on ion currents and calcium cycling, a systematic picture of these features is lacking. By means of simulations with a human ventricular AP model, I show that APD restitution can be described under randomly changing pacing conditions (ERrand) and Ms derived as the slope difference between ERdyn and ERrand. Thus measured, Ms values correlate with those measured using ERs1s2. I investigate the effect on Ms of modulating the conductance of ion channels involved in AP repolarization, and of abolishing intracellular calcium transient. I show that Ms is chiefly determined by ERdyn rather than ERrand, and that interventions that shorten/prolong APD tend to decrease/increase Ms. Full article
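As a rough illustration of the slope-difference idea in the abstract above (not the author's ventricular AP model), the sketch below estimates short-term memory Ms by fitting straight lines to two restitution curves; the curve data and slope values are entirely synthetic.

```python
import numpy as np

def restitution_slope(di, apd):
    """Least-squares slope of an APD-restitution curve:
    APD (ms) as a function of diastolic interval DI (ms)."""
    slope, _ = np.polyfit(di, apd, 1)
    return slope

def short_term_memory(di_dyn, apd_dyn, di_rand, apd_rand):
    """Ms estimated as the slope difference between the dynamic (ERdyn)
    and randomly paced (ERrand) restitution curves."""
    return restitution_slope(di_dyn, apd_dyn) - restitution_slope(di_rand, apd_rand)

# Synthetic illustration: a steeper dynamic curve than random-pacing curve
di = np.linspace(50, 400, 30)
apd_dyn = 180 + 0.30 * di      # ERdyn slope 0.30 (hypothetical)
apd_rand = 200 + 0.12 * di     # ERrand slope 0.12 (hypothetical)
ms = short_term_memory(di, apd_dyn, di, apd_rand)
print(round(ms, 2))            # 0.18
```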
28 pages, 8982 KiB  
Article
Decision-Level Multi-Sensor Fusion to Improve Limitations of Single-Camera-Based CNN Classification in Precision Farming: Application in Weed Detection
by Md. Nazmuzzaman Khan, Adibuzzaman Rahi, Mohammad Al Hasan and Sohel Anwar
Computation 2025, 13(7), 174; https://doi.org/10.3390/computation13070174 - 18 Jul 2025
Viewed by 265
Abstract
The United States leads the world in corn production and consumption, with an estimated value of USD 50 billion per year. There is a pressing need for novel and efficient techniques for identifying and eradicating weeds in a manner that is both environmentally sustainable and economically advantageous. Weed classification for autonomous agricultural robots is a challenging task for a single-camera-based system due to noise, vibration, and occlusion. To address this issue, we present a multi-camera-based system with decision-level sensor fusion that improves on the limitations of a single-camera-based system. This study utilizes a convolutional neural network (CNN) pre-trained on the ImageNet dataset, which was subsequently re-trained on a limited weed dataset to classify three weed species frequently encountered in corn fields: Xanthium strumarium (Common Cocklebur), Amaranthus retroflexus (Redroot Pigweed), and Ambrosia trifida (Giant Ragweed). The test results showed that the re-trained VGG16 with a transfer-learning-based classifier achieved acceptable accuracy (99% training, 97% validation, 94% testing) and an inference time suitable for real-time weed classification from a video feed. However, the accuracy of CNN-based classification from a single camera's video feed was found to deteriorate due to noise, vibration, and partial occlusion of weeds, and is not always sufficient for the spray system of an agricultural robot (AgBot). To improve classification accuracy and overcome the shortcomings of single-sensor CNN classification, an improved Dempster–Shafer (DS)-based decision-level multi-sensor fusion algorithm was developed and implemented.
The proposed algorithm improves on CNN-based weed classification when the weed is partially occluded. It can also detect a faulty sensor within an array of sensors and improves overall classification accuracy by penalizing the evidence from that sensor. Overall, the proposed fusion algorithm showed robust results in challenging scenarios, overcoming the limitations of a single-sensor-based system. Full article
(This article belongs to the Special Issue Moving Object Detection Using Computational Methods and Modeling)
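The decision-level fusion step described above rests on Dempster's rule of combination. A minimal, self-contained sketch of the classical rule (not the authors' improved algorithm; the mass values and the frame of weed classes are illustrative) might look like:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over a common frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                              # compatible evidence
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:                                  # conflicting evidence
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    # Normalize by the non-conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

FRAME = frozenset({"cocklebur", "pigweed", "ragweed"})
cam1 = {frozenset({"pigweed"}): 0.7, FRAME: 0.3}   # confident camera
cam2 = {frozenset({"pigweed"}): 0.5,               # partially occluded view
        frozenset({"ragweed"}): 0.2, FRAME: 0.3}
fused = dempster_combine(cam1, cam2)
print(max(fused, key=fused.get))                   # frozenset({'pigweed'})
```

Assigning some mass to the whole frame (FRAME) expresses each camera's residual ignorance, which is what lets a second sensor reinforce or override an occluded view.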
22 pages, 524 KiB  
Review
Strategic Decision-Making in SMEs: A Review of Heuristics and Machine Learning for Multi-Objective Optimization
by Gines Molina-Abril, Laura Calvet, Angel A. Juan and Daniel Riera
Computation 2025, 13(7), 173; https://doi.org/10.3390/computation13070173 - 18 Jul 2025
Viewed by 387
Abstract
Small- and medium-sized enterprises (SMEs) face dynamic and competitive environments where resilience and data-driven decision-making are critical. Despite the potential benefits of artificial intelligence (AI), machine learning (ML), and optimization techniques, SMEs often struggle to adopt these tools due to high costs, limited training, and restricted hardware access. This study reviews how SMEs can employ heuristics, metaheuristics, ML, and hybrid approaches to support strategic decisions under uncertainty and resource constraints. Using bibliometric mapping with UMAP and BERTopic, 82 key works are identified and clustered into 11 thematic areas. From this, the study develops a practical framework for implementing and evaluating optimization strategies tailored to SMEs’ limitations. The results highlight critical application areas, adoption barriers, and success factors, showing that heuristics and hybrid methods are especially effective for multi-objective optimization with lower computational demands. The study also outlines research gaps and proposes future directions to foster digital transformation in SMEs. Unlike prior reviews focused on specific industries or methods, this work offers a cross-sectoral perspective, emphasizing how these technologies can strengthen SME resilience and strategic planning. Full article
(This article belongs to the Section Computational Social Science)
33 pages, 3547 KiB  
Article
Mapping the Intellectual Structure of Computational Risk Analytics in Banking and Finance: A Bibliometric and Thematic Evolution Study
by Sotirios J. Trigkas, Kanellos Toudas and Ioannis Chasiotis
Computation 2025, 13(7), 172; https://doi.org/10.3390/computation13070172 - 17 Jul 2025
Viewed by 348
Abstract
Modern financial practices introduce complex risks, which in turn force financial institutions to rely increasingly on computational risk analytics (CRA). The purpose of our research is to systematically explore the evolution and intellectual structure of CRA in banking through a detailed bibliometric analysis of the literature sourced from Web of Science from 2000 to 2025. A comprehensive search of the Web of Science (WoS) Core Collection yielded 1083 peer-reviewed publications, which we analyzed using VOSviewer 1.6.20 and Bibliometrix (Biblioshiny 5.0) to uncover bibliometric characteristics such as citation patterns, keyword occurrences, and thematic clustering. Our initial analysis uncovers key research clusters focusing on bankruptcy prediction, AI integration in financial services, and advanced deep learning applications. Furthermore, our findings note a transition of CRA from an emerging to an expanding domain, especially after 2019, with machine learning (ML), artificial intelligence (AI), and deep learning (DL) identified as prominent keywords and a recent shift towards blockchain, explainability, and financial stability. This study addresses the need for an updated mapping of CRA, providing valuable insights for future academic inquiry and practical financial risk management applications. Full article
24 pages, 15627 KiB  
Article
Construction and Evaluation of a Domain-Related Risk Model for Prognosis Prediction in Colorectal Cancer
by Xiangjun Cui, Yongqiang Xing, Guoqing Liu, Hongyu Zhao and Zhenhua Yang
Computation 2025, 13(7), 171; https://doi.org/10.3390/computation13070171 - 17 Jul 2025
Viewed by 315
Abstract
Background: Epigenomic instability accelerates mutations in tumor suppressor genes and oncogenes, contributing to malignant transformation. Histone modifications, particularly methylation and acetylation, significantly influence tumor biology, with chromo-, bromo-, and Tudor domain-containing proteins mediating these changes. This study investigates how genes encoding these domain-containing proteins affect colorectal cancer (CRC) prognosis. Methods: Using CRC data from the GSE39582 and TCGA datasets, we identified domain-related genes via GeneCards and developed a prognostic signature using LASSO-COX regression. Patients were classified into high- and low-risk groups, and comparisons were made across survival, clinical features, immune cell infiltration, immunotherapy responses, and drug sensitivity predictions. Single-cell analysis assessed gene expression in different cell subsets. Results: Four domain-related genes (AKAP1, ORC1, CHAF1A, and UHRF2) were identified as a prognostic signature. Validation confirmed their prognostic value, with significant differences in survival, clinical features, immune patterns, and immunotherapy responses between the high- and low-risk groups. Drug sensitivity analysis revealed top candidates for CRC treatment. Single-cell analysis showed varied expression of these genes across cell subsets. Conclusions: This study presents a novel prognostic signature based on domain-related genes that can predict CRC severity and offer insights into immune dynamics, providing a promising tool for personalized risk assessment in CRC. Full article
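The risk-stratification step described above can be sketched as follows: once a LASSO-Cox fit has produced per-gene coefficients, each patient's risk score is the linear predictor over the signature genes, and a median split defines the high- and low-risk groups. The coefficients and expression data below are invented for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LASSO-Cox coefficients for the four signature genes
# (illustrative values only; the paper's fitted coefficients are not shown here)
coefs = {"AKAP1": 0.42, "ORC1": -0.31, "CHAF1A": 0.27, "UHRF2": 0.18}
genes = list(coefs)

# Simulated log-expression matrix: one row per patient, one column per gene
expr = rng.normal(size=(100, len(genes)))

risk = expr @ np.array([coefs[g] for g in genes])   # linear predictor
high_risk = risk > np.median(risk)                  # median split into groups
print(high_risk.sum())                              # 50 patients in the high-risk group
```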
23 pages, 4276 KiB  
Article
First-Principles Insights into Mo and Chalcogen Dopant Positions in Anatase TiO2
by W. A. Chapa Pamodani Wanniarachchi, Ponniah Vajeeston, Talal Rahman and Dhayalan Velauthapillai
Computation 2025, 13(7), 170; https://doi.org/10.3390/computation13070170 - 14 Jul 2025
Viewed by 227
Abstract
This study employs density functional theory (DFT) to investigate the electronic and optical properties of molybdenum (Mo) and chalcogen (S, Se, Te) co-doped anatase TiO2. Two co-doping configurations were examined: Model 1, where the dopants are adjacent, and Model 2, where the dopants are farther apart. The incorporation of Mo into anatase TiO2 resulted in a significant bandgap reduction, lowering it from 3.22 eV (pure TiO2) to a range of 2.52–0.68 eV, depending on the specific doping model. The introduction of Mo-4d states below the conduction band shifted the Fermi level from the top of the valence band to the bottom of the conduction band, confirming the n-type doping character of Mo in TiO2. Chalcogen doping introduced isolated electronic states from Te-5p, S-3p, and Se-4p located above the valence band maximum, further reducing the bandgap. Among the examined configurations, Mo–S co-doping in Model 1 exhibited the most stable structure with the fewest impurity states, enhancing photocatalytic efficiency by reducing charge recombination. With the exception of Mo–Te co-doping, all co-doped systems demonstrated strong oxidation power under visible light, making Mo–S and Mo–Se co-doped TiO2 promising candidates for oxidation-driven photocatalysis. However, their limited reduction ability suggests they may be less suitable for water-splitting applications. The study also revealed that dopant positioning significantly influences charge transfer and optoelectronic properties. Model 1 favored localized electron density and weaker magnetization, while Model 2 exhibited delocalized charge density and stronger magnetization. These findings underscore the critical role of dopant arrangement in optimizing TiO2-based photocatalysts for solar energy applications. Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
23 pages, 309 KiB  
Review
Mathematical Optimization in Machine Learning for Computational Chemistry
by Ana Zekić
Computation 2025, 13(7), 169; https://doi.org/10.3390/computation13070169 - 11 Jul 2025
Viewed by 410
Abstract
Machine learning (ML) is transforming computational chemistry by accelerating molecular simulations, property prediction, and inverse design. Central to this transformation is mathematical optimization, which underpins nearly every stage of model development, from training neural networks and tuning hyperparameters to navigating chemical space for molecular discovery. This review presents a structured overview of optimization techniques used in ML for computational chemistry, including gradient-based methods (e.g., SGD and Adam), probabilistic approaches (e.g., Monte Carlo sampling and Bayesian optimization), and spectral methods. We classify optimization targets into model parameter optimization, hyperparameter selection, and molecular optimization and analyze their application across supervised, unsupervised, and reinforcement learning frameworks. Additionally, we examine key challenges such as data scarcity, limited generalization, and computational cost, outlining how mathematical strategies like active learning, meta-learning, and hybrid physics-informed models can address these issues. By bridging optimization methodology with domain-specific challenges, this review highlights how tailored optimization strategies enhance the accuracy, efficiency, and scalability of ML models in computational chemistry. Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
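Among the gradient-based methods the review above surveys, Adam is the workhorse; a minimal sketch of the standard update (bias-corrected first and second moment estimates) on a toy quadratic surrogate for a property-prediction loss is shown below. The learning rate, step count, and objective are illustrative choices, not recommendations from the review.

```python
import numpy as np

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=2000):
    """Minimal Adam optimizer: exponentially averaged first/second moments
    with bias correction, as in Kingma & Ba's original formulation."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)                      # first-moment estimate
    v = np.zeros_like(x)                      # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy surrogate loss: quadratic with minimum at (1, -2)
grad = lambda x: 2 * (x - np.array([1.0, -2.0]))
x_min = adam(grad, [5.0, 5.0])
print(np.round(x_min, 2))   # converges close to [1, -2]
```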
17 pages, 1543 KiB  
Article
Simultaneous Multi-Objective and Topology Optimization: Effect of Mesh Refinement and Number of Iterations on Computational Cost
by Daniel Miler, Matija Hoić, Rudolf Tomić, Andrej Jokić and Robert Mašović
Computation 2025, 13(7), 168; https://doi.org/10.3390/computation13070168 - 11 Jul 2025
Viewed by 275
Abstract
In this study, a multi-objective optimization procedure with embedded topology optimization is presented. The procedure simultaneously optimizes the spatial arrangement and topology of bodies in a multi-body system. The multi-objective algorithm determines the locations of supports, joints, active loads, reactions, and load magnitudes, which serve as inputs for the topology optimization of each body. The multi-objective algorithm dynamically adjusts domain size, support locations, and load magnitudes during optimization. Due to repeated topology optimization calls within the genetic algorithm, the computational cost is significant. To address this, two reduction strategies are proposed: (I) using a coarser mesh and (II) reducing the number of iterations during the initial generations. As optimization progresses, Strategy I gradually refines the mesh, while Strategy II increases the maximum allowable iteration count. The effectiveness of both strategies is evaluated against a baseline (Reference) without reductions. By the 25th generation, all approaches achieve similar hypervolume values (Reference: 2.181; I: 2.112; II: 2.133). The computation time is substantially reduced (Reference: 42,226 s; I: 16,814 s; II: 21,674 s), demonstrating that both strategies effectively accelerate optimization without compromising solution quality. Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
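The hypervolume indicator used above to compare the strategies can be computed for a two-objective minimization front by a simple sweep. A sketch under the assumption of a mutually non-dominated point set (the front and reference point below are illustrative, not the paper's data):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area dominated with respect to a reference point) of a
    2-D Pareto front for a minimization problem. Assumes `front` is a
    mutually non-dominated set of (f1, f2) pairs."""
    pts = sorted(front)            # ascending f1 implies descending f2
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f1 <= ref[0] and f2 <= ref[1]:
            # Strip dominated only by points seen so far
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))   # 6.0
```

A larger hypervolume means the front dominates more of the objective space, which is why similar hypervolumes at generation 25 indicate comparable solution quality.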
18 pages, 375 KiB  
Article
Useful Results for the Qualitative Analysis of Generalized Hattaf Mixed Fractional Differential Equations with Applications to Medicine
by Khalid Hattaf
Computation 2025, 13(7), 167; https://doi.org/10.3390/computation13070167 - 10 Jul 2025
Viewed by 647
Abstract
Most solutions of fractional differential equations (FDEs) that model real-world phenomena in various fields of science, industry, and engineering are complex and cannot be solved analytically. This paper mainly aims to present some useful results for studying the qualitative properties of solutions of FDEs involving the new generalized Hattaf mixed (GHM) fractional derivative, which encompasses many types of fractional operators with both singular and non-singular kernels. In addition, this study also aims to unify and generalize existing results under a broader operator. Furthermore, the obtained results are applied to some linear systems arising from medicine. Full article
(This article belongs to the Section Computational Biology)
11 pages, 241 KiB  
Article
Some Secret Sharing Based on Hyperplanes
by Guohui Wang and Yucheng Chen
Computation 2025, 13(7), 166; https://doi.org/10.3390/computation13070166 - 10 Jul 2025
Viewed by 199
Abstract
Secret sharing schemes (SSS) are widely used in secure multi-party computation and distributed computing, and the access structure is key to constructing such schemes. In this paper, we propose a method for constructing access structures based on hyperplane combinatorial structures over finite fields. For the given access structure, a corresponding secret sharing scheme that can identify cheaters is presented. This scheme enables the secret to be correctly recovered as long as the number of cheaters does not exceed the threshold, and cheating behavior can be detected and located. Full article
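As background to the hyperplane construction above, the classical Shamir threshold scheme (shares are points on a random polynomial, equivalently intersections of hyperplanes over a finite field) can be sketched as follows. This is the textbook construction, not the paper's cheater-identifying scheme.

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, k, n, rng=random.Random(42)):
    """Split `secret` into n shares, any k of which reconstruct it
    (Shamir's scheme: points on a random degree k-1 polynomial mod P)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        y = 0
        for c in reversed(coeffs):     # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # 123456789
```

Any three of the five shares recover the secret; two shares reveal nothing, since they are consistent with every possible secret.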
17 pages, 1348 KiB  
Article
A Segmented Linear Regression Study of Seasonal Profiles of COVID-19 Deaths in Italy: September 2021–September 2024
by Marco Roccetti and Eugenio Maria De Rosa
Computation 2025, 13(7), 165; https://doi.org/10.3390/computation13070165 - 9 Jul 2025
Viewed by 245
Abstract
Using a segmented linear regression model, we examined the seasonal profiles of weekly COVID-19 death data in Italy over a three-year period during which the SARS-CoV-2 Omicron and post-Omicron variants were predominant (September 2021–September 2024). Comparing the slopes of the regression segments, we were able to discuss variations in the steepness of the Italian COVID-19 mortality trend, identifying the corresponding growth/decline profile for each season considered. Our findings show that, although weekly COVID-19 mortality followed a declining trend in Italy from the end of 2021 until the end of 2024, mortality rose during all winters and summers of that period. These increases were more pronounced in winters than in summers, with an average progressive increase of 55.75 and 22.90 COVID-19 deaths with each new week in winters and summers, respectively. COVID-19 deaths were, instead, less frequent in the intermediate periods between winters and summers, with an average decrease of 38.01 COVID-19 deaths with each new week. Our segmented regression model fitted the observed COVID-19 deaths well, as confirmed by the average values of the determination coefficients: 0.74, 0.63 and 0.70 for winters, summers and intermediate periods, respectively. In conclusion, against a generally declining COVID-19 mortality trend in Italy in the period of interest, transient rises in mortality occurred in both winters and summers but received little attention because they were always compensated by consistent downward drifts during the intermediate periods between winters and summers. Full article
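The per-season slope comparison described above can be illustrated with a minimal segmented fit: break the weekly series at known season boundaries and fit an independent least-squares line to each segment. The breakpoints and death counts below are synthetic, chosen only to mimic the reported winter/intermediate/summer slopes.

```python
import numpy as np

def seasonal_slopes(weeks, deaths, breakpoints):
    """Fit an independent least-squares line to each segment delimited by
    `breakpoints` (week indices) and return the per-segment slopes."""
    slopes = []
    edges = [weeks[0], *breakpoints, weeks[-1] + 1]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (weeks >= lo) & (weeks < hi)
        slope, _ = np.polyfit(weeks[mask], deaths[mask], 1)
        slopes.append(slope)
    return slopes

# Synthetic profile: winter rise, intermediate decline, summer rise
weeks = np.arange(39)
deaths = np.concatenate([
    400 + 55 * np.arange(13),     # winter: +55 deaths/week
    1115 - 38 * np.arange(13),    # intermediate: -38 deaths/week
    621 + 23 * np.arange(13),     # summer: +23 deaths/week
])
print([round(s, 1) for s in seasonal_slopes(weeks, deaths, breakpoints=[13, 26])])
# [55.0, -38.0, 23.0]
```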
39 pages, 3476 KiB  
Article
Lattice Boltzmann Framework for Multiphase Flows by Eulerian–Eulerian Navier–Stokes Equations
by Matteo Maria Piredda and Pietro Asinari
Computation 2025, 13(7), 164; https://doi.org/10.3390/computation13070164 - 9 Jul 2025
Viewed by 201
Abstract
Although the lattice Boltzmann method (LBM) is relatively straightforward, it demands a well-crafted framework to handle the complex partial differential equations involved in multiphase flow simulations. For the first time to our knowledge, this work proposes a novel LBM framework to solve Eulerian–Eulerian multiphase flow equations without any finite difference correction, including very large density ratios and a realistic relation for the drag coefficient. The proposed methodology and all reported LBM formulas can be applied in any dimension. This opens a promising avenue for simulating multiphase flows in large High Performance Computing (HPC) facilities and on novel parallel hardware. The LBM framework consists of six coupled LBM schemes—running on the same lattice—ensuring an efficient implementation in large codes with minimum effort. The preliminary numerical results agree excellently with the reference numerical solution obtained by a traditional finite difference solver. Full article
(This article belongs to the Section Computational Engineering)
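As background to the framework above (not the paper's six-scheme multiphase solver), a single-population BGK lattice Boltzmann scheme for 1-D diffusion illustrates the collide-and-stream structure that such frameworks build on:

```python
import numpy as np

def lbm_diffusion_1d(rho0, tau=1.0, steps=100):
    """Single-relaxation-time (BGK) D1Q3 lattice Boltzmann scheme for pure
    diffusion on a periodic lattice; diffusivity is (tau - 0.5)/3 in
    lattice units."""
    w = np.array([2/3, 1/6, 1/6])           # weights for rest, +1, -1 velocities
    f = w[:, None] * rho0[None, :]          # initialize at equilibrium
    for _ in range(steps):
        rho = f.sum(axis=0)                 # zeroth moment: density
        feq = w[:, None] * rho[None, :]     # equilibrium distribution
        f += (feq - f) / tau                # BGK collision
        f[1] = np.roll(f[1], 1)             # stream right-movers
        f[2] = np.roll(f[2], -1)            # stream left-movers (periodic)
    return f.sum(axis=0)

rho0 = np.zeros(64)
rho0[32] = 1.0                              # point pulse
rho = lbm_diffusion_1d(rho0)
print(round(rho.sum(), 6))                  # mass conserved: 1.0
```

Because collision conserves the zeroth moment and streaming only relocates populations, total mass is preserved while the pulse spreads diffusively.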
8 pages, 580 KiB  
Article
Numerical Simulation of Cytokinesis Hydrodynamics
by Andriy A. Avramenko, Igor V. Shevchuk, Andrii I. Tyrinov and Iryna V. Dzevulska
Computation 2025, 13(7), 163; https://doi.org/10.3390/computation13070163 - 8 Jul 2025
Viewed by 220
Abstract
A hydrodynamic homogeneous model has been developed for the motion of mutually impenetrable viscoelastic non-Newtonian fluids taking into account surface tension forces. Based on this model, numerical simulations of cytokinesis hydrodynamics were performed. The cytoplasm is considered a non-Newtonian viscoelastic fluid. The model allows for the calculation of the formation and rupture of the intercellular bridge. Results from an analytical analysis shed light on the influence of the viscoelastic fluid’s relaxation time on cytokinesis dynamics. A comparison of numerical simulation results and experimental data showed satisfactory agreement. Full article
(This article belongs to the Section Computational Biology)
20 pages, 2119 KiB  
Article
Robust Trajectory Tracking Fault-Tolerant Control for Quadrotor UAVs Based on Adaptive Sliding Mode and Fault Estimation
by Yukai Wu, Guobi Ling and Yaoke Shi
Computation 2025, 13(7), 162; https://doi.org/10.3390/computation13070162 - 7 Jul 2025
Viewed by 254
Abstract
This paper presents a composite disturbance-tolerant control framework for quadrotor unmanned aerial vehicles (UAVs). By constructing an enhanced dynamic model that incorporates parameter uncertainties, external disturbances, and actuator faults, and considering the inherent underactuated and highly coupled characteristics of the UAV, a novel robust adaptive sliding mode controller (RASMC) is designed. The controller adopts a hierarchical adaptive mechanism and utilizes a dual-loop composite adaptive law to achieve the online estimation of system parameters and fault information. Using the Lyapunov method, the asymptotic stability of the closed-loop system is rigorously proven. Simulation results demonstrate that, under the combined effects of external disturbances and actuator faults, the RASMC effectively suppresses position errors (<0.05 m) and attitude errors (<0.02 rad), significantly outperforming traditional ADRC and LQR control methods. Further analysis shows that the proposed adaptive law enables the precise online estimation of aerodynamic coefficients and disturbance boundaries during actual flights, with estimation errors kept within ±10%. Moreover, compared to ADRC and LQR, RASMC reduces the settling time by more than 50% and the tracking overshoot by over 70% while using a tanh(·) approximation to eliminate chattering. Prototype experiments validate that the method achieves centimeter-level trajectory tracking under real uncertainties, demonstrating the superior performance and robustness of the control framework in complex flight missions. Full article
(This article belongs to the Section Computational Engineering)
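The chattering-suppression idea mentioned above (replacing the discontinuous sign(s) with a smooth tanh(s/φ)) can be illustrated on a toy double integrator under a bounded disturbance. The gains, surface parameter, and disturbance are hypothetical and much simpler than the paper's RASMC design.

```python
import numpy as np

def smc_step(e, e_dot, lam=2.0, k=5.0, phi=0.1):
    """Sliding-mode control law u = -k * tanh(s / phi) on the sliding
    surface s = e_dot + lam * e; tanh replaces sign() to suppress
    chattering inside a boundary layer of width phi."""
    s = e_dot + lam * e
    return -k * np.tanh(s / phi)

# Regulate x -> 0 for a double integrator x'' = u + d with bounded disturbance d
dt, x, v = 0.001, 1.0, 0.0
for i in range(20000):
    d = 0.5 * np.sin(0.01 * i)          # unknown bounded disturbance (|d| <= 0.5 < k)
    u = smc_step(x, v)
    v += (u + d) * dt                   # explicit Euler integration
    x += v * dt
print(abs(x) < 0.02)                    # True: state driven near the origin
```

Because the switching gain k exceeds the disturbance bound, the state reaches the surface s = 0 and then slides toward the origin; the tanh boundary layer trades a small steady-state ripple for a continuous control signal.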
14 pages, 1246 KiB  
Article
POTMEC: A Novel Power Optimization Technique for Mobile Edge Computing Networks
by Tamilarasan Ananth Kumar, Rajendirane Rajmohan, Sunday Adeola Ajagbe, Oluwatobi Akinlade and Matthew Olusegun Adigun
Computation 2025, 13(7), 161; https://doi.org/10.3390/computation13070161 - 7 Jul 2025
Viewed by 322
Abstract
The rapid growth of ultra-dense mobile edge computing (UDEC) in 5G IoT networks has intensified energy inefficiencies and latency bottlenecks, exacerbated by dynamic channel conditions and imperfect CSI in real-world deployments. This paper introduces POTMEC, a power optimization framework that combines a channel-aware adaptive power allocator using real-time SNR measurements, a MATLAB-trained RL model for joint offloading decisions, and a decaying step-size algorithm that guarantees convergence. Computational offloading is a productive technique for mitigating mobile battery-life constraints by processing parts of a mobile application in the cloud, and we investigate how multi-access edge computing can reduce latency and energy usage. The experiments demonstrate that the proposed model reduces transmission energy consumption by 27.5% compared to baseline methods while keeping latency below 15 ms in ultra-dense scenarios. The simulation results confirm 92% accuracy in near-optimal offloading decisions under dynamic channel conditions. This work advances sustainable edge computing by enabling energy-efficient IoT deployments in 5G ultra-dense networks without compromising QoS. Full article
Show Figures

Figure 1
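The decaying step-size idea from the abstract can be sketched generically: projected gradient ascent with step size a_t = a₀/√t converges for a concave objective. The rate-minus-energy utility, the energy price `lam`, and all constants below are hypothetical placeholders, not POTMEC's actual allocator.

```python
import math

def allocate_power(snr, lam=0.5, p_max=4.0, iters=5000):
    """Projected gradient ascent on U(p) = log2(1 + snr*p) - lam*p with a
    decaying step size a_t = 0.5/sqrt(t); the decay guarantees convergence
    for this concave surrogate objective."""
    p = p_max / 2
    for t in range(1, iters + 1):
        grad = snr / ((1 + snr * p) * math.log(2)) - lam
        p += (0.5 / math.sqrt(t)) * grad
        p = min(max(p, 0.0), p_max)   # project back onto [0, p_max]
    return p

p_star = allocate_power(snr=4.0)
analytic = 1 / (0.5 * math.log(2)) - 1 / 4.0   # stationarity of U(p)
print(abs(p_star - analytic) < 1e-2)           # → True
```

Setting dU/dp = 0 gives the closed-form optimum p* = 1/(λ ln 2) − 1/snr, which the iteration recovers to within the assertion tolerance.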

15 pages, 755 KiB  
Article
Successful Management of Public Health Projects Driven by AI in a BANI Environment
by Sergiy Bushuyev, Natalia Bushuyeva, Ivan Nekrasov and Igor Chumachenko
Computation 2025, 13(7), 160; https://doi.org/10.3390/computation13070160 - 4 Jul 2025
Viewed by 373
Abstract
The management of public health projects in a BANI (brittle, anxious, non-linear, incomprehensible) environment, exemplified by the ongoing war in Ukraine, presents unprecedented challenges due to fragile systems, heightened uncertainty, and complex socio-political dynamics. This study proposes an AI-driven framework to enhance the resilience and effectiveness of public health interventions under such conditions. By integrating a coupled SEIR–Infodemic–Panicdemic Model with war-specific factors, we simulate the interplay of infectious disease spread, misinformation dissemination, and panic dynamics over 1500 days in a Ukrainian city (Kharkiv). The model incorporates time-varying parameters to account for population displacement, healthcare disruptions, and periodic war events, reflecting the evolving conflict context. Sensitivity and risk–opportunity analyses reveal that disease transmission, misinformation, and infrastructure damage significantly exacerbate epidemic peaks, while AI-enabled interventions, such as fact-checking, mental health support, and infrastructure recovery, offer substantial mitigation potential. Qualitative assessments identify technical, organisational, ethical, regulatory, and military risks, alongside opportunities for predictive analytics, automation, and equitable healthcare access. Quantitative simulations demonstrate that risks, like increased displacement, can amplify infectious peaks by up to 28.3%, whereas opportunities, like enhanced fact-checking, can reduce misinformation by 18.2%. These findings provide a roadmap for leveraging AI to navigate BANI environments, offering actionable insights for public health practitioners in Ukraine and other crisis settings. The study underscores AI’s transformative role in fostering adaptive, data-driven strategies to achieve sustainable health outcomes amidst volatility and uncertainty. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
Show Figures

Figure 1
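The coupled epidemic-infodemic structure can be caricatured in a few ODEs: a toy SEIR model whose contact rate is inflated by a misinformation fraction m(t), with a fact-checking (debunking) term. All parameter values here are illustrative, not the paper's war-calibrated Kharkiv model.

```python
def seir_infodemic(debunk=0.1, days=400, dt=1.0):
    """Euler integration of a toy SEIR model coupled to a misinformation
    fraction m(t) that inflates the contact rate; 'debunk' is the
    fact-checking rate.  Returns the epidemic peak (max infected fraction)."""
    S, E, I, R, m = 0.99, 0.0, 0.01, 0.0, 0.2
    beta0, sigma, gamma, spread = 0.3, 1 / 5, 1 / 7, 0.15
    peak = 0.0
    for _ in range(int(days / dt)):
        beta = beta0 * (1 + m)            # misinformation boosts transmission
        dS = -beta * S * I
        dE = beta * S * I - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        dm = spread * m * (1 - m) - debunk * m
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt; m += dm * dt
        peak = max(peak, I)
    return peak

# stronger fact-checking lowers the epidemic peak
print(seir_infodemic(debunk=0.0) > seir_infodemic(debunk=0.2))  # → True
```

Even in this miniature, removing the debunking term lets m(t) grow logistically, raising the effective reproduction number and the infection peak, which is the qualitative effect the abstract quantifies.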

36 pages, 15335 KiB  
Article
An Application of Deep Learning Models for the Detection of Cocoa Pods at Different Ripening Stages: An Approach with Faster R-CNN and Mask R-CNN
by Juan Felipe Restrepo-Arias, María José Montoya-Castaño, María Fernanda Moreno-De La Espriella and John W. Branch-Bedoya
Computation 2025, 13(7), 159; https://doi.org/10.3390/computation13070159 - 2 Jul 2025
Viewed by 635
Abstract
The accurate classification of cocoa pod ripeness is critical for optimizing harvest timing, improving post-harvest processing, and ensuring consistent quality in chocolate production. Traditional ripeness assessment methods are often subjective, labor-intensive, or destructive, highlighting the need for automated, non-invasive solutions. This study evaluates the performance of R-CNN-based deep learning models—Faster R-CNN and Mask R-CNN—for the detection and segmentation of cocoa pods across four ripening stages (0–2 months, 2–4 months, 4–6 months, and >6 months) in the context of precision agriculture, using the publicly accessible RipSetCocoaCNCH12 dataset of 4116 labeled images collected under real-world field conditions. Initial experiments using pretrained weights and standard configurations on a custom COCO-format dataset yielded promising baseline results. Faster R-CNN achieved a mean average precision (mAP) of 64.15%, while Mask R-CNN reached 60.81%, with the highest per-class precision in mature pods (C4) but weaker detection in early stages (C1). To improve model robustness, the dataset was subsequently augmented and balanced, followed by targeted hyperparameter optimization for both architectures. The refined models were then benchmarked against state-of-the-art YOLOv8 networks (YOLOv8x and YOLOv8l-seg). Results showed that YOLOv8x achieved the highest mAP of 86.36%, outperforming YOLOv8l-seg (83.85%), Mask R-CNN (73.20%), and Faster R-CNN (67.75%) in overall detection accuracy. However, the R-CNN models offered valuable instance-level segmentation insights, particularly in complex backgrounds. Furthermore, a qualitative evaluation using confidence heatmaps and error analysis revealed that R-CNN architectures occasionally missed small or partially occluded pods.
These findings highlight the complementary strengths of region-based and real-time detectors in precision agriculture and emphasize the need for class-specific enhancements and interpretability tools in real-world deployments. Full article
Show Figures

Figure 1
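The mAP figures quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal generic implementation, assuming the common (x1, y1, x2, y2) corner format rather than anything specific to the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes,
    the matching criterion underlying mAP evaluation."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (1/7)
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), and mAP averages the resulting precision over recall levels and classes.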

17 pages, 2101 KiB  
Article
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
by Mahmoud Chaira, Abdelkader Belhenniche and Roman Chertovskih
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Viewed by 404
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1
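Among the models the authors evaluate, KNN is simple enough to sketch in full. The flow features and labels below are toy values chosen for illustration, not records from CICDDoS2019:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training points (squared Euclidean distance)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, query)), label)
                   for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# toy flows: (packets per second, mean packet size) -> benign 0 / DDoS 1
train = [((10, 500), 0), ((12, 480), 0), ((15, 520), 0),
         ((900, 60), 1), ((1100, 64), 1), ((950, 58), 1)]
print(knn_predict(train, (1000, 60)))  # → 1 (flood-like traffic)
```

In practice features would be scaled first, since unnormalized packet rates dominate the distance; the higher-capacity models in the abstract (Transformers, LightGBM) trade this simplicity for accuracy and speed.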

20 pages, 1369 KiB  
Article
Numerical Modeling of Electromagnetic Modes in a Planar Stratified Medium with a Graphene Interface
by Eugen Smolkin
Computation 2025, 13(7), 157; https://doi.org/10.3390/computation13070157 - 1 Jul 2025
Viewed by 195
Abstract
Graphene interfaces in layered dielectrics can support unique electromagnetic modes, but analyzing these modes requires robust computational techniques. This work presents a numerical method for computing TE-polarized eigenmodes in a planar stratified dielectric slab with an infinitesimally thin graphene sheet at its interface. The governing boundary-value problem is reformulated as coupled initial-value problems and solved via a customized shooting method, enabling accurate calculation of complex propagation constants and field profiles despite the discontinuity at the graphene layer. We demonstrate that the graphene significantly alters the modal spectrum, introducing complex leaky and surface waves with attenuation due to graphene’s conductivity. Numerical results illustrate how the layers’ inhomogeneity and the graphene’s surface conductivity influence mode confinement and loss. These findings confirm the robustness of the proposed computational approach and provide insights relevant to the design and analysis of graphene-based waveguiding devices. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1
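The shooting idea, reducing a boundary-value problem to repeated initial-value integrations plus root-finding on a boundary residual, can be shown on the simplest Sturm-Liouville analogue: y'' + λy = 0 with y(0) = y(1) = 0, whose eigenvalues are λₙ = (nπ)². The paper's actual problem adds graphene jump conditions and complex propagation constants, which this sketch does not reproduce.

```python
import math

def shoot(lam, n_steps=2000):
    """Integrate y'' = -lam*y with y(0) = 0, y'(0) = 1 across [0, 1]
    (semi-implicit Euler) and return the boundary residual y(1)."""
    h = 1.0 / n_steps
    y, dy = 0.0, 1.0
    for _ in range(n_steps):
        dy += -lam * y * h
        y += dy * h
    return y

def eigenvalue(lo, hi, tol=1e-9):
    """Bisect on the shooting residual to locate lam with y(1; lam) = 0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(lo) * shoot(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam1 = eigenvalue(5.0, 15.0)            # bracket the first eigenvalue
print(abs(lam1 - math.pi ** 2) < 1e-2)  # → True: lam_1 = pi^2 ≈ 9.8696
```

The paper's complex propagation constants require root-finding in the complex plane (e.g., Newton or argument-principle methods) rather than real bisection, but the integrate-then-correct structure is the same.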

26 pages, 1656 KiB  
Article
Feedback-Based Validation Learning
by Chafik Boulealam, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz and Hamid Tairi
Computation 2025, 13(7), 156; https://doi.org/10.3390/computation13070156 - 1 Jul 2025
Viewed by 326
Abstract
This paper presents Feedback-Based Validation Learning (FBVL), a novel approach that transforms the role of validation datasets in deep learning. Unlike conventional methods that utilize validation datasets for performance evaluation post-training, FBVL integrates these datasets into the training process. It employs real-time feedback to optimize the model’s weight adjustments, enhancing prediction accuracy and overall model performance. Importantly, FBVL preserves the integrity of the validation process by using prediction outcomes on the validation dataset to guide training adjustments, without directly accessing the dataset. Our empirical study on the Iris dataset (150 samples from three species of Iris flowers, each characterized by four features) demonstrated FBVL’s effectiveness: the method surpassed the accuracy of the previous best result by approximately 7.14% and reduced loss by approximately 49.18% relative to previous methods. When FBVL was applied to the Multimodal EmotionLines Dataset (MELD), it showcased its wide applicability across datasets and domains, achieving a test-set accuracy of 70.08%, approximately 3.12% above the previous best-reported accuracy, and a micro-averaged F1 score of 70.07%, higher than the previous best-reported value of 67.59%. These results demonstrate that FBVL enhances classification accuracy and model generalization, particularly in scenarios involving small or imbalanced datasets, offering practical benefits for designing more efficient and robust neural network architectures. Full article
Show Figures

Figure 1
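One simple way to let validation outcomes steer training, in the spirit (though not the letter) of FBVL, is to adapt the learning rate from validation accuracy after each epoch. The toy 1-D logistic-regression task and the raise/halve feedback rule below are both assumptions of this sketch, not the paper's update rule.

```python
import math, random

def train_with_feedback(epochs=30):
    """Toy 1-D logistic regression in which the learning rate is adapted
    from validation accuracy after every epoch: raised while accuracy
    improves, cut when it stalls.  Returns the final validation accuracy."""
    random.seed(0)
    data = [(x / 10, 1 if x > 0 else 0) for x in range(-50, 51) if x != 0]
    random.shuffle(data)
    train, val = data[:70], data[70:]
    w, b, lr, prev_acc = 0.0, 0.0, 0.5, 0.0
    for _ in range(epochs):
        for x, y in train:                        # plain SGD on log loss
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
        acc = sum((w * x + b > 0) == (y == 1) for x, y in val) / len(val)
        lr = lr * 1.1 if acc > prev_acc else lr * 0.5   # validation feedback
        prev_acc = acc
    return prev_acc

print(train_with_feedback() > 0.9)
```

Only the validation *outcome* (accuracy) influences the optimizer here; the validation samples themselves never enter the gradient, mirroring the integrity property the abstract emphasizes.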

15 pages, 5288 KiB  
Article
A Mesoscale Particle Method for Simulation of Boundary Slip Phenomena in Fluid Systems
by Alexander E. Filippov, Mikhail Popov and Valentin L. Popov
Computation 2025, 13(7), 155; https://doi.org/10.3390/computation13070155 - 1 Jul 2025
Viewed by 297
Abstract
The present work aimed to develop a simple simulation tool to support studies of slip and other non-traditional boundary conditions in solid–fluid interactions. A mesoscale particle model (movable automata) was chosen to enable performant simulation of all relevant aspects of the system, including phase changes, plastic deformation and flow, interface phenomena, turbulence, etc. The physical system under study comprised two atomically flat surfaces composed of particles of different sizes and separated by a model fluid formed by moving particles with repulsing cores of different sizes and long-range attraction. The resulting simulation method was tested under a variety of particle densities and conditions. It was shown that the particles can enter different (solid, liquid, and gaseous) states, depending on the effective temperature (kinetic energy caused by surface motion and random noise generated by spatially distributed Langevin sources). The local order parameter and formation of solid domains was studied for systems with varying density. Heating of the region close to one of the plates could change the density of the liquid in its proximity and resulted in chaotization (turbulence); it also dramatically changed the system configuration, the direction of the average flow, and reduced the effective friction force. Full article
(This article belongs to the Section Computational Engineering)
Show Figures

Figure 1
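The "spatially distributed Langevin sources" mentioned in the abstract can be illustrated on a single particle: an Euler-Maruyama discretization of v' = -γv + √(2γT)·ξ(t) should recover the equipartition result ⟨v²⟩ ≈ T (with k_B = m = 1). The parameters are illustrative, not the paper's movable-automata model.

```python
import math, random

def langevin_temperature(gamma=1.0, T=0.5, dt=0.01, steps=200000):
    """Time-average kinetic temperature of one unit-mass particle under
    Euler-Maruyama Langevin dynamics v' = -gamma*v + sqrt(2*gamma*T)*xi(t);
    equipartition predicts <v^2> -> T (k_B = 1)."""
    random.seed(1)
    v, acc = 0.0, 0.0
    noise = math.sqrt(2.0 * gamma * T * dt)   # per-step noise amplitude
    for _ in range(steps):
        v += -gamma * v * dt + noise * random.gauss(0.0, 1.0)
        acc += v * v
    return acc / steps

print(langevin_temperature())   # should land close to T = 0.5
```

This balance between damping and noise is what lets a particle method set an effective temperature; in the paper's system the surface motion adds a deterministic kinetic contribution on top of the stochastic one.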

26 pages, 567 KiB  
Article
Algorithmic Analysis of Queuing System with Varying Number of Servers, Phase-Type Service Time Distribution, and Changeable Arrival Process Depending on Random Environment
by Alexander Dudin, Olga Dudina and Sergei Dudin
Computation 2025, 13(7), 154; https://doi.org/10.3390/computation13070154 - 29 Jun 2025
Viewed by 205
Abstract
An MAP/PH/N-type queuing system functioning within a finite-state Markovian random environment is studied. The random environment’s state impacts the number of available servers, the underlying processes of customer arrivals and service, and the impatience rate of customers. The novelty of the model lies in the environment’s influence on the state space of the underlying arrival processes and in the phase-type service time distribution, which is more general than the exponential one. The behavior of the system is described by a multidimensional Markov chain that belongs to the class of level-independent quasi-birth-and-death processes or to the class of asymptotically quasi-Toeplitz Markov chains, depending on whether the customers are absolutely patient in all states of the random environment or are impatient in at least one state. Using the tools of the corresponding processes or chains, a stationary analysis of the system is implemented. In particular, it is shown that the system is always ergodic if customers are impatient in at least one state of the random environment. Expressions for the computation of the basic performance measures of the system are presented. Examples of their computation for a system with three states of the random environment are presented as 3-D surfaces. The results can be useful for the analysis of a variety of real-world systems with parameters that may randomly change during operation. In particular, they can be used to optimally match the number of active servers and the bandwidth of the transmission channels to the current arrival rate, and vice versa. Full article
Show Figures

Figure 1

18 pages, 2168 KiB  
Article
A New Approach to Topology Optimization with Genetic Algorithm and Parameterization Level Set Function
by Igor Pehnec, Damir Sedlar, Ivo Marinic-Kragic and Damir Vučina
Computation 2025, 13(7), 153; https://doi.org/10.3390/computation13070153 - 26 Jun 2025
Viewed by 451
Abstract
In this paper, a new approach to topology optimization using the parameterized level set function and genetic algorithm optimization methods is presented. The impact of a number of parameters describing the level set function in the representation of the model was examined. Using the B-spline interpolation function, the number of variables describing the level set function was decreased, enabling the application of evolutionary methods (genetic algorithms) in the topology optimization process. The traditional level set method is performed by using the Hamilton–Jacobi transport equation, which implies the use of gradient optimization methods that are prone to becoming stuck in local minima. Furthermore, the resulting optimal shapes are strongly dependent on the initial solution. The proposed topology optimization procedure, written in MATLAB R2013b, utilizes a genetic algorithm for global optimization, enabling it to locate the global optimum efficiently. To assess the acceleration and convergence capabilities of the proposed topology optimization method, a new genetic algorithm penalty operator was tested. This operator addresses the slow convergence issue typically encountered when the genetic algorithm optimization procedure nears a solution. By penalizing similar individuals within a population, the method aims to enhance convergence speed and overall performance. In complex examples (3D), the method can also function as a generator of good initial solutions for faster topology optimization methods (e.g., level set) that rely on such initial solutions. Both the proposed method and the traditional methods have their own advantages and limitations. The main advantage is that the proposed method is a global search method. This makes it robust against entrapment in local minima and independent of the initial solution. 
It is important to note that this evolutionary approach does not necessarily perform better in terms of convergence speed compared to gradient-based or other local optimization methods. However, once the global optimum has been found using the genetic algorithm, convergence can be accelerated using a faster local method such as gradient-based optimization. The application and usefulness of the method were tested on typical 2D cantilever beams and Michell beams. Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Show Figures

Figure 1
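The similarity-penalty idea can be sketched on a 1-D multimodal test function: a real-coded GA whose fitness handicaps near-duplicate individuals, so the population does not collapse onto one basin. Truncation selection, blend crossover, and every constant below are illustrative; this is not the paper's B-spline level-set parameterization.

```python
import math, random

def ga_minimize(f, bounds, pop_size=60, gens=300, sim_eps=0.05, penalty=10.0):
    """Real-coded GA with truncation selection, blend crossover, Gaussian
    mutation, and a similarity penalty that handicaps near-duplicate
    individuals to keep the population diverse."""
    random.seed(0)
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)

    def fitness(x, others):
        crowd = sum(1 for y in others if abs(x - y) < sim_eps)
        return f(x) + penalty * (crowd - 1)      # clones share the blame

    for _ in range(gens):
        elite = sorted(pop, key=lambda x: fitness(x, pop))[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            c = 0.5 * (a + b) + random.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(max(c, lo), hi))
        pop = elite + children
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand                          # keep the best-ever solution
    return best

# multimodal test: global minimum at x = 0 among many local minima
best = ga_minimize(lambda x: x * x - math.cos(8.0 * x) + 1.0, (-5.0, 5.0))
print(abs(best) < 0.2)
```

The penalty is applied to the selection fitness only, never to the reported objective, so diversity pressure cannot distort which design is ultimately returned; a gradient-based local method could then refine `best`, as the abstract suggests.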

19 pages, 4217 KiB  
Review
Optimization of Rock-Cutting Tools: Improvements in Structural Design and Process Efficiency
by Yuecao Cao, Qiang Zhang, Shucheng Zhang, Ying Tian, Xiangwei Dong, Xiaojun Song and Dongxiang Wang
Computation 2025, 13(7), 152; https://doi.org/10.3390/computation13070152 - 23 Jun 2025
Viewed by 520
Abstract
Rock-breaking cutters are critical components in tunneling, mining, and drilling operations, where efficiency, durability, and energy consumption are paramount. Traditional cutter designs and empirical process optimization methods often fail to address the dynamic interaction between heterogeneous rock masses and tool structures, leading to premature wear, high specific energy, and suboptimal performance. Topology optimization, as an advanced computational design method, offers transformative potential for lightweight, high-strength cutter structures and adaptive cutting process control. This review systematically examines recent advancements in topology-optimized cutter design and its integration with rock-cutting mechanics. Structural innovations in cutter geometry and materials are analyzed, emphasizing solutions for stress distribution, wear and fatigue resistance, and dynamic load adaptation. Numerical methods for modeling rock–tool interactions are reviewed, including discrete element method (DEM) simulations, smoothed particle hydrodynamics (SPH) methods, and machine learning (ML)-enhanced predictive models. Finally, cutting process optimization strategies that leverage topology optimization to balance objectives such as energy efficiency, chip formation control, and tool lifespan are evaluated. Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Show Figures

Figure 1
