Math. Comput. Appl., Volume 30, Issue 3 (June 2025) – 20 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
30 pages, 2368 KiB  
Article
A Hybrid Approach for Reachability Analysis of Complex Software Systems Using Fuzzy Adaptive Particle Swarm Optimization Algorithm and Rule Composition
by Nahid Salimi, Seyfollah Soleimani, Vahid Rafe and Davood Khodadad
Math. Comput. Appl. 2025, 30(3), 65; https://doi.org/10.3390/mca30030065 - 10 Jun 2025
Abstract
Model checking has become a widely used and precise technique for verifying software systems. However, a major challenge in model checking is state space explosion, which occurs due to the exponential memory usage required by the model checker. To address this issue, meta-heuristic and evolutionary algorithms offer a promising solution by searching for a state where a property is either satisfied or violated. Recently, various evolutionary algorithms, such as Genetic Algorithms and Particle Swarm Optimization, have been applied to detect deadlock states. While these approaches have been useful, they primarily focus on deadlock detection. This paper proposes a fuzzy algorithm to analyse reachability properties in systems specified through Graph Transformation Systems with large state spaces. To achieve this, the existing Particle Swarm Optimization algorithm, which is typically used for deadlock detection, has been extended to analyse reachability properties. To further enhance accuracy, a Fuzzy Adaptive Particle Swarm Optimization algorithm is introduced to determine which states and paths should be explored at each step in order to find the corresponding reachable state. Additionally, the proposed hybrid algorithm was applied to models generated through rule composition to assess the impact of rule composition on execution time and the number of explored states. These approaches were implemented within an open-source toolset called GROOVE, which is used for designing and model checking Graph Transformation Systems. Experimental results demonstrate that the proposed hybrid algorithm reduced verification time by up to 49.86% compared to Particle Swarm Optimization and 65.17% compared to Genetic Algorithms in reachability analysis of complex models. Furthermore, it explored 32.7% fewer states on average than the hybrid method based on Particle Swarm Optimization and Gravitational Search Algorithms, and 57.4% fewer states than Genetic Algorithms, indicating improved search efficiency. The application of rule composition further reduced execution time by 35.7% and the number of explored states by 41.2% in large-scale models. These results confirm that the proposed hybrid algorithm significantly enhances reachability analysis in systems modelled via Graph Transformation, improving both computational efficiency and scalability. Full article
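For readers unfamiliar with the fuzzy-adaptive idea, the sketch below shows one common way a fuzzy rule can steer the PSO inertia weight between exploration and exploitation. It is a generic, minimal Python illustration, not the authors' GROOVE-integrated algorithm; the membership functions, coefficients, and the sphere objective are assumptions made here for demonstration.

```python
import numpy as np

def fuzzy_inertia(norm_improvement: float) -> float:
    """Map normalized fitness improvement to an inertia weight.
    Crude triangular-membership stand-in for a fuzzy controller (illustrative only)."""
    low = max(0.0, 1.0 - 2.0 * norm_improvement)   # membership of "improvement is low"
    high = min(1.0, 2.0 * norm_improvement)        # membership of "improvement is high"
    # Low improvement -> keep exploring (w ~ 0.9); high improvement -> exploit (w ~ 0.4).
    return (0.9 * low + 0.4 * high) / (low + high + 1e-12)

def fuzzy_adaptive_pso(fitness, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    x = rng.uniform(*bounds, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    prev_best = pbest_f.min()
    for _ in range(iters):
        improvement = max(0.0, prev_best - pbest_f.min())
        w = fuzzy_inertia(improvement / (abs(prev_best) + 1e-12))
        prev_best = pbest_f.min()
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: minimize the sphere function as a stand-in for a state-space search heuristic.
best, best_f = fuzzy_adaptive_pso(lambda z: float(np.sum(z**2)), dim=5)
```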
36 pages, 5967 KiB  
Article
Color Identification on Heterogeneous Bean Landrace Seeds Using Gaussian Mixture Models in CIE L*a*b* Color Space
by Adriana-Laura López-Lobato, Martha-Lorena Avendaño-Garrido, Héctor-Gabriel Acosta-Mesa, José-Luis Morales-Reyes and Elia-Nora Aquino-Bolaños
Math. Comput. Appl. 2025, 30(3), 64; https://doi.org/10.3390/mca30030064 - 6 Jun 2025
Viewed by 96
Abstract
The classification of bean landraces based on their coloration is of particular interest, as the color of these plants is associated with the nutritional components present in their seeds. In this paper, the authors propose a procedure to identify the colors of bean landraces with heterogeneous seed colors based on the information from their digital images. The proposed methodology employs a three-dimensional histogram representation of the estimated color, expressed in the CIE L*a*b* color space, with an unsupervised learning method called the Gaussian Mixture Model. This approach facilitates the acquisition of representative information for the colors of a bean landrace, represented as points in the CIE L*a*b* color space. Furthermore, the k-NN method can be trained with these point representations to identify colors, yielding satisfactory results on landraces with homogeneous and heterogeneous seeds. Full article
(This article belongs to the Special Issue Feature Papers in Mathematical and Computational Applications 2025)
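As a rough illustration of the pipeline the abstract describes (a GMM fitted to pixel colors in CIE L*a*b*, with component means used as point descriptors for k-NN), here is a minimal scikit-learn/scikit-image sketch. The number of components, the reference colors, and the random placeholder image are assumptions, not the authors' settings.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KNeighborsClassifier

def color_representatives(rgb_image: np.ndarray, n_colors: int = 3) -> np.ndarray:
    """Fit a GMM in CIE L*a*b* and return component means as point color descriptors."""
    lab_pixels = rgb2lab(rgb_image).reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_colors, covariance_type="full", random_state=0)
    gmm.fit(lab_pixels)
    return gmm.means_                     # one L*a*b* point per color component

# Illustrative training data: reference L*a*b* points with known color labels.
reference_points = np.array([[30.0, 45.0, 25.0],   # dark red
                             [85.0, -2.0, 10.0],   # cream
                             [20.0,  5.0,  5.0]])  # near black
reference_labels = ["red", "cream", "black"]

knn = KNeighborsClassifier(n_neighbors=1).fit(reference_points, reference_labels)

# Classify the dominant colors of a (synthetic) seed image.
image = np.random.rand(64, 64, 3)                  # placeholder for a real seed photo
print(knn.predict(color_representatives(image)))
```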
13 pages, 384 KiB  
Article
On the Study of Wealth Distribution with Non-Maxwellian Collision Kernels and Variable Trading Propensity
by Yaxue Liu, Miao Liu and Shaoyong Lai
Math. Comput. Appl. 2025, 30(3), 63; https://doi.org/10.3390/mca30030063 - 5 Jun 2025
Viewed by 120
Abstract
A class of dynamic equations containing a non-Maxwellian collision kernel is used to investigate the distribution of wealth. A trading rule in which the trading propensity γ of agents is a function of wealth w (namely, γ = γ(w)) is considered. Two different trading propensity functions are discussed: one in which γ(w) increases with wealth and one in which γ(w) decreases as wealth increases. In a single transaction, when the trading propensity increases with wealth, the rich invest more in transactions, and the gap between the rich and the poor in society is reduced under suitable conditions. Through numerical simulation, we conclude that an escalation in market risk intensifies the inequality in wealth distribution. Full article
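The kind of binary trading rule discussed can be illustrated with a standard CPT-style Monte Carlo sketch, in which the exchanged fractions are governed by a wealth-dependent propensity γ(w) plus a zero-mean risk term. The functional form of γ, the noise level, and the Gini summary below are illustrative assumptions, not the paper's exact kernel or rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_increasing(w, gmax=0.2):
    """Trading propensity that grows with wealth (saturating at gmax). Assumed form."""
    return gmax * w / (1.0 + w)

def trade(wi, wj, gamma, sigma=0.05):
    """One binary exchange: deterministic transfer plus zero-mean risk terms."""
    eta_i, eta_j = sigma * rng.standard_normal(2)
    new_i = (1 - gamma(wi)) * wi + gamma(wj) * wj + eta_i * wi
    new_j = (1 - gamma(wj)) * wj + gamma(wi) * wi + eta_j * wj
    return max(new_i, 0.0), max(new_j, 0.0)   # wealth stays non-negative

# Simulate many random pairwise trades and summarize inequality with the Gini index.
w = np.ones(5000)
for _ in range(100_000):
    i, j = rng.integers(0, w.size, 2)
    w[i], w[j] = trade(w[i], w[j], gamma_increasing)

sorted_w = np.sort(w)
n = w.size
gini = (2 * np.arange(1, n + 1) - n - 1) @ sorted_w / (n * sorted_w.sum())
print(f"Gini index after trading: {gini:.3f}")
```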
30 pages, 787 KiB  
Article
A New Logistic Distribution and Its Properties, Applications and PORT-VaR Analysis for Extreme Financial Claims
by Piotr Sulewski, Morad Alizadeh, Jondeep Das, Gholamhossein G. Hamedani, Partha Jyoti Hazarika, Javier E. Contreras-Reyes and Haitham M. Yousof
Math. Comput. Appl. 2025, 30(3), 62; https://doi.org/10.3390/mca30030062 - 4 Jun 2025
Viewed by 307
Abstract
This paper introduces a new extension of the exponentiated standard logistic distribution. Some important statistical properties of the novel family of distributions are discussed. A simulation study is also conducted to observe the behavior of the estimated parameters under several estimation methods. The adaptability and flexibility of the new model are checked through two real-life applications. A comprehensive financial risk assessment is conducted using multiple actuarial risk measures: Peaks Over Random Threshold Value-at-Risk, Value-at-Risk, Tail Value-at-Risk, the risk-adjusted return on capital, and the Mean of Order P. These indicators offer a nuanced view of risk by capturing different aspects of tail behavior, which are critical in understanding potential extreme losses. These risk indicators are applied to analyze actuarial financial claims data, providing a robust framework for assessing financial stability and decision-making in the face of uncertainty. Full article
(This article belongs to the Section Social Sciences)
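For orientation, the two most basic measures named above, Value-at-Risk and Tail Value-at-Risk, can be computed empirically as in the sketch below. The simulated lognormal claims are a stand-in for real data, and PORT-VaR, the risk-adjusted return on capital, and the Mean of Order P are not reproduced here.

```python
import numpy as np

def var_tvar(claims: np.ndarray, level: float = 0.95):
    """Empirical Value-at-Risk and Tail Value-at-Risk at the given confidence level."""
    var = np.quantile(claims, level)
    tvar = claims[claims >= var].mean()     # mean loss beyond the VaR threshold
    return var, tvar

rng = np.random.default_rng(42)
claims = rng.lognormal(mean=8.0, sigma=1.2, size=10_000)   # heavy-tailed stand-in data

for q in (0.90, 0.95, 0.99):
    var, tvar = var_tvar(claims, q)
    print(f"level={q:.2f}  VaR={var:,.0f}  TVaR={tvar:,.0f}")
```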
5 pages, 166 KiB  
Editorial
Numerical and Evolutionary Optimization 2024
by Marcela Quiroz-Castellanos, Oliver Cuate, Leonardo Trujillo and Oliver Schütze
Math. Comput. Appl. 2025, 30(3), 61; https://doi.org/10.3390/mca30030061 - 1 Jun 2025
Viewed by 207
Abstract
This Special Issue was inspired by the 11th International Workshop on Numerical and Evolutionary Optimization (NEO 2024), held from 3 to 6 September 2024 in Mexico City, Mexico, and hosted by Cinvestav [...] Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)
24 pages, 4242 KiB  
Article
Numerical Simulation of Drilling Fluid-Wellbore Interactions in Permeable and Fractured Zones
by Diego A. Vargas Silva, Zuly H. Calderón, Darwin C. Mateus and Gustavo E. Ramírez
Math. Comput. Appl. 2025, 30(3), 60; https://doi.org/10.3390/mca30030060 - 30 May 2025
Viewed by 254
Abstract
In well drilling operations, interactions between water-based drilling fluid and the wellbore present significant challenges, often escalating project costs and timelines. In particular, fractures (both induced and natural) and permeable zones at the wellbore can result in substantial mud loss or increased filtration. Addressing these challenges, our research introduces a novel coupled numerical model designed to precisely calculate fluid losses in fractured and permeable zones. For the permeable zone, fundamental variables such as filtration velocity, filtrate concentration variations, permeability reduction, and fluid cake growth are calculated, all based on the law of continuity and convection-dispersion theory. For the fracture zone, the fluid velocity profile is determined using the momentum balance equation and both Newtonian and non-Newtonian rheology. The model was validated against laboratory data and physical models, and adapted for field applications. Our findings emphasize that factors like mud particle size, shear stress, and pressure differential are pivotal. Effectively managing these factors can significantly reduce fluid loss and mitigate formation damage caused by fluid invasion. Furthermore, the understanding gathered from studying mud behavior in both permeable and fractured zones equips drilling personnel with valuable information on the optimal rheological properties for given field conditions. This knowledge is crucial for optimizing mud formulations and strategies, ultimately aiding in the reduction of non-productive time (NPT) associated with wellbore stability issues. Full article
18 pages, 1166 KiB  
Article
Hybrid Deep Learning Models for Predicting Student Academic Performance
by Kuburat Oyeranti Adefemi, Murimo Bethel Mutanga and Vikash Jugoo
Math. Comput. Appl. 2025, 30(3), 59; https://doi.org/10.3390/mca30030059 - 23 May 2025
Viewed by 439
Abstract
Educational data mining (EDM) is instrumental in the early detection of students at risk of academic underperformance, enabling timely and targeted interventions. Given that many undergraduate students face challenges leading to high failure and dropout rates, utilizing EDM to analyze student data becomes crucial. By predicting academic success and identifying at-risk individuals, EDM provides a data-driven approach to enhance student performance. However, accurately predicting student performance is challenging, as it depends on multiple factors, including academic history, behavioral patterns, and health-related metrics. This study aims to bridge this gap by proposing a deep learning model to predict student academic performance with greater accuracy. The approach combines a convolutional neural network (CNN) and a bidirectional gated recurrent unit (BiGRU) network to enhance predictive capabilities. To improve the model’s performance, we address key data preprocessing challenges, including handling missing data, addressing class imbalance, and selecting relevant features. Additionally, we incorporate optimization techniques to fine-tune hyperparameters and determine the best model architecture. Using key performance metrics such as accuracy, precision, recall, and F-score, our experimental results show that the proposed model achieves improved prediction accuracies of 97.48%, 90.90%, and 95.97% across the three datasets. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2024)
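A minimal functional-API sketch of the CNN + BiGRU pairing described above is shown below. The layer sizes, sequence length, and three-class output are placeholders rather than the authors' tuned architecture, and treating each student record as a short feature sequence is an assumption made here for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_bigru(n_timesteps: int = 30, n_features: int = 12, n_classes: int = 3):
    """1D CNN front end for local patterns, BiGRU for sequential context."""
    inputs = keras.Input(shape=(n_timesteps, n_features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Bidirectional(layers.GRU(64))(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_bigru()
model.summary()
```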
23 pages, 1322 KiB  
Article
Comparative Analysis of ALE Method Implementation in Time Integration Schemes for Pile Penetration Modeling
by Ihab Bendida Bourokba, Abdelmadjid Berga, Patrick Staubach and Nazihe Terfaya
Math. Comput. Appl. 2025, 30(3), 58; https://doi.org/10.3390/mca30030058 - 22 May 2025
Viewed by 261
Abstract
This study investigates the full penetration simulation of piles from the ground surface, focusing on frictional contact modeling without mesh distortion. To overcome issues related to mesh distortion and improve solution convergence, the Arbitrary Lagrangian–Eulerian (ALE) adaptive mesh technique was implemented within both explicit and implicit time integration schemes. The numerical model was validated against field experiments conducted at Bothkennar, Scotland, using the Imperial College instrumented displacement pile (ICP) in soft clay, where the soil behavior was effectively represented using the modified Cam-Clay model and the Mohr–Coulomb model. The primary objectives of this study are to evaluate the ALE method performance in handling mesh distortion; analyze the effects of soil–pile interface friction, pile dimensions, and various dilation angles on pile resistance; and compare the effectiveness of explicit and implicit time integration schemes in terms of stability, computational efficiency, and solution accuracy. The ALE method effectively modeled pile penetration in Bothkennar clay, validating the numerical model against field experiments. Comparative analysis revealed the explicit time integration method as more robust and computationally efficient, particularly for complex soil–pile interactions with higher friction coefficients. Full article
(This article belongs to the Topic Numerical Methods for Partial Differential Equations)
18 pages, 1692 KiB  
Article
Multiple-Feature Construction for Image Segmentation Based on Genetic Programming
by David Herrera-Sánchez, José-Antonio Fuentes-Tomás, Héctor-Gabriel Acosta-Mesa, Efrén Mezura-Montes and José-Luis Morales-Reyes
Math. Comput. Appl. 2025, 30(3), 57; https://doi.org/10.3390/mca30030057 - 21 May 2025
Viewed by 160
Abstract
Within the medical field, computer vision plays an important role in different tasks, such as health anomaly detection, diagnosis, treatment, and the monitoring of medical conditions. Image segmentation is one of the most widely used techniques for medical support to identify regions of interest in different organs. However, performing accurate segmentation is difficult due to image variations. To address this, this work proposes an automated multiple-feature construction approach for image segmentation, working with magnetic resonance images, computed tomography, and RGB digital images. Genetic programming is used to automatically create and construct pipelines to extract meaningful features for segmentation tasks. Additionally, a co-evolution strategy is proposed within the evolution process to increase diversity without affecting segmentation performance. The segmentation is addressed as a pixel classification task; accordingly, a wrapper approach is used, and the classification model’s segmentation performance determines the fitness. To validate the effectiveness of the proposed method, four datasets were used to measure its capability to deal with different types of medical images. The results demonstrate that the proposal achieves DICE similarity coefficient values of more than 0.6 on MRI and CT images. Additionally, the proposal is compared with SOTA GP-based methods and with convolutional neural networks used within the medical field. The proposed method outperforms these methods, achieving improvements greater than 20% in DICE, specificity, and sensitivity. Additionally, the qualitative results demonstrate that the proposal accurately identifies the region of interest. Full article
(This article belongs to the Special Issue Feature Papers in Mathematical and Computational Applications 2025)
16 pages, 40466 KiB  
Article
Hybrid Neural Network Approach with Physical Constraints for Predicting the Potential Occupancy Set of Surrounding Vehicles
by Bin Sun, Shichun Yang, Jiayi Lu, Yu Wang, Xinjie Feng and Yaoguang Cao
Math. Comput. Appl. 2025, 30(3), 56; https://doi.org/10.3390/mca30030056 - 15 May 2025
Viewed by 284
Abstract
The reliable and uncertainty-aware prediction of surrounding vehicles remains a key challenge in autonomous driving. However, existing methods often struggle to quantify and incorporate uncertainty effectively. To address these challenges, we propose a hybrid architecture that combines a data-driven neural trajectory predictor with physically grounded constraints to forecast future vehicle occupancy. Specifically, the physical constraints are derived from vehicle kinematic principles and embedded into the network as additional loss terms during training. This integration ensures that predicted trajectories conform to feasible and physically realistic motion boundaries. Furthermore, a mixture density network (MDN) is employed to estimate predictive uncertainty, transforming deterministic trajectory predictions into spatial probability distributions. This enables a probabilistic occupancy representation, offering a richer and more informative description of the potential future positions of surrounding vehicles. The proposed model is trained and evaluated on the Aerial Dataset for China’s Congested Highways and Expressways (AD4CHE), which contains representative driving scenarios in China. Experimental results demonstrate that the model achieves strong fitting performance while maintaining high physical plausibility in its predictions. Full article
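The mixture density idea can be sketched as a small PyTorch head that outputs a diagonal Gaussian mixture over a future (x, y) position and is trained with the mixture negative log-likelihood. The dimensions are arbitrary, and the paper's kinematics-based physical-constraint loss terms are omitted; this is a generic MDN sketch, not the authors' network.

```python
import math
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Predicts a K-component diagonal Gaussian mixture over a future (x, y) position."""
    def __init__(self, in_dim: int = 128, k: int = 5):
        super().__init__()
        self.k = k
        self.pi = nn.Linear(in_dim, k)             # mixture logits
        self.mu = nn.Linear(in_dim, 2 * k)         # component means
        self.log_sigma = nn.Linear(in_dim, 2 * k)  # log standard deviations

    def forward(self, h):
        b = h.shape[0]
        return (torch.log_softmax(self.pi(h), dim=-1),
                self.mu(h).view(b, self.k, 2),
                self.log_sigma(h).view(b, self.k, 2))

def mdn_nll(log_pi, mu, log_sigma, target):
    """Negative log-likelihood of observed positions under the predicted mixture."""
    target = target.unsqueeze(1)                                     # (B, 1, 2)
    comp_logp = (-0.5 * ((target - mu) / log_sigma.exp()) ** 2
                 - log_sigma - 0.5 * math.log(2 * math.pi)).sum(-1)  # (B, K)
    return -torch.logsumexp(log_pi + comp_logp, dim=-1).mean()

# Toy usage with random features and targets, just to exercise the shapes.
head = MDNHead()
loss = mdn_nll(*head(torch.randn(8, 128)), target=torch.randn(8, 2))
loss.backward()
```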
28 pages, 7329 KiB  
Article
Causal Diagnosability Optimization Design for UAVs Based on Maximum Mean Covariance Difference and the Gray Wolf Optimization Algorithm
by Xuping Gu and Xianjun Shi
Math. Comput. Appl. 2025, 30(3), 55; https://doi.org/10.3390/mca30030055 - 14 May 2025
Viewed by 206
Abstract
Given the growing complexity and variability of application scenarios, coupled with increasing operational demands, unmanned aerial vehicles (UAVs) are prone to faults. To enhance diagnosability and reliability in this context, this study proposes a causal diagnosability optimization strategy based on the Maximum Mean and Covariance Discrepancy (MMCD) metric and the Grey Wolf Optimization (GWO) algorithm. First, a qualitative assessment method for causal diagnosability is introduced, leveraging structural analysis to evaluate the detectability and isolability of faults. Next, residuals are generated using Minimal Structurally Overdetermined (MSO) sets, and a quantitative diagnosability assessment framework is developed based on the MMCD metric. This framework measures the complexity of diagnosability through the analysis of residual deviations under fault conditions. Finally, a diagnosability optimization technique utilizing the GWO algorithm is proposed. This approach minimizes diagnostic system design costs while maximizing its performance. Simulation results for a UAV structural model demonstrate that the proposed strategy achieves a 100% fault detection rate and fault isolation rate while reducing design costs by 70.59%. Full article
(This article belongs to the Special Issue Applied Optimization in Automatic Control and Systems Engineering)
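As background on the optimizer used, a plain Grey Wolf Optimization loop looks roughly like the sketch below. The placeholder objective only mimics a cost-versus-coverage trade-off and is not the paper's diagnosability cost model; all parameters are assumptions for illustration.

```python
import numpy as np

def gwo_minimize(cost, dim, n_wolves=20, iters=200, bounds=(0.0, 1.0)):
    """Plain Grey Wolf Optimizer: the alpha/beta/delta wolves guide the pack."""
    rng = np.random.default_rng(3)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(cost, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]
        a = 2.0 * (1.0 - t / iters)                 # exploration factor decays to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
    fitness = np.apply_along_axis(cost, 1, wolves)
    return wolves[fitness.argmin()], fitness.min()

# Placeholder objective: each coordinate is a candidate sensor's "selection level";
# penalize total cost while rewarding at least one strongly selected sensor.
best, best_cost = gwo_minimize(lambda x: x.sum() + 5.0 * (1.0 - x.max()), dim=10)
```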
27 pages, 6617 KiB  
Article
Penalty Strategies in Semiparametric Regression Models
by Ayuba Jack Alhassan, S. Ejaz Ahmed, Dursun Aydin and Ersin Yilmaz
Math. Comput. Appl. 2025, 30(3), 54; https://doi.org/10.3390/mca30030054 - 12 May 2025
Viewed by 236
Abstract
This study presents a comprehensive evaluation of six penalty estimation strategies for partially linear regression models (PLRMs), focusing on their performance in the presence of multicollinearity and their ability to handle both parametric and nonparametric components. The methods under consideration include Ridge regression, Lasso, Adaptive Lasso (aLasso), smoothly clipped absolute deviation (SCAD), ElasticNet, and minimax concave penalty (MCP). In addition to these established methods, we also incorporate Stein-type shrinkage estimation techniques, namely the standard and positive shrinkage estimators, and assess their effectiveness in this context. To estimate the PLRMs, we consider a kernel smoothing technique grounded in penalized least squares. Our investigation involves a theoretical analysis of the estimators’ asymptotic properties and a detailed simulation study designed to compare their performance under a variety of conditions, including different sample sizes, numbers of predictors, and levels of multicollinearity. The simulation results reveal that aLasso and the shrinkage estimators, particularly the positive shrinkage estimator, consistently outperform the other methods in terms of Mean Squared Error (MSE) relative efficiencies (RE), especially when the sample size is small and multicollinearity is high. Furthermore, we present a real data analysis using the Hitters dataset to demonstrate the applicability of these methods in a practical setting. The results of the real data analysis align with the simulation findings, highlighting the superior predictive accuracy of aLasso and the shrinkage estimators in the presence of multicollinearity. The findings of this study offer valuable insights into the strengths and limitations of these penalty and shrinkage strategies, guiding their application in future research and practice involving semiparametric regression. Full article
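Three of the six penalties (Ridge, Lasso, ElasticNet) are available directly in scikit-learn, and their behavior under multicollinearity can be sketched as below. aLasso, SCAD, MCP, the Stein-type shrinkage estimators, and the kernel-smoothed nonparametric component are not reproduced here, and the simulated data are illustrative only.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 80, 20
# Strongly correlated predictors to mimic multicollinearity.
base = rng.standard_normal((n, 1))
X = base + 0.1 * rng.standard_normal((n, p))
beta = np.r_[np.ones(5), np.zeros(p - 5)]           # sparse true coefficients
y = X @ beta + 0.5 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, est in [("Ridge", Ridge(alpha=1.0)),
                  ("Lasso", Lasso(alpha=0.1)),
                  ("ElasticNet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    est.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, est.predict(X_te))
    print(f"{name:10s} test MSE = {mse:.3f}, nonzero coefs = {np.sum(est.coef_ != 0)}")
```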
35 pages, 3870 KiB  
Article
Induction of Convolutional Decision Trees for Semantic Segmentation of Color Images Using Differential Evolution and Time and Memory Reduction Techniques
by Adriana-Laura López-Lobato, Héctor-Gabriel Acosta-Mesa and Efrén Mezura-Montes
Math. Comput. Appl. 2025, 30(3), 53; https://doi.org/10.3390/mca30030053 - 10 May 2025
Viewed by 243
Abstract
Convolutional Decision Trees (CDTs) are machine learning models utilized as interpretable methods for image segmentation. Their graphical structure enables a relatively simple interpretation of how the tree successively divides the image pixels into two classes, distinguishing between objects of interest and the image background. Several techniques have been proposed to induce CDTs. However, they have primarily been focused on analyzing grayscale images due to the computational cost of the Differential Evolution (DE) algorithm, which is employed in these techniques. This paper proposes a generalization of the induction process of a CDT with the DE algorithm using color images, implementing two techniques to reduce the computational time and memory employed in the induction process: the median selection technique and a memory of previously evaluated solutions. The first technique is applied to select a representative sample of pixels from an image for the model’s training process, and the second technique is implemented to reduce the number of evaluations in the fitness function considered in the DE process. The efficacy of these techniques was evaluated using the Weizmann Horse and DRIVE datasets, resulting in favorable outcomes in terms of the segmentation performance of the induced CDTs, and the processing time and memory required for the induction process. Full article
(This article belongs to the Special Issue Feature Papers in Mathematical and Computational Applications 2025)
15 pages, 296 KiB  
Article
New Results on Gevrey Well Posedness for the Schrödinger–Korteweg–De Vries System
by Feriel Boudersa, Abdelaziz Mennouni and Ravi P. Agarwal
Math. Comput. Appl. 2025, 30(3), 52; https://doi.org/10.3390/mca30030052 - 7 May 2025
Viewed by 241
Abstract
In this work, we prove that the initial value problem for the Schrödinger–Korteweg–de Vries (SKdV) system is locally well posed in Gevrey spaces for s > 3/4 and k ≥ 0. This advancement extends recent findings regarding the well posedness of this model within Sobolev spaces and investigates the regularity properties of its solutions. Full article
13 pages, 210 KiB  
Editorial
Recent Advances and New Challenges in Coupled Systems and Networks: Theory, Modelling, and Applications (Special Issue in Honor of Professor Roderick Melnik)
by Sundeep Singh and Weizhong Dai
Math. Comput. Appl. 2025, 30(3), 51; https://doi.org/10.3390/mca30030051 - 7 May 2025
Viewed by 242
Abstract
Coupled systems and networks are ubiquitous across all branches of science and engineering, while mathematical and computational models play a fundamental role in their studies [...] Full article
20 pages, 350 KiB  
Article
A Family of Newton and Quasi-Newton Methods for Power Flow Analysis in Bipolar Direct Current Networks with Constant Power Loads
by Oscar Danilo Montoya, Juan Diego Pulgarín Rivera, Luis Fernando Grisales-Noreña, Walter Gil-González and Fabio Andrade-Rengifo
Math. Comput. Appl. 2025, 30(3), 50; https://doi.org/10.3390/mca30030050 - 6 May 2025
Viewed by 321
Abstract
This paper presents a comprehensive study on the formulation and solution of the power flow problem in bipolar direct current (DC) distribution networks with unbalanced constant power loads. Using the nodal voltage method, a unified nonlinear model is proposed which accurately captures both monopolar and bipolar load configurations as well as the voltage coupling between conductors. The model assumes a solid grounding of the neutral conductor and known system parameters, ensuring reproducibility and physical consistency. Seven iterative algorithms are developed and compared, including three Newton–Raphson-based formulations and four quasi-Newton methods with constant Jacobian approximations. The proposed techniques are validated on two benchmark networks comprising 21 and 85 buses. Numerical results demonstrate that Newton-based methods exhibit quadratic convergence and high accuracy, while quasi-Newton approaches significantly reduce computational time, making them more suitable for large-scale systems. The findings highlight the trade-offs between convergence speed and computational efficiency, and they provide valuable insights for the planning and operation of modern bipolar DC grids. Full article
(This article belongs to the Special Issue Applied Optimization in Automatic Control and Systems Engineering)
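The Newton versus quasi-Newton contrast can be seen on a toy single-bus mismatch equation: Newton re-evaluates the Jacobian every iteration, while the quasi-Newton variant freezes the initial one and trades extra iterations for cheaper steps. The one-line network and parameters below are illustrative assumptions, not the paper's bipolar DC formulation.

```python
def mismatch(v, g=10.0, v_slack=1.0, p_load=2.0):
    """Power balance at a single load bus of a toy DC network:
    power delivered through the line minus the constant power demand."""
    return v * g * (v_slack - v) - p_load

def solve(newton=True, v0=1.0, tol=1e-10, max_iter=50, h=1e-6):
    v = v0
    jac = (mismatch(v + h) - mismatch(v - h)) / (2 * h)   # initial Jacobian (scalar here)
    for k in range(max_iter):
        f = mismatch(v)
        if abs(f) < tol:
            return v, k
        if newton:                                        # refresh the Jacobian each step
            jac = (mismatch(v + h) - mismatch(v - h)) / (2 * h)
        v -= f / jac                                      # quasi-Newton keeps it frozen
    return v, max_iter

v_newton, it_newton = solve(newton=True)
v_quasi, it_quasi = solve(newton=False)
print(f"Newton:       v = {v_newton:.6f} in {it_newton} iterations")
print(f"Quasi-Newton: v = {v_quasi:.6f} in {it_quasi} iterations")
```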
22 pages, 3576 KiB  
Article
A Deep Learning Approach to Unveil Types of Mental Illness by Analyzing Social Media Posts
by Rajashree Dash, Spandan Udgata, Rupesh K. Mohapatra, Vishanka Dash and Ashrita Das
Math. Comput. Appl. 2025, 30(3), 49; https://doi.org/10.3390/mca30030049 - 3 May 2025
Viewed by 447
Abstract
Mental illness has emerged as a widespread global health concern, often unnoticed and unspoken. In this era of digitization, social media has provided a prominent space for people to express their feelings and find solutions faster. Thus, this area of study, with its sheer amount of information on users’ behavioral attributes, combined with the power of machine learning (ML), can be explored to streamline the entire diagnosis process. In this study, an efficient ML model using Long Short-Term Memory (LSTM) is developed to determine the kind of mental illness a user may have from arbitrary text posted by the user on social media. This study is based on natural language processing, where the prerequisites involve data collection from different social media sites and then pre-processing the collected data as required through stemming, lemmatization, stop word removal, etc. After examining the linguistic patterns of different social media posts, a reduced feature space is generated using appropriate feature engineering, which is then fed as input to the LSTM model to identify a type of mental illness. The performance of the proposed model is also compared with that of three other ML models, using both the full feature space and the reduced one. The optimal model is selected by training and testing all of the models on the publicly available Reddit Mental Health Dataset. Overall, utilizing deep learning (DL) for mental health analysis can offer a promising avenue toward improved interventions, outcomes, and a better understanding of mental health issues at both the individual and population levels, aiding in decision-making processes. Full article
(This article belongs to the Section Engineering)
31 pages, 4470 KiB  
Article
RHADaMAnTe: An Astro Code to Estimate the Spectral Energy Distribution of a Curved Wall of a Gap Opened by a Forming Planet in a Protoplanetary Disk
by Francisco Rendón
Math. Comput. Appl. 2025, 30(3), 48; https://doi.org/10.3390/mca30030048 - 30 Apr 2025
Viewed by 313
Abstract
When a star is born, a protoplanetary disk made of gas and dust surrounds the star. The disk can show gaps opened by different astrophysical mechanisms. The gap has a wall emitting radiation, which contributes to the spectral energy distribution (SED) of the whole system (star, disk and planet) in the IR band. As these newborn stars are far away from us, it is difficult to know whether a gap is opened by a forming planet. I have developed RHADaMAnTe, a computational astro code based on the geometry of the wall of a gap obtained from 3D hydrodynamic simulations of protoplanetary disks. With this code, it is possible to build disk models that estimate the synthetic SED of the wall and test whether the gap was opened by a forming planet. An implementation of this code was used to study the stellar system LkCa 15. It was found that a planet of 10 Jupiter masses is capable of opening a gap with a curved wall with a height of 12.9 AU. However, the synthetic SED does not fit the Spitzer IRS SED (χ² ∼ 4.5) from 5 μm to 35 μm. This implies that there is an optically thin region inside the gap. Full article
18 pages, 4321 KiB  
Article
Integrating Equation Coding with Residual Networks for Efficient ODE Approximation in Biological Research
by Ziyue Yi
Math. Comput. Appl. 2025, 30(3), 47; https://doi.org/10.3390/mca30030047 - 27 Apr 2025
Viewed by 431
Abstract
Biological research traditionally relies on experimental methods, which can be inefficient and hinder knowledge transfer due to redundant trial-and-error processes and difficulties in standardizing results. The complexity of biological systems, combined with large volumes of data, necessitates precise mathematical models like ordinary differential equations (ODEs) to describe interactions within these systems. However, the practical use of ODE-based models is limited by the need for curated data, making them less accessible for routine research. To overcome these challenges, we introduce LazyNet, a novel machine learning model that integrates logarithmic and exponential functions within a Residual Network (ResNet) to approximate ODEs. LazyNet reduces the complexity of mathematical operations, enabling faster model training with fewer data and lower computational costs. We evaluate LazyNet across several biological applications, including HIV dynamics, gene regulatory networks, and mass spectrometry analysis of small molecules. Our findings show that LazyNet effectively predicts complex biological phenomena, accelerating model development while reducing the need for extensive experimental data. This approach offers a promising advancement in computational biology, enhancing the efficiency and accuracy of biological research. Full article
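To make the idea of "logarithmic and exponential functions inside a ResNet" concrete, here is a minimal PyTorch residual block in that spirit: in log space, products and powers of state variables become linear combinations, which suits rate-law-style ODE right-hand sides. The block layout, sizes, and clamping are assumptions for illustration, not the published LazyNet architecture.

```python
import torch
import torch.nn as nn

class LogExpResidualBlock(nn.Module):
    """Residual block whose hidden features pass through log/exp nonlinearities,
    so products and powers of inputs become linear combinations in log space."""
    def __init__(self, dim: int, hidden: int = 32, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.to_log = nn.Linear(dim, hidden)
        self.from_exp = nn.Linear(hidden, dim)

    def forward(self, x):
        # log of (shifted) magnitudes -> linear mix -> exp back, plus skip connection.
        z = torch.log(x.abs() + self.eps)
        z = self.to_log(z)
        return x + self.from_exp(torch.exp(torch.clamp(z, max=10.0)))

class TinyODENet(nn.Module):
    """Stack of blocks mapping a state x(t) to an estimated derivative dx/dt."""
    def __init__(self, dim: int = 3, depth: int = 4):
        super().__init__()
        self.blocks = nn.Sequential(*[LogExpResidualBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        return self.head(self.blocks(x))

net = TinyODENet()
state = torch.rand(16, 3)            # batch of toy concentrations
print(net(state).shape)              # torch.Size([16, 3])
```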
16 pages, 2341 KiB  
Article
TAE Predict: An Ensemble Methodology for Multivariate Time Series Forecasting of Climate Variables in the Context of Climate Change
by Juan Frausto Solís, Erick Estrada-Patiño, Mirna Ponce Flores, Juan Paulo Sánchez-Hernández, Guadalupe Castilla-Valdez and Javier González-Barbosa
Math. Comput. Appl. 2025, 30(3), 46; https://doi.org/10.3390/mca30030046 - 25 Apr 2025
Viewed by 400
Abstract
Climate change presents significant challenges due to the increasing frequency and intensity of extreme weather events. Mexico, with its diverse climate and geographic position, is particularly vulnerable, underscoring the need for robust strategies to predict atmospheric variables. This work presents TAE Predict (Time series Analysis and Ensemble-based Prediction with relevant feature selection), a methodology built on relevant feature selection and ensembles of machine learning models. Dimensionality in the multivariate time series is reduced through Principal Component Analysis, ensuring interpretability and efficiency. Additionally, data remediation techniques improve data set quality. The ensemble combines Long Short-Term Memory neural networks, Random Forest regression, and Support Vector Machines, optimizing their contributions with heuristic algorithms such as Particle Swarm Optimization. Experimental results from meteorological time series in key Mexican cities demonstrate that the proposed strategy outperforms the individual models in accuracy and robustness. This methodology provides a replicable framework for climate variable forecasting, delivering analytical tools that support decision-making in critical sectors such as agriculture and water resource management. The findings highlight the potential of integrating modern techniques to address complex, high-dimensional problems. By combining advanced prediction models and feature selection strategies, this study advances the reliability of climate forecasts and contributes to the development of effective adaptation and mitigation measures in response to climate change challenges. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)
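The ensemble-weighting step can be sketched as follows: base regressors are fitted on lagged features, and their validation predictions are blended with non-negative weights chosen by a simple random search standing in for PSO. The linear model below is only a stand-in for the LSTM member, and the toy sine series replaces the meteorological data; all settings are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(7)
t = np.arange(400, dtype=float)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)  # toy "climate" series

lags = 5                                                    # autoregressive features
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]
split = 300
X_tr, X_va, y_tr, y_va = X[:split], X[split:], y[:split], y[split:]

models = [LinearRegression(),                               # stand-in for the LSTM member
          RandomForestRegressor(n_estimators=100, random_state=0),
          SVR(C=10.0)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_va) for m in models])

best_w, best_mse = None, np.inf
for _ in range(2000):                                       # random search standing in for PSO
    w = rng.random(len(models))
    w /= w.sum()                                            # non-negative weights summing to 1
    mse = np.mean((preds @ w - y_va) ** 2)
    if mse < best_mse:
        best_w, best_mse = w, mse

print("ensemble weights:", np.round(best_w, 3), "validation MSE:", round(best_mse, 4))
```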