Mathematics, Volume 13, Issue 9 (May-1 2025) – 109 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 1265 KiB  
Article
Harmony Search Algorithm with Two Problem-Specific Operators for Solving Nonogram Puzzle
by Geonhee Lee and Zong Woo Geem
Mathematics 2025, 13(9), 1470; https://doi.org/10.3390/math13091470 - 29 Apr 2025
Abstract
The nonogram is a logic puzzle where each cell should be colored or left blank according to row and column clues to reveal a hidden picture. This puzzle is known as an NP-complete combinatorial problem characterized by an exponential increase in the number of candidate solutions with increasing puzzle size. So far, some methods have been investigated to address these challenges, including conventional line-solving techniques, integer programming, and neural networks. This study introduces a novel Harmony Search (HS)-based approach for solving nonogram puzzles, incorporating problem-specific operators designed to effectively reduce the solution search space and accelerate convergence. Experimental results obtained from benchmark puzzles demonstrate that the proposed HS model utilizing a clue-constrained random-generation operator significantly reduces the average number of iterations and enhances the solution-finding success rate. Additionally, the HS model integrating an initially confirmed cell-scanning operator exhibited promising performance on specific benchmark problems. The authors think that the nonogram puzzle can be a good benchmark problem for quantum computing-based optimization in the future, and the proposed HS algorithm can also be combined with quantum computing mechanisms. Full article
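To make the clue-constrained random-generation idea concrete, here is a minimal sketch (a hypothetical helper in the spirit of the abstract, not the authors' operator): it samples a single row assignment that already satisfies that row's clue, so candidate rows never violate row constraints, which is how such an operator shrinks the search space.

```python
import random

def random_row(clues, length):
    """Sample one row coloring consistent with nonogram clues (e.g. [3, 1]).

    Blocks are placed left to right with at least one blank between them;
    remaining blanks are distributed at random before the blocks.
    A simple (not necessarily uniform) sampler.
    """
    if not clues:
        return [0] * length
    slack = length - (sum(clues) + len(clues) - 1)  # freely placeable blanks
    if slack < 0:
        raise ValueError("clues do not fit in the row")
    cuts = sorted(random.choices(range(slack + 1), k=len(clues)))
    extras = [cuts[0]] + [cuts[i] - cuts[i - 1] for i in range(1, len(cuts))]
    row = []
    for i, (block, extra) in enumerate(zip(clues, extras)):
        row += [0] * (extra + (1 if i > 0 else 0))  # mandatory gap + extra blanks
        row += [1] * block
    row += [0] * (length - len(row))  # trailing blanks
    return row
```

Every sampled row satisfies its clue by construction, so the search only has to reconcile rows against the column clues.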
12 pages, 257 KiB  
Article
Partial Sums of the Hurwitz and Allied Functions and Their Special Values
by Nianliang Wang, Ruiyang Li and Takako Kuzumaki
Mathematics 2025, 13(9), 1469; https://doi.org/10.3390/math13091469 - 29 Apr 2025
Abstract
We supplement the formulas for partial sums of the Hurwitz zeta-function and its derivatives, producing more integral representations and generic definitions of important constants. These are then used, coupled with the functional equation for the completed zeta-function, to clarify the results of Choudhury, giving rise to closed expressions for the Riemann zeta-function and its derivatives. Full article
(This article belongs to the Special Issue Analytic Methods in Number Theory and Allied Fields)
22 pages, 886 KiB  
Article
A Mathematical Modeling of Time-Fractional Maxwell’s Equations Under the Caputo Definition of a Magnetothermoelastic Half-Space Based on the Green–Lindsay Thermoelastic Theorem
by Eman A. N. Al-Lehaibi
Mathematics 2025, 13(9), 1468; https://doi.org/10.3390/math13091468 - 29 Apr 2025
Abstract
This study establishes and solves a new mathematical model of a homogeneous, generalized, magnetothermoelastic half-space with a thermally loaded bounding surface, subjected to ramp-type heating and supported by a solid foundation. Models of this type are widely used in many sciences, such as geophysics and aerospace engineering. The governing equations are formulated according to the Green–Lindsay theory of generalized thermoelasticity. This work’s uniqueness lies in the examination of Maxwell’s time-fractional equations via Caputo’s definition of the fractional derivative. The Laplace transform method has been used to obtain the solutions promptly. Inversions of the Laplace transform have been computed via Tzou’s iterative approach. The numerical findings are shown in graphs representing the distributions of the temperature increment, stress, strain, displacement, induced electric field, and induced magnetic field. The time-fractional parameter derived from Maxwell’s equations significantly influences all examined functions except the temperature increment. The time-fractional parameter of Maxwell’s equations functions as a resistor to material deformation, particle motion, and the resulting magnetic field strength. Conversely, it acts as a catalyst for the stress and electric field intensity inside the material. The strength of the main magnetic field considerably influences the mechanical and electromagnetic functions but has a lesser effect on the thermal function. Full article
37 pages, 1614 KiB  
Article
An Active-Set Algorithm for Convex Quadratic Programming Subject to Box Constraints with Applications in Non-Linear Optimization and Machine Learning
by Konstantinos Vogklis and Isaac E. Lagaris
Mathematics 2025, 13(9), 1467; https://doi.org/10.3390/math13091467 - 29 Apr 2025
Abstract
A quadratic programming problem with a positive definite Hessian subject to box constraints is solved using an active-set approach. Convex quadratic programming (QP) problems with box constraints appear quite frequently in various real-world applications. The proposed method employs an active-set strategy with Lagrange multipliers, demonstrating rapid convergence. At each iteration, the algorithm modifies both the minimization parameters in the primal space and the Lagrange multipliers in the dual space. The algorithm is particularly well suited for machine learning, scientific computing, and engineering applications that require solving box-constrained QP subproblems efficiently. Key use cases include Support Vector Machines (SVMs), reinforcement learning, portfolio optimization, and trust-region methods in non-linear programming. Extensive numerical experiments demonstrate the method’s superior performance in handling large-scale problems, making it an ideal choice for contemporary optimization tasks. To encourage and facilitate its adoption, the implementation is available in multiple programming languages, ensuring easy integration into existing optimization frameworks. Full article
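For readers unfamiliar with the problem class, a minimal sketch of a box-constrained QP solver follows; it uses projected gradient steps rather than the paper's active-set strategy, so it illustrates only the problem being solved, not the proposed algorithm.

```python
import numpy as np

def box_qp(Q, c, lo, hi, iters=500):
    """Minimize 0.5*x'Qx + c'x subject to lo <= x <= hi.

    Projected-gradient sketch with step 1/lambda_max(Q); Q is assumed
    positive definite. Illustrates the problem class, not the paper's
    active-set method.
    """
    step = 1.0 / np.linalg.eigvalsh(Q)[-1]          # safe step for convex Q
    x = np.clip(np.zeros_like(c, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), lo, hi)  # gradient step + box projection
    return x
```

For example, with Q = 2I, c = (-2, -2), and box [0, 0.5], the unconstrained minimizer (1, 1) is clipped to the upper bound (0.5, 0.5).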
15 pages, 2961 KiB  
Article
A Fast Proximal Alternating Method for Robust Matrix Factorization of Matrix Recovery with Outliers
by Ting Tao, Lianghai Xiao and Jiayuan Zhong
Mathematics 2025, 13(9), 1466; https://doi.org/10.3390/math13091466 - 29 Apr 2025
Abstract
This paper concerns a class of robust factorization models of low-rank matrix recovery, which have been widely applied in various fields such as machine learning and imaging sciences. An ℓ1-loss robust factorized model incorporating the ℓ2,0-norm regularization term is proposed to address the presence of outliers. Since the resulting problem is nonconvex, nonsmooth, and discontinuous, an approximation problem that shares the same set of stationary points as the original formulation is constructed. Subsequently, a proximal alternating minimization method is proposed to solve the approximation problem. The global convergence of its iterate sequence is also established. Numerical experiments on matrix completion with outliers and image restoration tasks demonstrate that the proposed algorithm achieves low relative errors in shorter computational time, especially for large-scale datasets. Full article
25 pages, 3597 KiB  
Article
Toward Next-Generation Biologically Plausible Single Neuron Modeling: An Evolutionary Dendritic Neuron Model
by Chongyuan Wang and Huiyi Liu
Mathematics 2025, 13(9), 1465; https://doi.org/10.3390/math13091465 - 29 Apr 2025
Abstract
Conventional deep learning models rely heavily on the McCulloch–Pitts (MCP) neuron, limiting their interpretability and biological plausibility. The Dendritic Neuron Model (DNM) offers a more realistic alternative by simulating nonlinear and compartmentalized processing within dendritic branches, enabling efficient and transparent learning. While DNMs have shown strong performance in various tasks, their learning capacity at the single-neuron level remains underexplored. This paper proposes a Reinforced Dynamic-grouping Differential Evolution (RDE) algorithm to enhance synaptic plasticity within the DNM framework. RDE introduces a biologically inspired mutation-selection strategy and an adaptive grouping mechanism that promotes effective exploration and convergence. Experimental evaluations on benchmark classification tasks demonstrate that the proposed method outperforms conventional differential evolution and other evolutionary learning approaches in terms of accuracy, generalization, and convergence speed. Specifically, the RDE-DNM achieves up to 92.9% accuracy on the BreastEW dataset and 98.08% on the Moons dataset, with consistently low standard deviations across 30 trials, indicating strong robustness and generalization. Beyond technical performance, the proposed model supports societal applications requiring trustworthy AI, such as interpretable medical diagnostics, financial screening, and low-energy embedded systems. The results highlight the potential of RDE-driven DNMs as a compact and interpretable alternative to traditional deep models, offering new insights into biologically plausible single-neuron computation for next-generation AI. Full article
(This article belongs to the Special Issue Biologically Plausible Deep Learning)
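As background to the evolutionary learning discussed above, the following sketch shows one generation of the classic DE/rand/1/bin differential evolution baseline (standard F and CR notation); this is not the authors' RDE, which adds a reinforced mutation-selection strategy and adaptive grouping on top of such a baseline.

```python
import random

def de_step(pop, f, F=0.5, CR=0.9):
    """One generation of classic DE/rand/1/bin with greedy selection.

    Illustrates the differential evolution baseline that RDE builds on;
    not the authors' reinforced dynamic-grouping variant.
    """
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        # mutation: three distinct donors, none equal to the target
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(dim)  # guarantee at least one mutated gene
        trial = [a[k] + F * (b[k] - c[k])
                 if (random.random() < CR or k == j_rand) else x[k]
                 for k in range(dim)]
        new_pop.append(min(x, trial, key=f))  # keep whichever scores lower
    return new_pop
```

Repeated application drives the population toward a minimizer of f; for example, on the 3-d sphere function the best fitness decreases monotonically under the greedy selection.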
25 pages, 1755 KiB  
Article
Financing Newsvendor with Trade Credit and Bank Credit Portfolio
by Yue Zhang, Bin Zhang and Rongguang Chen
Mathematics 2025, 13(9), 1464; https://doi.org/10.3390/math13091464 - 29 Apr 2025
Abstract
Trade credit is a crucial component of supply chain financing, enabling businesses to manage cash flow and optimize inventory levels. This study delves into the application and implications of multiple trade credit types with different repayment periods and financing costs in a supply chain, encompassing short-term trade credit concatenated with bank financing, long-term trade credit, and a trade credit portfolio. Using a two-stage newsvendor model, we analyze the impact of different trade credit types on supply chain profitability under various scenarios. When facing multiple trade credit types, the retailer prefers financing from the trade credit type that has a lower marginal cost, and the resulting form of financing ensures an equal expected cost of each financing type. The analysis shows that in the case of a monopoly supplier, a long-term credit supplier’s profit is higher than that of a short-term credit supplier. Meanwhile, when the bank interest rate is sufficiently high, the retailer’s profit is highest under the trade credit portfolio mode, whereas when the bank interest rate is sufficiently low, the retailer’s profit is highest under the single short-term credit mode. Comparing the effects of different financing modes, we find that there is no optimal financing mode for the overall profit of the supply chain. Full article
13 pages, 389 KiB  
Article
MLKGC: Large Language Models for Knowledge Graph Completion Under Multimodal Augmentation
by Pengfei Yue, Hailiang Tang, Wanyu Li, Wenxiao Zhang and Bingjie Yan
Mathematics 2025, 13(9), 1463; https://doi.org/10.3390/math13091463 - 29 Apr 2025
Abstract
Knowledge graph completion (KGC) is a critical task for addressing the incompleteness of knowledge graphs and supporting downstream applications. However, it faces significant challenges, including insufficient structured information and uneven entity distribution. Although existing methods have alleviated these issues to some extent, they often rely heavily on extensive training and fine-tuning, which results in low efficiency. To tackle these challenges, we introduce our MLKGC framework, a novel approach that combines large language models (LLMs) with multi-modal modules (MMs). LLMs leverage their advanced language understanding and reasoning abilities to enrich the contextual information for KGC, while MMs integrate multi-modal data, such as audio and images, to bridge knowledge gaps. This integration augments the capability of the model to address long-tail entities, enhances its reasoning processes, and facilitates more robust information integration through the incorporation of diverse inputs. By harnessing the synergy between LLMs and MMs, our approach reduces dependence on traditional text-based training and fine-tuning, providing a more efficient and accurate solution for KGC tasks. It also offers greater flexibility in addressing complex relationships and diverse entities. Extensive experiments on multiple benchmark KGC datasets demonstrate that MLKGC effectively leverages the strengths of both LLMs and multi-modal data, achieving superior performance in link-prediction tasks. Full article
(This article belongs to the Special Issue Advances in Trustworthy and Robust Artificial Intelligence)
18 pages, 283 KiB  
Article
Inferred Loss Rate as a Credit Risk Measure in the Bulgarian Banking System
by Vilislav Boutchaktchiev
Mathematics 2025, 13(9), 1462; https://doi.org/10.3390/math13091462 - 29 Apr 2025
Abstract
The loss rate of a bank’s portfolio traditionally measures what portion of the exposure is lost in the case of a default. To overcome the difficulties involved in its computation due to, e.g., the lack of private data, one can utilize an inferred loss rate (ILR). In the existing literature, it has been demonstrated that this indicator has sufficiently close properties to the actual loss rate to facilitate capital adequacy analysis. The current study provides complete mathematical proof of an earlier-stated conjecture, that ILR can be instrumental in identifying a conservative upper bound of the capital adequacy requirement of a bank credit portfolio, using the law of large numbers and other techniques from measure-theory-based probability. The assumptions required in this proof are less restrictive, reflecting a more realistic view. In the current study, additional empirical evidence of the usefulness of the indicator is provided, using publicly available data from the Bulgarian National Bank. Despite the definite conservativeness of the capital buffer implied from the analysis of ILR, the empirical analysis suggests that it is still within the regulatory limits. Analyzing ILR together with the Inferred Rate of Default, we conclude that the indicator provides signals about a bank portfolio’s credit risk that are relevant, timely, and adequately inexpensive. Full article
(This article belongs to the Section E: Applied Mathematics)
28 pages, 1692 KiB  
Article
A Refined Spectral Galerkin Approach Leveraging Romanovski–Jacobi Polynomials for Differential Equations
by Ramy M. Hafez, Mohamed A. Abdelkawy and Hany M. Ahmed
Mathematics 2025, 13(9), 1461; https://doi.org/10.3390/math13091461 - 29 Apr 2025
Abstract
This study explores the application of Romanovski–Jacobi polynomials (RJPs) in spectral Galerkin methods (SGMs) for solving differential equations (DEs). It uses a suitable class of modified RJPs as basis functions that meet the given homogeneous initial conditions (ICs). We derive spectral Galerkin schemes based on modified RJP expansions to solve three models of high-order ordinary differential equations (ODEs) and partial differential equations (PDEs) of first and second orders with ICs. We provide theoretical assurances of the treatment’s efficacy by validating its convergence and error analyses. The method achieves enhanced accuracy, spectral convergence, and computational efficiency. Numerical experiments demonstrate the robustness of this approach in addressing complex physical and engineering problems, highlighting its potential as a powerful tool to obtain accurate numerical solutions for various types of DEs. The findings are compared to those of preceding studies, verifying that our treatment is more effective and precise than its competitors. Full article
33 pages, 6362 KiB  
Article
SG-ResNet: Spatially Adaptive Gabor Residual Networks with Density-Peak Guidance for Joint Image Steganalysis and Payload Location
by Zhengliang Lai, Chenyi Wu, Xishun Zhu, Jianhua Wu and Guiqin Duan
Mathematics 2025, 13(9), 1460; https://doi.org/10.3390/math13091460 - 29 Apr 2025
Abstract
Image steganalysis detects hidden information in digital images by identifying statistical anomalies, serving as a forensic tool to reveal potential covert communication. Deep learning-based image steganography still has relatively few effective steganalysis methods, particularly ones designed to extract the hidden information. This paper introduces an innovative image steganalysis method based on generative adaptive Gabor residual networks with density-peak guidance (SG-ResNet). SG-ResNet employs a dual-stream collaborative architecture to achieve precise detection and reconstruction of steganographic information. The classification subnet utilizes dual-frequency adaptive Gabor convolutional kernels to decouple high-frequency texture and low-frequency contour components in images. It combines density peak clustering with three quantization- and transformation-enhanced convolutional blocks to generate steganographic covariance matrices, enhancing the weak steganographic signals. The reconstruction subnet synchronously constructs multi-scale features, preserves steganographic spatial fingerprints with a channel-separated residual spatial rich model and pixel reorganization operators, and achieves sub-pixel-level steganographic localization via an iterative optimization mechanism of feedback residual modules. Experimental results obtained with datasets generated by several public steganography algorithms demonstrate that SG-ResNet achieves state-of-the-art detection accuracy of 0.94 and a PSNR of 29 dB between reconstructed and original secret images. Full article
(This article belongs to the Special Issue New Solutions for Multimedia and Artificial Intelligence Security)
20 pages, 1995 KiB  
Article
Design and Optimization of Hybrid CNN-DT Model-Based Network Intrusion Detection Algorithm Using Deep Reinforcement Learning
by Lu Qiu, Zhiping Xu, Lixiong Lin, Jiachun Zheng and Jiahui Su
Mathematics 2025, 13(9), 1459; https://doi.org/10.3390/math13091459 - 29 Apr 2025
Abstract
With the rapid development of network technology, modern systems face increasingly complex security threats, which motivates researchers to continuously explore more advanced intrusion detection systems (IDSs). Even though they work effectively in some situations, existing IDSs based on machine learning or deep learning still struggle with detection accuracy and generalization. To address these challenges, this study proposes an innovative network intrusion detection algorithm, named the CNN-DT algorithm, that combines convolutional neural networks (CNNs) and decision trees (DTs). In the CNN-DT algorithm, the CNN first extracts high-level features from data packets; the decision tree then quickly determines the presence of intrusions based on these features while providing a clear decision path. Moreover, the study proposes a novel adaptive hybrid pooling mechanism that integrates maximal pooling, average pooling, and global maximal pooling. The hyperparameters of the CNN are also optimized by an actor–critic (AC) deep reinforcement learning (DRL) algorithm. The experimental results show that the AC-optimized CNN-DT algorithm achieves an accuracy of 0.9792 on the KDD dataset, which is 5.63% higher than that of the unoptimized CNN-DT model. Full article
21 pages, 11194 KiB  
Article
A Dynamic Regional-Aggregation-Based Heterogeneous Graph Neural Network for Traffic Prediction
by Xiangting Liu, Chengyuan Qian and Xueyang Zhao
Mathematics 2025, 13(9), 1458; https://doi.org/10.3390/math13091458 - 29 Apr 2025
Abstract
Traffic flow prediction, crucial for intelligent transportation systems, has seen advancements with graph neural networks (GNNs), yet existing methods often fail to distinguish between the importance of different intersections. These methods usually model all intersections uniformly, overlooking significant differences in traffic flow characteristics and influence ranges between ordinary and important nodes. To tackle this, this study introduces a dynamic regional-aggregation-based heterogeneous graph neural network (DR-HGNN). This model categorizes intersections into two types, ordinary and important, to apply tailored feature aggregation strategies. Ordinary intersections aggregate features based on local neighborhood information, whereas important intersections utilize deeper neighborhood diffusion and multi-hop dependencies to capture broader traffic influences. The DR-HGNN model also employs a dynamic graph structure to reflect temporal changes in traffic flows, alongside an attention mechanism for adaptive regional feature aggregation, enhancing the identification of critical traffic nodes. Demonstrating its efficacy, the DR-HGNN achieved 19.2% and 15.4% improvements in the RMSE of 50 min predictions on the METR-LA and PEMS-BAY datasets, respectively, offering a more precise prediction method for traffic management. Full article
(This article belongs to the Special Issue Symmetries of Integrable Systems, 2nd Edition)
30 pages, 8298 KiB  
Article
Detecting Clinical Risk Shift Through Log-Logistic Hazard Change-Point Model
by Shobhana Selvaraj Nadar, Vasudha Upadhyay and Savitri Joshi
Mathematics 2025, 13(9), 1457; https://doi.org/10.3390/math13091457 - 29 Apr 2025
Abstract
The change-point problem is about identifying when a pattern or trend shifts in time-ordered data. In survival analysis, change-point detection focuses on identifying alterations in the distribution of time-to-event data, which may be subject to censoring or truncation. In this paper, we introduce a change-point in the hazard rate of the log-logistic distribution. The log-logistic distribution is a flexible probability distribution used in survival analysis, reliability engineering, and economics. It is particularly useful for modeling time-to-event data exhibiting decreasing hazard rates. We estimate the parameters of the proposed change-point model using profile maximum likelihood estimation. We also carry out a simulation study and Bayesian analysis using the Metropolis–Hastings algorithm to study the properties of the proposed estimators. The proposed log-logistic change-point model is applied to survival data from kidney catheter patients and acute myeloid leukemia (AML) cases. A late change-point with a decreasing scale parameter in the catheter data reflects an abrupt increase in risk due to delayed complications, whereas an early change-point with an increasing scale parameter in AML indicates high early mortality followed by slower hazard progression in survivors. We find that the log-logistic change-point model performs better in comparison to the existing change-point models. Full article
(This article belongs to the Special Issue Advances in Statistical Methods with Applications)
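For orientation, a log-logistic hazard with a single change-point in the scale parameter can be written as follows (one natural parameterization, assuming a common shape parameter β; the paper's exact form may differ):

```latex
h(t) =
\begin{cases}
\dfrac{(\beta/\alpha_1)\,(t/\alpha_1)^{\beta-1}}{1+(t/\alpha_1)^{\beta}}, & 0 < t \le \tau, \\[8pt]
\dfrac{(\beta/\alpha_2)\,(t/\alpha_2)^{\beta-1}}{1+(t/\alpha_2)^{\beta}}, & t > \tau,
\end{cases}
```

with τ the change-point and α1, α2 the pre- and post-change scale parameters; profile maximum likelihood then maximizes over (α1, α2, β) at each candidate τ.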
25 pages, 866 KiB  
Article
Hybrid Deep Neural Network with Domain Knowledge for Text Sentiment Analysis
by Jawad Khan, Niaz Ahmad, Youngmoon Lee, Shah Khalid and Dildar Hussain
Mathematics 2025, 13(9), 1456; https://doi.org/10.3390/math13091456 - 29 Apr 2025
Abstract
Sentiment analysis (SA) analyzes online data to uncover insights for better decision-making. Conventional text SA techniques are effective and easy to understand but encounter difficulties when handling sparse data. Deep Neural Networks (DNNs) excel in handling data sparsity but face challenges with high-dimensional, noisy data. Incorporating rich domain semantic and sentiment knowledge is crucial for advancing sentiment analysis. To address these challenges, we propose an innovative hybrid sentiment analysis approach that combines established DNN models like RoBERTa and BiGRU with an attention mechanism, alongside traditional feature engineering and dimensionality reduction through PCA. This leverages the strengths of both techniques: DNNs handle complex semantics and dynamic features, while conventional methods shine in interpretability and efficient sentiment extraction. This complementary combination fosters a robust and accurate sentiment analysis model. Our model is evaluated on four widely used real-world benchmark text sentiment analysis datasets: MR, CR, IMDB, and SemEval 2013. The proposed hybrid model achieved impressive results on these datasets. These findings highlight the effectiveness of this approach for text sentiment analysis tasks, demonstrating its ability to improve sentiment analysis performance compared to previously proposed methods. Full article
(This article belongs to the Special Issue High-Dimensional Data Analysis and Applications)
22 pages, 12508 KiB  
Article
Investigating the Impact of Structural Features on F1 Car Diffuser Performance Using Computational Fluid Dynamics (CFD)
by Eugeni Pérez Nebot, Antim Gupta and Mahak Mahak
Mathematics 2025, 13(9), 1455; https://doi.org/10.3390/math13091455 - 29 Apr 2025
Abstract
This study utilizes Computational Fluid Dynamics (CFD) to optimize the aerodynamic performance of a Formula 1 (F1) car diffuser, investigating the effects of vane placements, end-flap positions, and other structural modifications. Diffusers are critical in managing airflow, enhancing downforce, and reducing drag, directly influencing vehicle stability and speed. Despite ongoing advancements, the interaction between diffuser designs and turbulent flow dynamics requires further exploration. A three-dimensional k-omega-SST RANS-based CFD methodology was developed to evaluate the aerodynamic performance of various diffuser configurations using Star-CCM+. The findings reveal that adding a lateral vane parallel to the divergence section improved high-intensity fluid flow distribution within the main channel, achieving a 13.49% increase in downforce and a 5.58% reduction in drag compared to the baseline simulation. However, incorporating an airfoil cross-section flap parallel to the divergence end significantly enhances the car’s performance, leading to a substantial improvement in downforce with a relatively small increase in drag force. This underscores the critical importance of precise flap positioning for optimizing aerodynamic efficiency. Additionally, the influence of adding flaps underneath the divergence section was analyzed to manipulate boundary layer separation and produce additional downforce. This research emphasizes the critical role of vortex management in preventing flow detachment and improving diffuser efficiency. The findings offer valuable insights for potential FIA F1 2023 undertray regulation changes, with implications for faster lap times and heightened competitiveness in motorsports. Full article
17 pages, 2685 KiB  
Article
DAF-UNet: Deformable U-Net with Atrous-Convolution Feature Pyramid for Retinal Vessel Segmentation
by Yongchao Duan, Rui Yang, Ming Zhao, Mingrui Qi and Sheng-Lung Peng
Mathematics 2025, 13(9), 1454; https://doi.org/10.3390/math13091454 - 29 Apr 2025
Abstract
Segmentation of retinal vessels from fundus images is critical for diagnosing diseases such as diabetes and hypertension. However, the inherent challenges posed by the complex geometries of vessels and the highly imbalanced distribution of thick versus thin vessel pixels demand innovative solutions for robust feature extraction. In this paper, we introduce DAF-UNet, a novel architecture that integrates advanced modules to address these challenges. Specifically, our method leverages a pre-trained deformable convolution (DC) module within the encoder to dynamically adjust the sampling positions of the convolution kernel, thereby adapting the receptive field to capture irregular vessel morphologies more effectively than traditional convolutional approaches. At the network’s bottleneck, an enhanced atrous spatial pyramid pooling (ASPP) module is employed to extract and fuse rich, multi-scale contextual information, significantly improving the model’s capacity to delineate vessels of varying calibers. Furthermore, we propose a hybrid loss function that combines pixel-level and segment-level losses to robustly address the segmentation inconsistencies caused by the disparity in vessel thickness. Experimental evaluations on the DRIVE and CHASE_DB1 datasets demonstrated that DAF-UNet achieved a global accuracy of 0.9572/0.9632 and a Dice score of 0.8298/0.8227, respectively, outperforming state-of-the-art methods. These results underscore the efficacy of our approach in precisely capturing fine vascular details and complex boundaries, marking a significant advancement in retinal vessel segmentation. Full article
(This article belongs to the Special Issue Mathematics Methods in Image Processing and Computer Vision)
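The Dice score reported for DAF-UNet measures overlap between a predicted binary vessel mask and the ground truth. A minimal sketch on flattened binary masks (the smoothing term `eps` and the data are illustrative, not the paper's implementation):

```python
# Dice score: 2|P ∩ T| / (|P| + |T|) for binary masks P (prediction) and
# T (ground truth), with a small eps to avoid division by zero on empty masks.

def dice(pred, truth, eps=1e-7):
    inter = sum(p * t for p, t in zip(pred, truth))  # overlapping positives
    return (2 * inter + eps) / (sum(pred) + sum(truth) + eps)

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 0, 1]
print(round(dice(pred, truth), 3))  # 0.8
```

A hybrid loss in the spirit of the paper could combine `1 - dice` with a pixel-wise cross-entropy term; the relative weighting would be a tunable assumption.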
18 pages, 656 KiB  
Article
A Trusted Measurement Scheme for Connected Vehicles Based on Trust Classification and Trust Reverse
by Zipeng Diao, Mengxiang Wang, Qiang Fu, Bei Gong and Meng Chen
Mathematics 2025, 13(9), 1453; https://doi.org/10.3390/math13091453 - 28 Apr 2025
Abstract
As security issues in vehicular networks continue to intensify, ensuring the trustworthiness of message exchanges among vehicles, infrastructure, and cloud platforms has become increasingly critical. Although trust authentication serves as a fundamental solution to this challenge, existing models fail to effectively address the specific requirements of vehicular networks, particularly in defending against malicious evaluations. This paper proposes a novel multidimensional trust evaluation framework that integrates both static and dynamic metrics. To tackle the issue of malicious ratings in peer assessments, a rating reversal mechanism based on K-means clustering is designed to effectively identify and correct abnormal trust feedback. In addition, the framework incorporates an entropy-based trust weight allocation mechanism and a time decay model to enhance adaptability in dynamic environments. The simulation results demonstrate that, compared with traditional approaches, the proposed scheme improves the average successful information rate by 12% and reduces the false positive rate to 6.1%, confirming its superior performance in securing communications within the vehicular network ecosystem. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
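The rating-reversal idea, clustering peer ratings to spot abnormal feedback, can be illustrated with a tiny one-dimensional K-means (k = 2). All function names, thresholds, and data here are illustrative assumptions, not the paper's scheme:

```python
# Flag potentially malicious trust ratings by clustering them into two groups
# and treating the cluster whose centroid lies far from the median as suspect.

def kmeans_1d(values, k=2, iters=50):
    centroids = [min(values), max(values)]  # initialize at the extremes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, clusters

def flag_abnormal(ratings):
    centroids, clusters = kmeans_1d(ratings)
    median = sorted(ratings)[len(ratings) // 2]
    suspect = max(range(2), key=lambda i: abs(centroids[i] - median))
    return set(clusters[suspect])

ratings = [0.82, 0.79, 0.85, 0.81, 0.10, 0.15, 0.80]
print(sorted(flag_abnormal(ratings)))  # [0.1, 0.15]
```

In a full scheme, flagged ratings would then be reversed or down-weighted before aggregating trust values.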
16 pages, 1084 KiB  
Article
Linearly Coupled Quantum Harmonic Oscillators and Their Quantum Entanglement
by Dmitry Makarov and Ksenia Makarova
Mathematics 2025, 13(9), 1452; https://doi.org/10.3390/math13091452 - 28 Apr 2025
Abstract
In many applications of quantum optics, nonlinear physics, molecular chemistry and biophysics, one can encounter models in which the coupled quantum harmonic oscillator provides an explanation for many physical phenomena and effects. In general, these are harmonic oscillators coupled via coordinates and momenta, which can be represented as $\hat{H}=\sum_{i=1}^{2}\left(\frac{\hat{p}_i^{2}}{2m_i}+\frac{m_i\omega_i^{2}}{2}x_i^{2}\right)+\hat{H}_{\mathrm{int}}$, where the interaction of the two oscillators is $\hat{H}_{\mathrm{int}}=ik_1x_1\hat{p}_2+ik_2x_2\hat{p}_1+k_3x_1x_2-k_4\hat{p}_1\hat{p}_2$. Despite the importance of this system, there is currently no general solution to the Schrödinger equation that takes into account arbitrary initial states of the oscillators. Here, this problem is solved in analytical form, and it is shown that the probability of finding the system in any state, as well as the quantum entanglement, depends only on one coefficient $R\in(0,1)$ for initially factorizable Fock states of the oscillators, and on two parameters $R\in(0,1)$ and $\phi$ for arbitrary initial states. These two parameters $R$ and $\phi$ encapsulate the entire set of variables of the system under consideration. Full article
23 pages, 2147 KiB  
Article
Precision Fixed-Time Formation Control for Multi-AUV Systems with Full State Constraints
by Yuanfeng Chen, Haoyuan Wang and Xiaodong Wang
Mathematics 2025, 13(9), 1451; https://doi.org/10.3390/math13091451 - 28 Apr 2025
Abstract
Trajectory tracking control of autonomous underwater vehicle (AUV) systems faces considerable challenges due to strong inter-axis coupling and complex time-varying external disturbances. This paper proposes a novel fixed-time control scheme incorporating a switching threshold-based event-driven strategy to address critical issues in multi-AUV formation control, including full-state constraints, unmeasurable states, model uncertainties, limited communication resources, and unknown time-varying disturbances. A rapid and stable dimensional augmented state observer (RSDASO) was first developed to achieve fixed-time convergence in estimating aggregated disturbances and unmeasurable states. Subsequently, a logarithmic barrier Lyapunov function was constructed to derive a fixed-time control law that guarantees bounded system errors within a predefined interval while strictly confining all states to specified constraints. The introduction of a switching threshold event-triggering mechanism (ETM) significantly reduced communication resource consumption. The simulation results demonstrate the effectiveness of the proposed method in improving control accuracy while substantially lowering communication overhead. Full article
25 pages, 329 KiB  
Article
Hyers–Ulam Stability Results of Solutions for a Multi-Point φ-Riemann-Liouville Fractional Boundary Value Problem
by Hicham Ait Mohammed, Safa M. Mirgani, Brahim Tellab, Abdelkader Amara, Mohammed El-Hadi Mezabia, Khaled Zennir and Keltoum Bouhali
Mathematics 2025, 13(9), 1450; https://doi.org/10.3390/math13091450 - 28 Apr 2025
Abstract
In this study, we investigate the existence, uniqueness, and Hyers–Ulam stability of a multi-term boundary value problem involving generalized φ-Riemann–Liouville operators. The uniqueness of the solution is demonstrated using Banach's fixed-point theorem, while the existence is established through the application of Krasnoselskii's classical fixed-point theorem. We then delve into the Hyers–Ulam stability of the solutions, an aspect that has garnered significant attention from various researchers. By adapting certain sufficient conditions, we achieve stability results of the Hyers–Ulam (HU) type. Finally, we illustrate the theoretical findings with examples to enhance understanding. Full article
14 pages, 593 KiB  
Article
Optimal Zero-Defect Solution for Multiple Inspection Items in Incoming Quality Control
by Wenqing Zhou and Yufeng Chen
Mathematics 2025, 13(9), 1449; https://doi.org/10.3390/math13091449 - 28 Apr 2025
Abstract
This paper addresses the issues related to inaccurate inspections and high costs in incoming quality control. Incoming quality control refers to the initial inspection process that verifies whether externally provided products, materials, or services comply with specified quality requirements. Traditional methods inspect each item in sequence for a given part and terminate the inspection upon detecting a non-conforming item before proceeding to the next part. To reduce inspection times, we propose a novel approach termed ‘selection of minimal inspection items’, which formulates the selection of inspection items for a batch of parts as decision variables. This approach ensures that all non-conforming parts are detected while minimizing the total number of inspection items. We identify all the inspection items in the initial batch that cover all the non-conforming parts, then develop a set-covering approach to select the minimum inspection items that cover all non-conforming parts. Subsequently, the next batch of parts is inspected using the selected inspection items to identify as many non-conforming parts as possible. Compared to traditional inspection techniques, this approach demonstrates greater cost-effectiveness. Furthermore, we conduct experiments under scenarios with varying numbers of parts and inspection items across different batches to achieve zero-defect inspection, which ensures all non-conforming parts are identified and eliminated through systematic quality control procedures. Algorithms and programs are developed to implement the reported approach. The experimental results show that the proposed approach significantly reduces inspection times while maintaining high quality. Full article
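The "selection of minimal inspection items" step is a set-covering problem: pick the fewest items whose detections jointly cover every known non-conforming part. A greedy approximation (which need not find the true minimum, unlike an exact set-covering solve) can be sketched as follows, with hypothetical item and part names:

```python
# Greedy set cover: "covers" maps each inspection item to the set of
# non-conforming parts it detects; repeatedly pick the item that detects
# the most still-uncovered defective parts.

def select_min_items(covers, defective_parts):
    uncovered = set(defective_parts)
    chosen = []
    while uncovered:
        best = max(covers, key=lambda item: len(covers[item] & uncovered))
        if not covers[best] & uncovered:
            raise ValueError("some defective parts are undetectable")
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

covers = {
    "dimension": {"p1", "p3"},
    "surface":   {"p2"},
    "torque":    {"p1", "p2", "p3"},
}
print(select_min_items(covers, {"p1", "p2", "p3"}))  # ['torque']
```

The chosen items would then be the only ones inspected on the next batch, as the abstract describes.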
17 pages, 3569 KiB  
Article
Fairness in Healthcare Services for Italian Older People: A Convolution-Based Evaluation to Support Policy Decision Makers
by Davide Donato Russo, Frida Milella and Giuseppe Di Felice
Mathematics 2025, 13(9), 1448; https://doi.org/10.3390/math13091448 - 28 Apr 2025
Abstract
In Italy, the current demographic transition makes it a strategic goal to realign the distribution of health services based on the population aged over 65. Achieving a fine-grained assessment of health resource statistics and evaluating the fairness of health services across regions remains a central challenge in current research on health service equity. In this study, the authors propose a methodological approach to foster a novel analysis of fairness in the allocation of primary health care services in Italy with a specific focus on the population aged 65 or over, which facilitates the processing of extensive administrative and demographic data to ensure a clear and precise visualization for informed decision making. The proposed methodology integrates convolution matrices weighted by aged population density within a fine-grained geographic grid representation. This approach is combined with an image convolution technique for filtering, enabling an effective estimation of health resource impact and a clear visualization of their spatial distribution across geographical areas. The integration of several data sources to evaluate the equity in accessibility distribution through the Gini index is also exploited to quantify the disparity between healthcare service provision and the aged population at the regional district level. Our findings showed a substantial unfairness in service distribution, with a concentration of healthcare provision in prominent regions such as Campania, Lazio, and Lombardia, indicating that healthcare accessibility is predominantly disproportionate in Italy, particularly for the population aged over 65. Full article
(This article belongs to the Special Issue Improved Mathematical Methods in Decision Making Models)
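The Gini index used to quantify disparity can be computed from the mean absolute difference of per-district provision values: 0 means perfectly equal provision, values near 1 mean provision concentrated in a few districts. A self-contained sketch with illustrative numbers:

```python
# Gini index in its mean-absolute-difference form:
# G = (sum over all pairs |x_i - x_j|) / (2 * n^2 * mean).

def gini(values):
    n = len(values)
    mean = sum(values) / n
    if mean == 0:
        return 0.0
    diff_sum = sum(abs(x - y) for x in values for y in values)
    return diff_sum / (2 * n * n * mean)

equal  = [10, 10, 10, 10]   # identical provision in every district
skewed = [40, 0, 0, 0]      # everything concentrated in one district
print(gini(equal))   # 0.0
print(gini(skewed))  # 0.75
```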
31 pages, 363 KiB  
Article
Dynamic Stepsize Techniques in DR-Submodular Maximization
by Yanfei Li, Min Li, Qian Liu and Yang Zhou
Mathematics 2025, 13(9), 1447; https://doi.org/10.3390/math13091447 - 28 Apr 2025
Abstract
The Diminishing-Return (DR)-submodular function maximization problem has garnered significant attention across various domains in recent years. Classic methods often employ continuous greedy or Frank–Wolfe approaches to tackle this problem; however, high iteration and subproblem solver complexity are typically required to control the approximation ratio effectively. In this paper, we introduce a strategy that employs a binary search to find the dynamic stepsize, integrating it into traditional algorithm frameworks to address problems with different constraint types. We demonstrate that algorithms using this dynamic stepsize strategy can achieve comparable approximation ratios to those using a fixed stepsize strategy. In the monotone case, the iteration complexity is $O\left(\|F(0)\|_1\,\epsilon^{-1}\right)$, while in the non-monotone scenario it is $O\left(n+\|F(0)\|_1\,\epsilon^{-1}\right)$, where $F$ denotes the objective function. We then apply this strategy to solving stochastic DR-submodular function maximization problems, obtaining corresponding iteration complexity results in a high-probability form. Furthermore, theoretical examples as well as numerical experiments validate that this stepsize selection strategy outperforms the fixed stepsize strategy. Full article
(This article belongs to the Special Issue Optimization Theory, Method and Application, 2nd Edition)
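The dynamic-stepsize idea, binary-searching for the largest admissible step, can be illustrated on a toy one-dimensional objective. The acceptance test below (the objective must not decrease) is a stand-in, not the paper's criterion:

```python
# Binary-search the largest step gamma in [lo, hi] passing a monotone
# acceptance test. Invariant: accept(lo) holds; steps beyond the answer fail.

def binary_search_step(accept, lo=0.0, hi=1.0, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if accept(mid):
            lo = mid   # mid is acceptable: search larger steps
        else:
            hi = mid   # mid overshoots: search smaller steps
    return lo

# Toy objective F(x) = x - x^2, current point x = 0.2, direction d = 1.
# F(0.2 + g) >= F(0.2) holds exactly for g <= 0.6, so the search returns ~0.6.
F = lambda v: v - v * v
step = binary_search_step(lambda g: F(0.2 + g) >= F(0.2))
print(round(step, 3))  # 0.6
```

Embedding such a search inside a Frank–Wolfe-style loop replaces the fixed 1/K stepsize with one adapted to each iterate, which is the spirit of the paper's strategy.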
16 pages, 618 KiB  
Article
Non-Iterative Estimation of Multiscale Geographically and Temporally Weighted Regression Model
by Ya-Di Dai and Hui-Guo Zhang
Mathematics 2025, 13(9), 1446; https://doi.org/10.3390/math13091446 - 28 Apr 2025
Abstract
The Multiscale Geographically and Temporally Weighted Regression (MGTWR) model overcomes the limitation of estimating spatiotemporal variation characteristics of regression coefficients for different variables under a single scale, making it a powerful tool for exploring the spatiotemporal scale characteristics of regression relationships. Currently, the most widely used estimation method for multiscale geographically and temporally weighted models is the backfitting-based iterative approach. However, the iterative process of this method leads to a substantial computational burden and the accumulation of errors during iteration. This paper proposes a non-iterative estimation method for the MGTWR model, combining local linear fitting and two-step weighted least squares estimation techniques. Initially, a reduced bandwidth is used to fit a local linear GTWR model to obtain the initial estimates. Then, for each covariate, the optimal bandwidth and regression coefficients are estimated by substituting the initial estimates into a localized least squares problem. Simulation experiments are conducted to evaluate the performance of the proposed non-iterative method compared to traditional methods and the backfitting-based approach in terms of coefficient estimation accuracy and computational efficiency. The results demonstrate that the non-iterative estimation method for MGTWR significantly enhances computational efficiency while effectively capturing the scale effects of spatiotemporal variation in the regression coefficient functions for each predictor. Full article
(This article belongs to the Section D1: Probability and Statistics)
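Each per-covariate estimate in a two-step procedure of this kind reduces to a weighted least-squares solve with kernel weights. A minimal one-covariate, no-intercept sketch (the weights and data are made up, and the full method operates on spatiotemporal kernels, not this toy case):

```python
# Weighted least squares for a single slope without intercept:
# beta = sum(w * x * y) / sum(w * x^2), where w are kernel weights
# that down-weight observations far from the fitting location.

def wls_slope(x, y, w):
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den

x = [1.0, 2.0, 3.0]
y = [2.1, 3.9, 6.2]
w = [1.0, 0.5, 0.25]   # nearer observations weigh more
print(round(wls_slope(x, y, w), 3))  # 2.029
```

In the multiscale setting, a separate bandwidth controls how fast these weights decay for each covariate, which is what lets each coefficient surface vary at its own scale.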
32 pages, 15292 KiB  
Article
Compression Ratio as Picture-Wise Just Noticeable Difference Predictor
by Nenad Stojanović, Boban Bondžulić, Vladimir Lukin, Dimitrije Bujaković, Sergii Kryvenko and Oleg Ieremeiev
Mathematics 2025, 13(9), 1445; https://doi.org/10.3390/math13091445 - 28 Apr 2025
Abstract
This paper presents results of applying the compression ratio (CR) in the prediction of the boundary between visually lossless and visually lossy compression, which is of particular importance in perceptual image compression. The prediction is carried out through the objective quality (peak signal-to-noise ratio, PSNR) and image representation in bits per pixel (bpp). In this analysis, the results of subjective tests from four publicly available databases are used as ground truth for comparison with the results obtained using the compression ratio as a predictor. Through a wide analysis of color and grayscale infrared JPEG and Better Portable Graphics (BPG) compressed images, values are proposed for the parameters that control these two types of compression and for which CR is calculated. It is shown that PSNR and bpp predictions can be significantly improved by using CR calculated with these proposed values, regardless of the type of compression and whether color or infrared images are used. In this paper, CR is used for the first time in predicting the boundary between visually lossless and visually lossy compression for images from the infrared part of the electromagnetic spectrum, as well as in the prediction of BPG compressed content. This paper indicates the great potential of CR, so that in future research it can be used in joint prediction based on several features or through the CR curve obtained for different values of the parameters controlling the compression. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
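PSNR, the objective quality measure through which the boundary is predicted, compares an original and a compressed image via their mean squared error. A minimal sketch on flattened 8-bit pixel lists (the pixel values are illustrative):

```python
# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit images.
# Identical images have MSE = 0 and, by convention, infinite PSNR.

import math

def psnr(orig, comp, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(orig, comp)) / len(orig)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

orig = [52, 55, 61, 59]
comp = [52, 54, 61, 60]
print(round(psnr(orig, comp), 2))  # ~51 dB: far above typical lossy levels
```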
19 pages, 3165 KiB  
Article
Improving Scheduling Efficiency: A Mathematical Approach to Multi-Operation Optimization in MSMEs
by Reyner Pérez-Campdesuñer, Alexander Sánchez-Rodríguez, Margarita De Miguel-Guzmán, Gelmar García-Vidal and Rodobaldo Martínez-Vivar
Mathematics 2025, 13(9), 1444; https://doi.org/10.3390/math13091444 - 28 Apr 2025
Abstract
Optimizing the use of resources is a key aspect of organizational management. Various methods have been developed and applied to optimize different variables, including sequencing methods that aim to minimize work time. This paper presents an integrated approach for optimizing the sequencing of operations, considering indicators such as usage time, completion time, waiting time, delivery delay, and flow time. A multi-criteria optimization method with weighted aggregation was used, employing either an exhaustive search or a heuristic algorithm with nested loops, in which multiple possible combinations of operational sequences were evaluated, considering several key indicators and their respective weights. The application of the methodology in a press validated its effectiveness, providing managers with key information to prioritize the indicators according to their needs, whether optimizing resource usage or minimizing waiting times and delays. The application resulted in a 95.3% improvement in the level of utilization; a 79.3% reduction in the average completion time; a 90.5% reduction in machine waiting time; and a 90.9% decrease in product delivery delay. The results show that prioritizing the objective function leads to a balanced optimization of all indicators, improving operational efficiency and reducing flow time. This study contributes to the body of knowledge on production scheduling by offering a novel multi-criteria optimization approach in manufacturing settings. The validated methodology can be adapted to a variety of industries and offers flexibility to align with the specific interests of each organization. Full article
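An exhaustive multi-criteria search of the kind described can be sketched for a single machine: score every job permutation by a weighted sum of criteria and keep the best. The jobs, weights, and the two criteria below (flow time and tardiness) are illustrative; the paper aggregates five indicators:

```python
# Evaluate every permutation of jobs on one machine; the score is a weighted
# sum of total flow time and total tardiness. Small instances only: the
# search space grows factorially with the number of jobs.

from itertools import permutations

jobs = {            # job -> (processing time, due date), illustrative data
    "A": (4, 5),
    "B": (2, 3),
    "C": (6, 14),
}

def score(seq, w_flow=0.5, w_tardy=0.5):
    t, flow, tardy = 0, 0, 0
    for j in seq:
        p, due = jobs[j]
        t += p                      # completion time of job j
        flow += t
        tardy += max(0, t - due)
    return w_flow * flow + w_tardy * tardy

best = min(permutations(jobs), key=score)
print(best)  # ('B', 'A', 'C')
```

Adjusting the weights lets a manager prioritize utilization, delays, or flow time, which is the flexibility the abstract emphasizes.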
12 pages, 269 KiB  
Article
A Weak Solution for a Nonlinear Fourth-Order Elliptic System with Variable Exponent Operators and Hardy Potential
by Khaled Kefi and Mohamad M. Al-Shomrani
Mathematics 2025, 13(9), 1443; https://doi.org/10.3390/math13091443 - 28 Apr 2025
Abstract
In this paper, we investigate the existence of at least one weak solution for a nonlinear fourth-order elliptic system involving variable exponent biharmonic and Laplacian operators. The problem is set in a bounded domain $D\subset\mathbb{R}^N$ ($N\geq 3$) with homogeneous Dirichlet boundary conditions. A key feature of the system is the presence of a Hardy-type singular term with a variable exponent, where $\delta(x)$ represents the distance from $x$ to the boundary $\partial D$. By employing a critical point theorem in the framework of variable exponent Sobolev spaces, we establish the existence of a weak solution whose norm vanishes at zero. Full article
17 pages, 421 KiB  
Article
CNN-Based End-to-End CPU-AP-UE Power Allocation for Spectral Efficiency Enhancement in Cell-Free Massive MIMO Networks
by Yoon-Ju Choi, Ji-Hee Yu, Seung-Hwan Seo, Seong-Gyun Choi, Hye-Yoon Jeong, Ja-Eun Kim, Myung-Sun Baek, Young-Hwan You and Hyoung-Kyu Song
Mathematics 2025, 13(9), 1442; https://doi.org/10.3390/math13091442 - 28 Apr 2025
Abstract
Cell-free massive multiple-input multiple-output (MIMO) networks eliminate cell boundaries and enhance uniform quality of service by enabling cooperative transmission among access points (APs). In conventional cellular networks, user equipment located at the cell edge experiences severe interference and unbalanced resource allocation. However, in cell-free massive MIMO networks, multiple access points cooperatively serve user equipment (UEs), effectively mitigating these issues. Beamforming and cooperative transmission among APs are essential in massive MIMO environments, making efficient power allocation a critical factor in determining overall network performance. In particular, considering power allocation from the central processing unit (CPU) to the APs enables optimal power utilization across the entire network. Traditional power allocation methods such as equal power allocation and max–min power allocation fail to fully exploit the cooperative characteristics of APs, leading to suboptimal network performance. To address this limitation, in this study we propose a convolutional neural network (CNN)-based power allocation model that optimizes both CPU-to-AP power allocation and AP-to-UE power distribution. The proposed model learns the optimal power allocation strategy by utilizing the channel state information, AP-UE distance, interference levels, and signal-to-interference-plus-noise ratio as input features. Simulation results demonstrate that the proposed CNN-based power allocation method significantly improves spectral efficiency compared to conventional power allocation techniques while also enhancing energy efficiency. This confirms that deep learning-based power allocation can effectively enhance network performance in cell-free massive MIMO environments. Full article
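The spectral efficiency that the proposed allocator maximizes is, per UE, the Shannon rate log2(1 + SINR). A minimal sketch with illustrative link values (the real model aggregates signals from many cooperating APs; the numbers here are made up):

```python
# Spectral efficiency of one user link in bits/s/Hz:
# SINR = received signal power / (interference + noise), SE = log2(1 + SINR).

import math

def spectral_efficiency(signal, interference, noise):
    sinr = signal / (interference + noise)
    return math.log2(1 + sinr)

# Illustrative powers in linear (not dB) units.
print(round(spectral_efficiency(signal=1.0, interference=0.2, noise=0.05), 3))  # 2.322
```

A learned power allocation changes `signal` and `interference` jointly across all UEs, which is why per-link greedy choices (as in equal or max-min allocation) can be suboptimal for the network-wide sum.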
22 pages, 1402 KiB  
Article
Dual-Population Cooperative Correlation Evolutionary Algorithm for Constrained Multi-Objective Optimization
by Junming Chen, Yanxiu Wang, Zichun Shao, Hui Zeng and Siyuan Zhao
Mathematics 2025, 13(9), 1441; https://doi.org/10.3390/math13091441 - 28 Apr 2025
Abstract
When addressing constrained multi-objective optimization problems (CMOPs), the key challenge lies in achieving a balance between the objective functions and the constraint conditions. However, existing evolutionary algorithms exhibit certain limitations when tackling CMOPs with complex feasible regions. To address this issue, this paper proposes a constrained multi-objective evolutionary algorithm based on dual-population cooperative correlation (CMOEA-DCC). Under the CMOEA-DCC framework, the system maintains two independently evolving populations: the driving population and the conventional population. These two populations share information through a collaborative interaction mechanism, where the driving population focuses on objective optimization, while the conventional population balances both objectives and constraints. To further enhance the performance of the algorithm, a shift-based density estimation (SDE) method is introduced to maintain the diversity of solutions in the driving population, while a multi-criteria evaluation metric is adopted to improve the feasibility quality of solutions in the conventional population. CMOEA-DCC was compared with seven representative constrained multi-objective evolutionary algorithms (CMOEAs) across various test problems and real-world application scenarios. Through an in-depth analysis of a series of experimental results, it can be concluded that CMOEA-DCC significantly outperforms the other competing algorithms in terms of performance. Full article