Search Results (27)

Search Parameters:
Keywords = Lagrangian Dual Function

29 pages, 11326 KB  
Article
Constrained Soft Actor–Critic for Joint Computation Offloading and Resource Allocation in UAV-Assisted Edge Computing
by Nawazish Muhammad Alvi, Waqas Muhammad Alvi, Xiaolong Zhou, Jun Li and Yifei Wei
Sensors 2026, 26(4), 1149; https://doi.org/10.3390/s26041149 - 10 Feb 2026
Viewed by 607
Abstract
Unmanned Aerial Vehicle (UAV)-assisted edge computing supports latency-sensitive applications by offloading computational tasks to ground-based servers. However, determining optimal resource allocation under strict latency constraints and stochastic channel conditions remains challenging. This paper addresses the joint computation partitioning and power allocation problem for UAV-assisted edge computing systems. We formulate the problem as a Constrained Markov Decision Process (CMDP) that explicitly models latency constraints, rather than relying on implicit reward shaping. To solve this CMDP, we propose Constrained Soft Actor–Critic (C-SAC), a deep reinforcement learning algorithm that combines maximum-entropy policy optimization with Lagrangian dual methods. C-SAC employs a dedicated constraint critic network to estimate long-term constraint violations and an adaptive Lagrange multiplier that automatically balances energy efficiency against latency satisfaction without manual tuning. Extensive experiments demonstrate that C-SAC achieves an 18.9% constraint violation rate, a 60.6-percentage-point improvement over unconstrained Soft Actor–Critic (79.5%) and a 22.4-percentage-point improvement over deterministic TD3-Lagrangian (41.3%). The learned policies exhibit strong channel-adaptive behavior, with a correlation coefficient of 0.894 between the local computation ratio and channel quality, despite the absence of explicit channel modeling in the reward function. Ablation studies confirm that both adaptive mechanisms are essential, while sensitivity analyses show that C-SAC maintains robust performance, with violation rates varying by less than 2 percentage points even as channel variability triples. These results establish constrained reinforcement learning as an effective approach for reliable UAV edge computing under stringent quality-of-service requirements. Full article
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
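The adaptive-multiplier mechanism this abstract describes has a standard form; a minimal sketch of projected dual ascent on a single latency constraint follows. The function name, step size, and the toy feedback loop are illustrative assumptions, not details from the paper:

```python
def dual_ascent_lambda(lam, avg_constraint_cost, budget, lr=0.01):
    """Projected dual ascent on a Lagrange multiplier.

    The multiplier grows while the estimated long-term constraint cost
    (e.g. latency violation) exceeds its budget, and decays back toward
    zero once the policy satisfies the constraint.
    """
    return max(0.0, lam + lr * (avg_constraint_cost - budget))

# Toy trace: a policy whose constraint cost falls as the multiplier rises.
lam, budget = 0.0, 1.0
cost = 3.0
for _ in range(200):
    lam = dual_ascent_lambda(lam, cost, budget, lr=0.05)
    cost = max(budget * 0.5, cost - 0.02 * lam)  # stand-in for policy improvement
```

In the actual C-SAC, the constraint cost would come from the constraint critic rather than a scalar trace, but the projected update itself is the textbook one.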

27 pages, 4986 KB  
Article
DI-WOA: Symmetry-Aware Dual-Improved Whale Optimization for Monetized Cloud Compute Scheduling with Dual-Rollback Constraint Handling
by Yuanzhe Kuang, Zhen Zhang and Hanshen Li
Symmetry 2026, 18(2), 303; https://doi.org/10.3390/sym18020303 - 6 Feb 2026
Viewed by 292
Abstract
With the continuous growth in the scale of engineering simulation and intelligent manufacturing workflows, more and more problem-solving tasks are migrating to cloud computing platforms to obtain elastic computing power. However, a core operational challenge for cloud platforms lies in the difficulty of stably obtaining high-quality scheduling solutions that are both efficient and free of symmetric redundancy, due to the coupling of multiple constraints, partial resource interchangeability, inconsistent multi-objective evaluation scales, and heterogeneous resource fluctuations. To address this, this paper proposes a Dual-Improved Whale Optimization Algorithm (DI-WOA) accompanied by a modeling framework featuring discrete–continuous divide-and-conquer modeling, a unified monetization mechanism of the objective function, and separation of soft/hard constraints; its iterative trajectory follows an augmented Lagrangian dual-rollback mechanism, while being rooted in a three-layer “discrete gene–real-valued encoding–decoder” structure. Scalability experiments show that as the number of tasks J increases, the DI-WOA ranks optimal or sub-optimal at most scale points, indicating its effectiveness in reducing unified billing costs even under intensified task coupling and resource contention. Ablation experiment results demonstrate that the complete DI-WOA achieves final objective values (OBJ) 8.33%, 5.45%, and 13.31% lower than the baseline, the variant without dual update (w/o dual), and the variant without perturbation (w/o perturb), respectively, significantly enhancing convergence performance and final solution quality on this scheduling model. In robustness experiments, the DI-WOA exhibits the lowest or second-lowest OBJ and soft constraint violation, indicating higher controllability under perturbations. 
In multi-workload generalization experiments, the DI-WOA achieves the optimal or sub-optimal mean OBJ across all scenarios with H = 3/4, leading the second-best algorithm by up to 13.85%, demonstrating good adaptability to workload variations. A comprehensive analysis of the experimental results shows that the DI-WOA is of practical value for stably producing high-quality scheduling solutions that are efficient and free of symmetric redundancy in complex and diverse environments. Full article
(This article belongs to the Section Computer)
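DI-WOA builds on the classic Whale Optimization Algorithm; as background, the original encircling-prey move (not the paper's dual-improved variant) can be sketched as:

```python
import random

def woa_encircle(x, best, a, rng=random):
    """Classic WOA encircling-prey move toward the current best solution.

    a is the coefficient that decays linearly from 2 to 0 over the run;
    A = 2*a*r1 - a and C = 2*r2 follow the original WOA formulation.
    Small |A| (late in the run) means local refinement around best.
    """
    r1, r2 = rng.random(), rng.random()
    A = 2 * a * r1 - a
    C = 2 * r2
    D = [abs(C * b - xi) for xi, b in zip(x, best)]
    return [b - A * d for b, d in zip(best, D)]

random.seed(0)
x = [5.0, -3.0]
best = [0.0, 0.0]
new_x = woa_encircle(x, best, a=0.5)  # late-phase move: small |A|
```

With a = 0 the move collapses onto the current best, which is why the decay schedule shifts the search from exploration to exploitation.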

33 pages, 3714 KB  
Article
SADQN-Based Residual Energy-Aware Beamforming for LoRa-Enabled RF Energy Harvesting for Disaster-Tolerant Underground Mining Networks
by Hilary Kelechi Anabi, Samuel Frimpong and Sanjay Madria
Sensors 2026, 26(2), 730; https://doi.org/10.3390/s26020730 - 21 Jan 2026
Viewed by 273
Abstract
The end-to-end efficiency of radio-frequency (RF)-powered wireless communication networks (WPCNs) in post-disaster underground mine environments can be enhanced through adaptive beamforming. The primary challenges in such scenarios include (i) identifying the most energy-constrained nodes, i.e., nodes with the lowest residual energy, to prevent the loss of tracking and localization functionality; (ii) avoiding reliance on the computationally intensive channel state information (CSI) acquisition process; and (iii) ensuring long-range RF wireless power transfer (LoRa-RFWPT). To address these issues, this paper introduces an adaptive and safety-aware deep reinforcement learning (DRL) framework for energy beamforming in LoRa-enabled underground disaster networks. Specifically, we develop a Safe Adaptive Deep Q-Network (SADQN) that incorporates residual energy awareness to enhance energy harvesting under mobility, while also formulating a SADQN approach with dual-variable updates to mitigate constraint violations associated with fairness, minimum energy thresholds, duty cycle, and uplink utilization. A mathematical model is proposed to capture the dynamics of post-disaster underground mine environments, and the problem is formulated as a constrained Markov decision process (CMDP). To address the inherent NP-hardness of this constrained reinforcement learning (CRL) formulation, we employ a Lagrangian relaxation technique to reduce complexity and derive near-optimal solutions. Comprehensive simulation results demonstrate that SADQN significantly outperforms all baseline algorithms: increasing cumulative harvested energy by approximately 11% versus DQN, 15% versus Safe-DQN, and 40% versus PSO, and achieving substantial gains over random beamforming and non-beamforming approaches.
The proposed SADQN framework maintains fairness indices above 0.90, converges 27% faster than Safe-DQN and 43% faster than standard DQN in terms of episodes, and demonstrates superior stability, with 33% lower performance variance than Safe-DQN and 66% lower than DQN after convergence, making it particularly suitable for safety-critical underground mining disaster scenarios where reliable energy delivery and operational stability are paramount. Full article
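The constraint handling described above, folding fairness, minimum-energy, and duty-cycle costs into the objective via Lagrangian relaxation with per-constraint dual variables, has a generic shape. A sketch with hypothetical constraint names and budgets, plus Jain's index (the usual definition behind "fairness indices above 0.90"):

```python
def jain_fairness(x):
    """Jain's fairness index: 1.0 means a perfectly even allocation."""
    return sum(x) ** 2 / (len(x) * sum(v * v for v in x))

def relaxed_reward(reward, costs, lams):
    """Lagrangian-relaxed reward: base reward minus multiplier-weighted
    constraint costs (one multiplier per constraint)."""
    return reward - sum(lams[k] * costs[k] for k in costs)

def update_multipliers(lams, costs, budgets, lr=0.01):
    """Projected dual ascent, applied per constraint."""
    return {k: max(0.0, lams[k] + lr * (costs[k] - budgets[k])) for k in lams}

# Illustrative constraint set mirroring the abstract's categories.
costs = {"fairness": 0.2, "min_energy": 0.0, "duty_cycle": 0.5}
budgets = {"fairness": 0.1, "min_energy": 0.1, "duty_cycle": 0.3}
lams = {k: 0.0 for k in costs}
lams = update_multipliers(lams, costs, budgets, lr=0.1)
shaped = relaxed_reward(1.0, costs, lams)
```

Only violated constraints acquire positive multipliers; satisfied ones stay at zero and leave the reward untouched.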

32 pages, 3675 KB  
Article
Gibbs Quantum Fields Computed by Action Mechanics Recycle Emissions Absorbed by Greenhouse Gases, Optimising the Elevation of the Troposphere and Surface Temperature Using the Virial Theorem
by Ivan R. Kennedy, Migdat Hodzic and Angus N. Crossan
Thermo 2025, 5(3), 25; https://doi.org/10.3390/thermo5030025 - 22 Jul 2025
Viewed by 1442
Abstract
Atmospheric climate science lacks the capacity to integrate thermodynamics with the gravitational potential of air in a classical quantum theory. To what extent can we identify Carnot’s ideal heat engine cycle in reversible isothermal and isentropic phases between dual temperatures partitioning heat flow with coupled work processes in the atmosphere? Using statistical action mechanics to describe Carnot’s cycle, the maximum rate of work possible can be integrated for the working gases as equal to variations in the absolute Gibbs energy, estimated as sustaining field quanta consistent with Carnot’s definition of heat as caloric. His treatise of 1824 even gave equations expressing work potential as a function of differences in temperature and the logarithm of the change in density and volume. Second, Carnot’s mechanical principle of cooling caused by gas dilation or warming by compression can be applied to tropospheric heat–work cycles in anticyclones and cyclones. Third, the virial theorem of Lagrange and Clausius based on least action predicts a more accurate temperature gradient with altitude near 6.5–6.9 °C per km, requiring that the Gibbs rotational quantum energies of gas molecules exchange reversibly with gravitational potential. This predicts a diminished role for the radiative transfer of energy from the atmosphere to the surface, in contrast to the Trenberth global radiative budget of ≈330 watts per square metre as downwelling radiation. The spectral absorptivity of greenhouse gas for surface radiation into the troposphere enables thermal recycling, sustaining air masses in Lagrangian action. This obviates the current paradigm of cooling with altitude by adiabatic expansion. The virial-action theorem must also control non-reversible heat–work Carnot cycles, with turbulent friction raising the surface temperature. 
Dissipative surface warming raises the surface pressure by heating, sustaining the weight of the atmosphere to varying altitudes according to latitude and seasonal angles of insolation. New predictions for experimental testing are now emerging from this virial-action hypothesis for climate, linking vortical energy potential with convective and turbulent exchanges of work and heat, proposed as the efficient cause setting the thermal temperature of surface materials. Full article

37 pages, 33539 KB  
Article
Domain-Separated Quantum Neural Network for Truss Structural Analysis with Mechanics-Informed Constraints
by Hyeonju Ha, Sudeok Shon and Seungjae Lee
Biomimetics 2025, 10(6), 407; https://doi.org/10.3390/biomimetics10060407 - 16 Jun 2025
Cited by 1 | Viewed by 1380
Abstract
This study proposes an index-based quantum neural network (QNN) model, built upon a variational quantum circuit (VQC), as a surrogate framework for the static analysis of truss structures. Unlike coordinate-based models, the proposed QNN uses discrete member and node indices as inputs, and it adopts a separate-domain strategy that partitions the structure for parallel training. This architecture reflects the way nature organizes and optimizes complex systems, thereby enhancing both flexibility and scalability. Independent quantum circuits are assigned to each separate domain, and a mechanics-informed loss function based on the force method is formulated within a Lagrangian dual framework to embed physical constraints directly into the training process. As a result, the model achieves high prediction accuracy and fast convergence, even under complex structural conditions with relatively few parameters. Numerical experiments on 2D and 3D truss structures show that the QNN reduces the number of parameters by up to 64% compared to conventional neural networks, while achieving higher accuracy. Even within the same QNN architecture, the separate-domain approach outperforms the single-domain model with a 6.25% reduction in parameters. The proposed index-based QNN model has demonstrated practical applicability for structural analysis and shows strong potential as a quantum-based numerical analysis tool for future applications in building structure optimization and broader engineering domains. Full article
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)

27 pages, 624 KB  
Article
Convex Optimization of Markov Decision Processes Based on Z Transform: A Theoretical Framework for Two-Space Decomposition and Linear Programming Reconstruction
by Shiqing Qiu, Haoyu Wang, Yuxin Zhang, Zong Ke and Zichao Li
Mathematics 2025, 13(11), 1765; https://doi.org/10.3390/math13111765 - 26 May 2025
Cited by 6 | Viewed by 2428
Abstract
This study establishes a novel mathematical framework for stochastic maintenance optimization in production systems by integrating Markov decision processes (MDPs) with convex programming theory. We develop a Z-transformation-based dual-space decomposition method to reconstruct MDPs into a solvable linear programming form, resolving the inherent instability of traditional models caused by uncertain initial conditions and non-stationary state transitions. The proposed approach introduces three mathematical innovations: (i) a spectral clustering mechanism that reduces state-space dimensionality while preserving Markovian properties, (ii) a Lagrangian dual formulation with adaptive penalty functions to handle operational constraints, and (iii) a warm start algorithm accelerating convergence in high-dimensional convex optimization. Theoretical analysis proves that the derived policy achieves stability in probabilistic transitions through martingale convergence arguments, demonstrating structural invariance to initial distributions. Experimental validations on production processes reveal that our model reduces long-term maintenance costs by 36.17% compared to Monte Carlo simulations (1500 vs. 2350 average cost) and improves computational efficiency by 14.29% over Q-learning methods. Sensitivity analyses confirm robustness across Weibull-distributed failure regimes (shape parameter β ∈ [1.2, 4.8]) and varying resource constraints. Full article
(This article belongs to the Special Issue Markov Chain Models and Applications: Latest Advances and Prospects)
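Reconstructing an MDP as a linear program has a well-known discounted-horizon analogue: minimize a weighted sum of state values subject to the Bellman inequalities. A toy two-state sketch (illustrative only, not the paper's Z-transform formulation), cross-checked against value iteration:

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# P[a][s, s'] transition matrices and r[a][s] rewards for a 2-state toy MDP.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.1, 0.9], [0.6, 0.4]])]
r = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]

# Bellman inequalities v >= r_a + gamma * P_a v, rewritten for linprog
# as (gamma * P_a - I) v <= -r_a; minimizing 1'v picks out v*.
A_ub = np.vstack([gamma * Pa - np.eye(2) for Pa in P])
b_ub = np.concatenate([-ra for ra in r])
res = linprog(c=np.ones(2), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2, method="highs")
v_lp = res.x

# Cross-check against plain value iteration.
v = np.zeros(2)
for _ in range(2000):
    v = np.max([ra + gamma * Pa @ v for ra, Pa in zip(r, P)], axis=0)
```

The LP solution coincides with the value-iteration fixed point because the optimal value function is the least element satisfying all Bellman inequalities.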

36 pages, 22818 KB  
Article
Index-Based Neural Network Framework for Truss Structural Analysis via a Mechanics-Informed Augmented Lagrangian Approach
by Hyeonju Ha, Sudeok Shon and Seungjae Lee
Buildings 2025, 15(10), 1753; https://doi.org/10.3390/buildings15101753 - 21 May 2025
Viewed by 1777
Abstract
This study proposes an Index-Based Neural Network (IBNN) framework for the static analysis of truss structures, employing a Lagrangian dual optimization technique grounded in the force method. A truss is a discrete structural system composed of linear members connected to nodes. Despite their geometric simplicity, analysis of large-scale truss systems requires significant computational resources. The proposed model simplifies the input structure and enhances the scalability of the model using member and node indices as inputs instead of spatial coordinates. The IBNN framework approximates member forces and nodal displacements using separate neural networks and incorporates structural equations derived from the force method as mechanics-informed constraints within the loss function. Training was conducted using the Augmented Lagrangian Method (ALM), which improves the convergence stability and learning efficiency through a combination of penalty terms and Lagrange multipliers. The efficiency and accuracy of the framework were numerically validated using various examples, including spatial trusses, square grid-type space frames, lattice domes, and domes exhibiting radial flow characteristics. Multi-index mapping and domain decomposition techniques contribute to enhanced analysis performance, yielding superior prediction accuracy and numerical stability compared to conventional methods. Furthermore, by reflecting the structured and discrete nature of structural problems, the proposed framework demonstrates high potential for integration with next-generation neural network models such as Quantum Neural Networks (QNNs). Full article
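The Augmented Lagrangian Method itself is standard; a minimal sketch on a toy equality-constrained quadratic (not the paper's force-method constraints) showing the combination of penalty term and Lagrange multiplier:

```python
import numpy as np

def alm_minimize(f_grad, c, c_grad, x0, rho=10.0, iters=50, inner=200, lr=1e-2):
    """Augmented Lagrangian Method: inner gradient descent on
    L(x) = f(x) + lam * c(x) + (rho / 2) * c(x)**2, then a multiplier update."""
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(iters):
        for _ in range(inner):
            g = f_grad(x) + (lam + rho * c(x)) * c_grad(x)
            x = x - lr * g
        lam += rho * c(x)  # classic multiplier update drives c(x) to zero
    return x, lam

# Toy problem: min x0^2 + x1^2  s.t.  x0 + x1 = 1  (solution x = [0.5, 0.5]).
x, lam = alm_minimize(
    f_grad=lambda x: 2 * x,
    c=lambda x: x[0] + x[1] - 1.0,
    c_grad=lambda x: np.ones(2),
    x0=[0.0, 0.0],
)
```

Against a pure penalty method, the multiplier term lets the constraint be met without driving rho to infinity, which is the convergence-stability benefit the abstract attributes to ALM.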

22 pages, 335 KB  
Article
Non-Minimal Einstein–Dirac-Axion Theory: Spinorization of the Early Universe Induced by Curvature
by Alexander B. Balakin and Anna O. Efremova
Symmetry 2025, 17(5), 663; https://doi.org/10.3390/sym17050663 - 27 Apr 2025
Cited by 1 | Viewed by 1050
Abstract
A new non-minimal version of the Einstein–Dirac-axion theory is established. This version of the non-minimal theory describing the interaction of gravitational, spinor, and axion fields is of the second order in derivatives in the context of the Effective Field Theory and is of the first order in the spinor particle number density. The model Lagrangian contains four parameters of non-minimal coupling and includes, in addition to the Riemann tensor, Ricci tensor, and Ricci scalar, the left-dual and right-dual curvature tensors. The pseudoscalar field appears in the Lagrangian in terms of trigonometric functions, providing the discrete symmetry associated with axions. The coupled system of extended master equations for the gravitational, spinor, and axion fields is derived; the structure of the new non-minimal sources that appear in these master equations is discussed. Application of the established theory to the isotropic homogeneous cosmological model is considered; new exact solutions are presented for a few model sets of guiding non-minimal parameters. A special solution is presented, which describes an exponential growth of the spinor number density; this solution shows that spinor particles (massive fermions and massless neutrinos) can be born in the early Universe due to the non-minimal interaction with the spacetime curvature. Full article
(This article belongs to the Special Issue Symmetry: Feature Papers 2025)
31 pages, 8127 KB  
Article
Data-Driven Kinematic Model for the End-Effector Pose Control of a Manipulator Robot
by Josué Goméz-Casas, Carlos A. Toro-Arcila, Nelly Abigaíl Rodríguez-Rosales, Jonathan Obregón-Flores, Daniela E. Ortíz-Ramos, Jesús Fernando Martínez-Villafañe and Oziel Gómez-Casas
Processes 2024, 12(12), 2831; https://doi.org/10.3390/pr12122831 - 10 Dec 2024
Cited by 3 | Viewed by 2768
Abstract
This paper presents a data-driven kinematic model for end-effector pose control applied to a variety of manipulator robots, focusing on the entire end-effector’s pose (position and orientation). The measured signals of the full pose and their computed derivatives, along with a linear combination of an estimated Jacobian matrix and a vector of joint velocities, generate a model estimation error. The Jacobian matrix is estimated using the Pseudo Jacobian Matrix (PJM) algorithm, which requires tuning only the step and weight parameters that scale the convergence of the model estimation error. The proposed control law is derived in two stages: the first one is part of an objective function minimization, and the second one is a constraint in a quasi-Lagrangian function. The control design parameters guarantee the control error convergence in a closed-loop configuration with adaptive behavior in terms of the dynamics of the estimated Jacobian matrix. The novelty of the approach lies in its ability to achieve superior tracking performance across different manipulator robots, validated through simulations. Quantitative results show that, compared to a classical inverse-kinematics approach, the proposed method achieves rapid convergence of performance indices (e.g., Root Mean Square Error (RMSE) reduced to near-zero in two cycles vs. a steady-state RMSE of 20 in the classical approach). Additionally, the proposed method minimizes joint drift, maintaining an RMSE of approximately 0.3 compared to 1.5 under the classical scheme. The control was validated by means of simulations featuring a UR5e manipulator with six Degrees of Freedom (DOF), a KUKA Youbot with eight DOF, and a KUKA Youbot Dual with thirteen DOF. The stability analysis of the closed-loop controller is demonstrated by means of the Lyapunov stability conditions. Full article
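A Jacobian estimator of the kind described, driving a model-estimation error to zero with tunable step and weight parameters, resembles a Broyden-style rank-one update. The following is a sketch under that assumption, not the paper's exact PJM rule:

```python
import numpy as np

def pjm_update(J, dx, dq, step=0.5, eps=1e-9):
    """Rank-one (Broyden-style) update of an estimated Jacobian.

    dx: measured end-effector velocity; dq: joint velocity.
    The model-estimation error dx - J @ dq is driven toward zero
    along the most recent joint-motion direction.
    """
    err = dx - J @ dq
    return J + step * np.outer(err, dq) / (dq @ dq + eps)

# With persistently exciting motions, the estimate converges to the
# true (here constant) Jacobian without any kinematic model.
rng = np.random.default_rng(0)
J_true = rng.standard_normal((3, 4))
J_est = np.zeros((3, 4))
for _ in range(500):
    dq = rng.standard_normal(4)
    J_est = pjm_update(J_est, J_true @ dq, dq)
```

Each update only corrects the component of the error seen along dq, which is why varied joint motions are needed for the estimate to converge in all directions.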

35 pages, 2011 KB  
Article
Decomposition and Symmetric Kernel Deep Neural Network Fuzzy Support Vector Machine
by Karim El Moutaouakil, Mohammed Roudani, Azedine Ouhmid, Anton Zhilenkov and Saleh Mobayen
Symmetry 2024, 16(12), 1585; https://doi.org/10.3390/sym16121585 - 27 Nov 2024
Cited by 4 | Viewed by 1990
Abstract
Algorithms involving kernel functions, such as support vector machine (SVM), have attracted huge attention within the artificial learning communities. The performance of these algorithms is greatly influenced by outliers and the choice of kernel functions. This paper introduces a new version of SVM named Deep Decomposition Neural Network Fuzzy SVM (DDNN-FSVM). To this end, we consider an auto-encoder (AE) deep neural network with three layers: input, hidden, and output. Unusually, the AE’s hidden layer comprises a number of neurons greater than the dimension of the input samples, which guarantees linear data separation. The encoder operator is then introduced into the FSVM’s dual to map the training samples to high-dimension spaces. To learn the support vectors and autoencoder parameters, we introduce the loss function and regularization terms in the FSVM dual. To learn from large-scale data, we decompose the resulting model into three small-dimensional submodels using Lagrangian decomposition. To solve the resulting problems, we use SMO, ISDA, and SCG for optimization problems involving large-scale data. We demonstrate that the optimal values of the three submodels solved in parallel provide a good lower bound for the optimal value of the initial model. In addition, thanks to its use of fuzzy weights, DDNN-FSVM is resistant to outliers. Moreover, DDNN-FSVM simultaneously learns the appropriate kernel function and separation path. We tested DDNN-FSVM on several well-known digital and image datasets and compared it to well-known classifiers on the basis of accuracy, precision, f-measure, g-means, and recall. On average, DDNN-FSVM improved on the performance of the classic FSVM across all datasets and outperformed several well-known classifiers. Full article
(This article belongs to the Section Computer)

26 pages, 773 KB  
Article
A Momentum-Based Adaptive Primal–Dual Stochastic Gradient Method for Non-Convex Programs with Expectation Constraints
by Rulei Qi, Dan Xue and Yujia Zhai
Mathematics 2024, 12(15), 2393; https://doi.org/10.3390/math12152393 - 31 Jul 2024
Cited by 1 | Viewed by 1740
Abstract
In this paper, we propose a stochastic primal-dual adaptive method based on an inexact augmented Lagrangian function to solve non-convex programs, referred to as the SPDAM. Different from existing methods, SPDAM incorporates adaptive step sizes and momentum-based search directions, which improve the convergence rate. At each iteration, an inexact augmented Lagrangian subproblem is solved to update the primal variables. A post-processing step is designed to adjust the primal variables to meet the accuracy requirement, and the adjusted primal variable is used to compute the dual variable. Under appropriate assumptions, we prove that the method converges to the ε-KKT point of the primal problem, and a complexity result of SPDAM of O(ε^(−11/2)) is established. This is better than the best-known O(ε^(−6)) result. The numerical experimental results validate that this method outperforms several existing methods with fewer iterations and a lower running time. Full article
(This article belongs to the Special Issue Stochastic System Analysis and Control)
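The ingredients named in the abstract, momentum-averaged primal directions plus projected dual ascent on noisy constraint samples, can be sketched on a toy expectation-constrained problem. This is illustrative only; it omits SPDAM's inexact augmented Lagrangian subproblem and post-processing step:

```python
import numpy as np

def pd_momentum(x0, grad_f, g, grad_g, steps=3000, lr=0.01, beta=0.9, seed=0):
    """Momentum-based primal-dual stochastic gradient for
    min f(x)  s.t.  E[g(x, xi)] <= 0, using noisy constraint samples xi."""
    rng = np.random.default_rng(seed)
    x, lam, m = np.asarray(x0, float), 0.0, 0.0
    for _ in range(steps):
        xi = rng.normal()                       # stochastic sample
        d = grad_f(x) + lam * grad_g(x, xi)     # primal search direction
        m = beta * m + (1 - beta) * d           # momentum averaging
        x = x - lr * m
        lam = max(0.0, lam + lr * g(x, xi))     # projected dual ascent
    return x, lam

# Toy: min (x - 2)^2  s.t.  E[x - 1 + xi] <= 0, i.e. x <= 1; solution x = 1.
x, lam = pd_momentum(
    x0=0.0,
    grad_f=lambda x: 2 * (x - 2),
    g=lambda x, xi: x - 1 + xi,
    grad_g=lambda x, xi: 1.0,
)
```

The momentum average m smooths the noisy primal directions, while the projection max(0, ·) keeps the dual variable feasible; at the KKT point of the toy problem, lam settles near 2.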

28 pages, 17751 KB  
Article
An Effective Arbitrary Lagrangian-Eulerian-Lattice Boltzmann Flux Solver Integrated with the Mode Superposition Method for Flutter Prediction
by Tianchi Gong, Feng Wang and Yan Wang
Appl. Sci. 2024, 14(9), 3939; https://doi.org/10.3390/app14093939 - 5 May 2024
Cited by 2 | Viewed by 2571
Abstract
An arbitrary Lagrangian-Eulerian lattice Boltzmann flux solver (ALE-LBFS) coupled with the mode superposition method is proposed in this work and applied to study two- and three-dimensional flutter phenomenon on dynamic unstructured meshes. The ALE-LBFS is applied to predict the flow field by using the vertex-centered finite volume method with an implicit dual time-stepping method. The convective fluxes are evaluated by using lattice Boltzmann solutions of the non-free D1Q4 lattice model and the viscous fluxes are obtained directly. Additional fluxes due to mesh motion are calculated directly by using local conservative variables and mesh velocity. The mode superposition method is used to solve for the dynamic response of solid structures. The exchange of aerodynamic forces and structural motions is achieved through interpolation with the radial basis function. The flow solver and the structural solver are tightly coupled so that the restriction on the physical time step can be removed. In addition, geometric conservation law (GCL) is also applied to guarantee conservation laws. The proposed method is tested through a series of simulations about moving boundaries and fluid–structure interaction problems in 2D and 3D. The present results show good consistency against the experiments and numerical simulations obtained from the literature. It is also shown that the proposed method not only can effectively predict the flutter boundaries in both 2D and 3D cases but can also accurately capture the transonic dip phenomenon. The tight coupling of the ALE-LBFS and the mode superposition method presents an effective and powerful tool for flutter prediction and can be applied to many essential aeronautical problems. Full article
(This article belongs to the Section Aerospace Science and Engineering)

26 pages, 7722 KB  
Article
A New Lagrangian Problem Crossover—A Systematic Review and Meta-Analysis of Crossover Standards
by Aso M. Aladdin and Tarik A. Rashid
Systems 2023, 11(3), 144; https://doi.org/10.3390/systems11030144 - 9 Mar 2023
Cited by 17 | Viewed by 4687
Abstract
The performance of most evolutionary metaheuristic algorithms relies on various operators. The crossover operator is a standard based on population-based algorithms, which is divided into two types: application-dependent and application-independent crossover operators. In the process of optimization, these standards always help to select the best-fit point. The high efficiency of crossover operators allows engineers to minimize errors in engineering application optimization while saving time and avoiding overpricing. There are two crucial objectives behind this paper: first, we provide an overview of the crossover standards classification that has been used by researchers for solving engineering operations and problem representation. Second, we propose a novel crossover standard based on the Lagrangian Dual Function (LDF) to enhance the formulation of the Lagrangian Problem Crossover (LPX). The LPX, over 100 generations of different pairs of parent chromosomes, is compared to Simulated Binary Crossover (SBX) standards and Blended Crossover (BX) for real-coded crossovers. Three unimodal test functions with various random values show that LPX has better performance in most cases and comparative results in other cases. Moreover, the LPB algorithm is used to compare LPX with SBX, BX, and Qubit Crossover (Qubit-X) operators to demonstrate accuracy and performance during exploitation evaluations. Finally, the results of the proposed crossover standard operator are demonstrated, proved, and analyzed statistically by the Wilcoxon signed-rank test. Full article
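Of the baseline operators mentioned, Blended Crossover has a particularly compact textbook form (BLX-α); a sketch for real-coded chromosomes follows, with the usual textbook parameterization rather than values from the paper:

```python
import random

def blend_crossover(p1, p2, alpha=0.5, rng=random):
    """BLX-alpha blended crossover for real-coded chromosomes.

    Each child gene is drawn uniformly from the parents' interval
    [min, max] extended by alpha * (max - min) on both sides, so
    alpha controls how far offspring may explore beyond the parents.
    """
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(rng.uniform(lo - alpha * d, hi + alpha * d))
    return child

random.seed(1)
child = blend_crossover([1.0, 4.0], [3.0, 2.0])
```

With alpha = 0 the operator degenerates to flat crossover (children strictly inside the parents' interval), which is the exploitation-only limit the comparison studies above probe.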
17 pages, 321 KB  
Article
Duality Results for a Class of Constrained Robust Nonlinear Optimization Problems
by Savin Treanţă and Tareq Saeed
Mathematics 2023, 11(1), 192; https://doi.org/10.3390/math11010192 - 29 Dec 2022
Cited by 5 | Viewed by 1510
Abstract
In this paper, we establish various duality results for a new class of constrained robust nonlinear optimization problems. For this new class of problems, involving functionals of (path-independent) curvilinear integral type and mixed constraints governed by second-order partial derivatives and uncertain data, we formulate and study Wolfe, Mond-Weir, and mixed-type robust dual optimization problems. By considering the concept of a convex curvilinear integral vector functional, determined by controlled second-order Lagrangians with uncertain data, and the notion of a robust weak efficient solution of the considered problem, we create a new mathematical context in which to state and prove the duality theorems. Furthermore, an illustrative application is presented. Full article
(This article belongs to the Special Issue Variational Problems and Applications, 2nd Edition)
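The Wolfe dual named in this abstract follows a classical pattern. In the much simpler smooth finite-dimensional setting (rather than the curvilinear-integral functionals studied in the paper), for a primal problem of minimizing $f(x)$ subject to $g_j(x) \le 0$, it reads:

```latex
% Wolfe dual of:  minimize f(x)  subject to  g_j(x) <= 0,  j = 1,...,m
\begin{align*}
\text{(WD)}\quad \max_{u,\,\lambda}\ & f(u) + \sum_{j=1}^{m} \lambda_j\, g_j(u) \\
\text{s.t.}\quad & \nabla f(u) + \sum_{j=1}^{m} \lambda_j\, \nabla g_j(u) = 0, \\
& \lambda_j \ge 0, \qquad j = 1,\dots,m .
\end{align*}
```

Weak duality (every dual-feasible value bounds every primal-feasible value from below) holds when $f$ and each $g_j$ are convex; the paper's contribution is establishing analogous theorems under convexity of curvilinear integral functionals with uncertain data.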
18 pages, 4080 KB  
Article
Magnetotelluric Regularized Inversion Based on the Multiplier Method
by Deshan Feng, Xuan Su, Xun Wang, Siyuan Ding, Cen Cao, Shuo Liu and Yi Lei
Minerals 2022, 12(10), 1230; https://doi.org/10.3390/min12101230 - 28 Sep 2022
Cited by 2 | Viewed by 2612
Abstract
Magnetotellurics (MT) is an important geophysical method for resource exploration and mineral evaluation. As a direct and effective form of data interpretation, MT inversion is usually formulated as a penalty-function constraint-based optimization strategy. However, conventional MT inversion involves a large number of calculations in the penalty terms and makes it difficult to select exact regularization factors. For this reason, we propose a multiplier-based MT inversion scheme, implemented by introducing the augmented Lagrangian function. This avoids solving the primal-dual subproblem of the penalty function exactly and reduces sensitivity to the regularization factors, thereby improving convergence efficiency and accelerating the optimization calculation of the inversion algorithm. In this study, two models were used to verify the performance of the multiplier method in regularized MT inversion. The first experiment, with an undulating two-layer metal-ore model, verified that the multiplier method effectively prevents the MT inversion from falling into local minima. The second experiment, with a wedge model, showed that the multiplier method is strongly robust, widening the admissible range of the regularization factors and easing their selection. We also tested the feasibility of the multiplier method on field data and compared its results with those of conventional inversion methods to verify its accuracy. Full article
(This article belongs to the Special Issue Electromagnetic Exploration: Theory, Methods and Applications)
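The multiplier method's key advantage over a pure penalty approach, as the abstract notes, is that the penalty weight can stay moderate: constraint satisfaction is driven by the multiplier update instead of an ever-growing penalty. A minimal toy sketch of the generic scheme (an equality-constrained quadratic, far simpler than an MT inversion; all names and step sizes are illustrative):

```python
def augmented_lagrangian(f_grad, c, c_grad, x0, rho=10.0, lam=0.0,
                         outer=20, inner=200, lr=0.01):
    """Method of multipliers for: min f(x) subject to c(x) = 0.

    Inner loop: gradient descent on the augmented Lagrangian
        L(x) = f(x) + lam * c(x) + (rho/2) * c(x)**2
    Outer loop: multiplier update  lam <- lam + rho * c(x).
    A fixed, moderate rho suffices; the penalty weight need not
    grow without bound as in a pure penalty method.
    """
    x = list(x0)
    for _ in range(outer):
        for _ in range(inner):
            cx = c(x)
            # grad L = grad f + (lam + rho * c(x)) * grad c
            g = [gf + (lam + rho * cx) * gc
                 for gf, gc in zip(f_grad(x), c_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += rho * c(x)
    return x, lam

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1
# (solution x = (0.5, 0.5), optimal multiplier lam = -1)
f_grad = lambda x: [2.0 * x[0], 2.0 * x[1]]
c = lambda x: x[0] + x[1] - 1.0
c_grad = lambda x: [1.0, 1.0]
x_opt, lam_opt = augmented_lagrangian(f_grad, c, c_grad, [0.0, 0.0])
```

In the MT setting, the analogous outer update adjusts the multipliers attached to the regularization constraints, which is what loosens the dependence on an exactly tuned regularization factor.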