Search Results (49)

Search Parameters:
Keywords = semi-definite programming model

28 pages, 978 KB  
Article
Computable Reformulation of Data-Driven Distributionally Robust Chance Constraints: Validated by Solution of Capacitated Lot-Sizing Problems
by Hua Deng and Zhong Wan
Mathematics 2026, 14(2), 331; https://doi.org/10.3390/math14020331 - 19 Jan 2026
Viewed by 97
Abstract
Uncertainty in optimization models often causes awkward properties in their deterministic equivalent formulations (DEFs), even for simple linear models. Chance-constrained programming is a reasonable tool for handling optimization problems with random parameters in objective functions and constraints, but it assumes that the distribution of these random parameters is known, and its DEF often involves the complicated computation of multiple integrals, which impedes its wide application. In this paper, for optimization models with chance constraints, the historical data of the random model parameters are first exploited to construct an adaptive approximate density function by incorporating piecewise linear interpolation into the well-known histogram method, thereby removing the assumption of a known distribution. In view of this estimate, a novel confidence set involving only finitely many variables is then constructed to describe all the potential distributions of the random parameters, and a computable reformulation of data-driven distributionally robust chance constraints is proposed. By virtue of this confidence set, it is proven that the deterministic equivalent constraints reduce to several ordinary constraints in line with the principles of distributionally robust optimization, without the need to solve complicated semi-definite programming problems, compute multiple integrals, or solve additional auxiliary optimization problems, as in existing works. The proposed method is further validated on the stochastic multiperiod capacitated lot-sizing problem, and the numerical results demonstrate that: (1) the proposed method significantly reduces the computational time needed to find a robust optimal production strategy compared with similar methods in the literature; (2) the optimal production strategy provided by our method maintains moderate conservatism, i.e., it achieves a better trade-off between cost-effectiveness and robustness than existing methods.
(This article belongs to the Section D: Statistics and Operational Research)
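The density construction described in this abstract (a histogram estimate smoothed by piecewise linear interpolation) is easy to prototype. Below is a minimal sketch under our own assumptions (equal-width bins, interpolation between bin midpoints, synthetic data), not the authors' implementation:

```python
import numpy as np

def piecewise_linear_density(samples, bins=20):
    """Histogram density estimate smoothed by piecewise linear
    interpolation between bin midpoints (illustrative sketch; a real
    implementation would renormalize so the result integrates to 1)."""
    heights, edges = np.histogram(samples, bins=bins, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    def pdf(x):
        # Linear interpolation between midpoints; zero outside the data range.
        return np.interp(x, mids, heights, left=0.0, right=0.0)
    return pdf

# Illustrative use with synthetic "historical data" for a random parameter.
rng = np.random.default_rng(0)
data = rng.normal(loc=100.0, scale=15.0, size=500)
pdf = piecewise_linear_density(data)
print(pdf(np.array([70.0, 100.0, 130.0])))
```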

33 pages, 570 KB  
Review
From PNP to Practice: Description Complexity and Certificate-First Algorithm Discovery for Hard Problems
by John Abela, Ernest Cachia and Colin Layfield
Mathematics 2026, 14(1), 41; https://doi.org/10.3390/math14010041 - 22 Dec 2025
Viewed by 658
Abstract
The celebrated question of whether P = NP continues to define the boundary between the feasible and the intractable in computer science. In this paper, we revisit the problem from two complementary angles, Time-Relative Description Complexity and automated discovery, adopting an epistemic rather than ontological perspective. Even if polynomial-time algorithms for NP-complete problems do exist, their minimal descriptions may have very high Kolmogorov complexity. This creates what we call an epistemic barrier, making such algorithms effectively undiscoverable by unaided human reasoning. Structural results such as relativization, Natural Proofs, and the Probabilistically Checkable Proofs (PCP) theorem already indicate that classical proof techniques are unlikely to resolve the question, which motivates a more pragmatic shift in emphasis. We therefore ask a different, more practical question: what can systematic computational search achieve within these limits? We propose a certificate-first workflow for algorithmic discovery, in which candidate algorithms are considered scientifically credible only when accompanied by machine-checkable evidence. Examples include Deletion/Resolution Asymmetric Tautology (DRAT)/Flexible RAT (FRAT) proof logs for SAT, Linear Programming (LP)/Semidefinite Programming (SDP) dual bounds for optimization, and other forms of independently verifiable certificates. Within this framework, high-capacity search and learning systems can explore algorithmic spaces far beyond manual (human) design, yet still produce artifacts that are auditable and reproducible. Empirical motivation comes from large language models and other scalable learning systems, where increasing capacity often yields new emergent behaviors even though internal representations remain opaque. This paper is best described as a position and expository essay that synthesizes insights from complexity theory, Kolmogorov complexity, and automated algorithm discovery, using Time-Relative Description Complexity as an organising lens and outlining a pragmatic research direction grounded in verifiable computation. We argue for a shift in emphasis from the elusive search for polynomial-time solutions to the constructive pursuit of high-performance heuristics and approximation methods grounded in verifiable evidence. The overarching message is that capacity plus certification offers a principled path toward better algorithms and clearer scientific limits without presuming a final resolution of P versus NP.
(This article belongs to the Special Issue AI, Machine Learning and Optimization)
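To make the certificate-first idea concrete: an LP dual feasible point is an independently checkable certificate, since weak duality turns it into a bound that a verifier can confirm with a feasibility check and one inner product, without rerunning the solver. A toy sketch (illustrative data, not from the paper):

```python
import numpy as np

# Toy LP: minimize c^T x  subject to  A x >= b, x >= 0.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

def check_dual_certificate(y, tol=1e-9):
    """Verify a claimed dual feasible point y (y >= 0, A^T y <= c).
    By weak duality, b^T y then lower-bounds every primal value."""
    assert np.all(y >= -tol), "certificate violates y >= 0"
    assert np.all(A.T @ y <= c + tol), "certificate violates A^T y <= c"
    return b @ y  # certified lower bound on the optimum

# Anyone can audit this bound in O(nm) time, solver not required.
print("certified lower bound:", check_dual_certificate(np.array([0.5, 0.5])))
```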

20 pages, 781 KB  
Article
Interplanetary Mission Performance Assessment of a TANDEM Electric Thruster-Based Spacecraft
by Alessandro A. Quarta
Appl. Sci. 2025, 15(21), 11711; https://doi.org/10.3390/app152111711 - 2 Nov 2025
Viewed by 551
Abstract
The aim of this paper is to analyze the transfer performance of a spacecraft equipped with a TANDEM electric propulsion system in a classical interplanetary mission scenario targeting Mars, Venus, or a near-Earth asteroid. The TANDEM concept is a coaxial, two-channel Hall-effect thruster recently proposed under ESA’s Technology Development Element program. This innovative propulsion system, currently undergoing experimental characterization, is designed to operate at power levels between 3 kW and 25 kW, delivering a maximum thrust of approximately 1 N. Its architecture allows operation using a single channel (internal or external) or both channels simultaneously to achieve maximum thrust. This inherent flexibility enables the definition of advanced control strategies for future missions employing such a propulsion system. In the context of a heliocentric mission scenario, this paper adopts a simplified thrust model based on actual thruster characteristics and a semi-analytical model for the spacecraft mass breakdown. Transfer performance is evaluated within an optimization framework in terms of time of flight and the corresponding propellant mass consumption as functions of the main spacecraft design parameters.
(This article belongs to the Special Issue Advances in Deep Space Probe Navigation: 2nd Edition)
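The paper's simplified thrust model is not reproduced here; purely as an illustration of the quoted operating envelope (3 to 25 kW, roughly 1 N maximum), a linear thrust-to-power scaling gives:

```python
# Illustrative only: a linear thrust/power scaling anchored to the quoted
# TANDEM envelope (3-25 kW, ~1 N max); not the paper's thrust model.
P_MAX_KW, T_MAX_N = 25.0, 1.0
k = T_MAX_N / P_MAX_KW            # ~0.04 N/kW

def thrust_newtons(power_kw):
    assert 3.0 <= power_kw <= 25.0, "outside stated TANDEM envelope"
    return k * power_kw

print(thrust_newtons(10.0))       # ~0.4 N at 10 kW
```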

21 pages, 1158 KB  
Article
Day-Ahead Coordinated Reactive Power Optimization Dispatching Based on Semidefinite Programming
by Binbin Xu, Mengqi Liu, Yilin Zhong, Peijie Cong, Bo Zhu, Tao Liu, Yujun Li and Zhengchun Du
Energies 2025, 18(20), 5469; https://doi.org/10.3390/en18205469 - 17 Oct 2025
Viewed by 354
Abstract
With the integration of new energy sources, reactive power optimization and dispatch has become an increasingly important research problem. However, reactive power optimization is a mixed-integer nonlinear optimization problem. To handle the integer variables and nonlinear conditions involved, a method for coordinated reactive power optimization and dispatch based on semidefinite programming is proposed. Firstly, a reactive power optimization model considering both discrete and continuous variables is established, with the minimization of total operating cost as the objective function; secondly, the discrete variables are transformed into equality constraints via quadratic equations, yielding a solvable semidefinite programming problem; thirdly, the rank-one constraint is restored by the Iterative Optimization based Gaussian Randomization Method (IOGRM), and an optimal solution equivalent to that of the original problem is obtained. Finally, the correctness and effectiveness of the proposed model and solution method are verified by comparison with second-order cone programming (SOCP) on a modified IEEE test case.
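The pipeline in this abstract (binary variables encoded as the quadratic equality x² = x, an SDP lifting, and Gaussian randomization to recover a rank-one point) can be sketched on a toy binary quadratic program. The sketch below uses plain randomization rather than the paper's IOGRM refinement, and all data are illustrative:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 6
Q = rng.standard_normal((n, n)); Q = Q + Q.T    # toy symmetric cost

# Lifted SDP relaxation of  min x^T Q x,  x in {0,1}^n.
# Y = [[1, x^T], [x, X]] >= 0, with x_i^2 = x_i encoded as X_ii = x_i.
Y = cp.Variable((n + 1, n + 1), symmetric=True)
cons = [Y >> 0, Y[0, 0] == 1]
cons += [Y[i + 1, i + 1] == Y[0, i + 1] for i in range(n)]
cp.Problem(cp.Minimize(cp.trace(Q @ Y[1:, 1:])), cons).solve()

# Gaussian randomization to restore a rank-one (binary) candidate.
x_bar = Y.value[0, 1:]
cov = Y.value[1:, 1:] - np.outer(x_bar, x_bar)
cov = (cov + cov.T) / 2 + 1e-9 * np.eye(n)      # symmetrize for sampling
best, best_val = None, np.inf
for _ in range(200):
    x = (rng.multivariate_normal(x_bar, cov) > 0.5).astype(float)
    val = x @ Q @ x
    if val < best_val:
        best, best_val = x, val
print(best, best_val)
```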

22 pages, 733 KB  
Article
Optimal Innovation-Based Deception Attacks on Multi-Channel Cyber–Physical Systems
by Xinhe Yang, Zhu Ren, Jingquan Zhou and Jing Huang
Electronics 2025, 14(8), 1569; https://doi.org/10.3390/electronics14081569 - 12 Apr 2025
Viewed by 820
Abstract
This article addresses the optimal scheduling problem for linear deception attacks in multi-channel cyber–physical systems. The scenario where the attacker can only attack part of the channels due to energy constraints is considered. The effectiveness and stealthiness of attacks are quantified using state estimation error and Kullback–Leibler divergence, respectively. Unlike existing strategies relying on zero-mean Gaussian distributions, we propose a generalized attack model with Gaussian distributions characterized by time-varying means. Based on this model, an optimal stealthy attack strategy is designed to maximize remote estimation error while ensuring stealthiness. By analyzing correlations among variables in the objective function, the solution is decomposed into a semi-definite programming problem and a 0–1 programming problem. This approach yields the modified innovation and an attack scheduling matrix. Finally, numerical simulations validate the theoretical results.
(This article belongs to the Section Systems & Control Engineering)
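The stealthiness measure above has a convenient closed form: the KL divergence between two Gaussians, which for a time-varying mean shift penalizes the shift through the nominal covariance. A generic sketch (the standard formula, not anything specific to this paper):

```python
import numpy as np

def kl_gauss(mu1, S1, mu0, S0):
    """KL( N(mu1, S1) || N(mu0, S0) ) in closed form."""
    k = len(mu0)
    S0_inv = np.linalg.inv(S0)
    d = mu0 - mu1
    return 0.5 * (np.trace(S0_inv @ S1) + d @ S0_inv @ d - k
                  + np.log(np.linalg.det(S0) / np.linalg.det(S1)))

# Innovation nominally N(0, S0); an attack with a mean shift pays a
# stealthiness cost that grows with the size of the shift.
S0 = np.diag([2.0, 1.0])
print(kl_gauss(np.array([0.5, -0.3]), S0, np.zeros(2), S0))  # ~0.1075
```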

19 pages, 558 KB  
Article
Optimization of Robust and Secure Transmit Beamforming for Dual-Functional MIMO Radar and Communication Systems
by Zhuochen Chen, Ximin Li and Shengqi Zhu
Remote Sens. 2025, 17(5), 816; https://doi.org/10.3390/rs17050816 - 26 Feb 2025
Cited by 1 | Viewed by 1653
Abstract
This paper investigates a multi-antenna, multi-input multi-output (MIMO) dual-functional radar and communication (DFRC) system platform. The system simultaneously detects radar targets and communicates with downlink cellular users. However, the modulated information within the transmitted waveforms may be susceptible to eavesdropping. To ensure the security of information transmission, we introduce non-orthogonal multiple access (NOMA) technology to enhance the security performance of the MIMO-DFRC platform. Initially, we consider a scenario where the channel state information (CSI) of the radar target (eavesdropper) is perfectly known. Using fractional programming (FP) and semidefinite relaxation (SDR) techniques, we maximize the system’s total secrecy rate under the requirements for radar detection performance, communication rate, and system energy, thereby ensuring the security of the system. In the case where the CSI of the radar target (eavesdropper) is unavailable, we propose a robust secure beamforming optimization model. The channel model is represented as a bounded uncertainty set, and by jointly applying first-order Taylor expansion and the S-procedure, we transform the original problem into a tractable one characterized by linear matrix inequalities (LMIs). Numerical results validate the effectiveness and robustness of the proposed approach.
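The SDR step named in this abstract follows the usual transmit beamforming pattern: substitute W = ww^H, drop the rank(W) = 1 constraint, and solve the resulting SDP. A generic single-user power-minimization sketch with a toy channel (the paper's secrecy and radar constraints are not modeled here):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 4                                    # transmit antennas
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # toy channel
gamma, sigma2 = 10.0, 1.0                # SINR target, noise power

# SDR: W stands in for w w^H; the rank-one constraint is dropped.
W = cp.Variable((n, n), hermitian=True)
cons = [W >> 0, cp.real(h.conj() @ W @ h) >= gamma * sigma2]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W))), cons)
prob.solve()

# With a single quality constraint the relaxation is tight; recover w
# from the dominant eigenvector of the optimal W.
vals, vecs = np.linalg.eigh(W.value)
w = np.sqrt(vals[-1]) * vecs[:, -1]
print("transmit power:", np.trace(W.value).real)
```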

21 pages, 783 KB  
Article
Robust Beamfocusing for Secure NFC with Imperfect CSI
by Weijian Chen, Zhiqiang Wei and Zai Yang
Sensors 2025, 25(4), 1240; https://doi.org/10.3390/s25041240 - 18 Feb 2025
Cited by 3 | Viewed by 1795
Abstract
In this paper, we consider the physical layer security (PLS) problem between two nodes, a transmitter (Alice) and a receiver (Bob), in the presence of an eavesdropper (Eve) in a near-field communication (NFC) system. Notably, massive multiple-input multiple-output (MIMO) arrays significantly increase the array aperture, thereby rendering eavesdroppers more inclined to lurk near the transmission end. This situation necessitates near-field channel models to describe the channel characteristics more accurately. We consider two schemes with imperfect channel state information (CSI). The first scheme involves a conventional multiple-input multiple-output multiple-antenna eavesdropper (MIMOME) setup, where Alice simultaneously transmits the information signal and artificial noise (AN). In the second scheme, Bob operates in full-duplex (FD) mode, with Alice transmitting the information signal while Bob emits AN. We then jointly design the beamforming and AN vectors to degrade the reception signal quality at Eve, based on the signal-to-interference-plus-noise ratio (SINR) of each node. To tackle the power minimization problem, we propose an iterative algorithm that includes an additional constraint to ensure adherence to specified quality-of-service (QoS) metrics. Additionally, we decompose the robust optimization problem of the two schemes into two sub-problems, one that can be solved using generalized Rayleigh quotient methods and another that can be addressed through semi-definite programming (SDP). Finally, our simulation results confirm the viability of the proposed approach and demonstrate the effectiveness of the protection zone for NFC systems operating with imperfect CSI.
(This article belongs to the Special Issue Secure Communication for Next-Generation Wireless Networks)
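The generalized Rayleigh quotient sub-problem mentioned above has a closed-form solution: the maximizer of (w^T A w)/(w^T B w) is the leading generalized eigenvector of the pencil (A, B). A minimal sketch with toy matrices:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)); A = A @ A.T                 # "signal"
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n) # "noise", PD

# max_w (w^T A w)/(w^T B w) is attained at the top generalized
# eigenvector of (A, B); eigh returns eigenvalues in ascending order.
vals, vecs = eigh(A, B)
w = vecs[:, -1]
print("optimal quotient:", (w @ A @ w) / (w @ B @ w), "=", vals[-1])
```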

20 pages, 827 KB  
Article
Compound Optimum Designs for Clinical Trials in Personalized Medicine
by Belmiro P. M. Duarte, Anthony C. Atkinson, David Pedrosa and Marlena van Munster
Mathematics 2024, 12(19), 3007; https://doi.org/10.3390/math12193007 - 26 Sep 2024
Cited by 1 | Viewed by 1260
Abstract
We consider optimal designs for clinical trials when the response variance depends on the treatment and covariates are included in the response model. These designs are generalizations of Neyman allocation and are commonly employed in personalized medicine, where external covariates linearly affect the response. Very often, such designs aim at maximizing the amount of information gathered but fail to satisfy ethical requirements. We analyze compound optimal designs that maximize a criterion weighting the amount of information against the reward of allocating the patients to the most effective/least risky treatment. We develop a general representation for static (a priori) allocation and propose a semidefinite programming (SDP) formulation to support its numerical computation. This setup is extended to the case where the variance and the parameters of the response of all treatments are unknown, and an adaptive sequential optimal design scheme is implemented and used for demonstration. Purely information-theoretic designs for the same allocation have been addressed elsewhere, and we use them to support the techniques applied to compound designs.
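As background for the allocation being generalized: classical Neyman allocation assigns patients in proportion to each arm's response standard deviation. A one-function sketch with illustrative numbers:

```python
import numpy as np

def neyman_allocation(sigmas, n_total):
    """Allocate n_total patients across treatment arms in proportion to
    their response standard deviations (classical Neyman allocation)."""
    sigmas = np.asarray(sigmas, dtype=float)
    weights = sigmas / sigmas.sum()
    return np.round(weights * n_total).astype(int), weights

counts, weights = neyman_allocation([2.0, 1.0], n_total=120)
print(counts, weights)   # [80 40]: the noisier arm gets more patients
```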

23 pages, 2176 KB  
Article
Robust Liu Estimator Used to Combat Some Challenges in Partially Linear Regression Model by Improving LTS Algorithm Using Semidefinite Programming
by Waleed B. Altukhaes, Mahdi Roozbeh and Nur A. Mohamed
Mathematics 2024, 12(17), 2787; https://doi.org/10.3390/math12172787 - 9 Sep 2024
Cited by 5 | Viewed by 1419
Abstract
Outliers are a common problem in applied statistics, as is multicollinearity. In this paper, robust Liu estimators are introduced into a partially linear model to combat multicollinearity and outliers when the error terms are not independent and some linear constraints are assumed to hold in the parameter space. The Liu estimator is used to address the multicollinearity, while robust methods are used to handle the outlier problem. In the literature on the Liu methodology, obtaining the best value of the biasing parameter plays an important role in model prediction and is still an unsolved problem. In this regard, some robust estimators of the biasing parameter are proposed based on the least trimmed squares (LTS) technique and its extensions, using a semidefinite programming approach. Given a set of observations of sample size n and an integer trimming parameter h ≤ n, the LTS estimator computes the hyperplane that minimizes the sum of the h smallest squared residuals. The LTS estimator is statistically more effective than the widely used least median of squares (LMS) estimator, while also being less complicated computationally. It is shown that the proposed robust extended Liu estimators perform better than classical estimators. Using Monte Carlo simulation schemes and a real data example, the performance of the robust Liu estimators is compared with that of classical ones in restricted partially linear models.
(This article belongs to the Special Issue Nonparametric Regression Models: Theory and Applications)
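The LTS criterion is compact to state in code: keep the h smallest squared residuals and refit. The naive concentration-step heuristic below is only a baseline sketch on toy data; the paper replaces this kind of search with a semidefinite programming approach:

```python
import numpy as np

def lts_fit(X, y, h, n_iter=50, seed=0):
    """Naive LTS via concentration steps: repeatedly refit OLS on the
    h observations with the smallest squared residuals (illustrative;
    not the paper's SDP-based method)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=h, replace=False)
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        r2 = (y - X @ beta) ** 2
        new_idx = np.argsort(r2)[:h]
        if set(new_idx) == set(idx):
            break
        idx = new_idx
    return beta, np.sort(r2)[:h].sum()   # coefficients, LTS objective

# Toy data with gross outliers that plain OLS would chase.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
y[:10] += 50.0                           # contaminate 10 points
print(lts_fit(X, y, h=75)[0])            # close to [1, 2]
```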

10 pages, 1531 KB  
Article
Quantifying Quantum Coherence Using Machine Learning Methods
by Lin Zhang, Liang Chen, Qiliang He and Yeqi Zhang
Appl. Sci. 2024, 14(16), 7312; https://doi.org/10.3390/app14167312 - 20 Aug 2024
Cited by 3 | Viewed by 3357
Abstract
Quantum coherence is a crucial resource in numerous quantum processing tasks. The robustness of coherence provides an operational measure of quantum coherence, which can be calculated for various states using semidefinite programming. However, this method depends on convex optimization and can be time-intensive, especially as the dimensionality of the space increases. In this study, we employ machine learning techniques to quantify quantum coherence, focusing on the robustness of coherence. By leveraging artificial neural networks, we developed and trained models for systems with different dimensionalities. Testing on data samples shows that our approach substantially reduces computation time while maintaining strong generalizability.
(This article belongs to the Topic Quantum Information and Quantum Computing, 2nd Volume)
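The SDP that the trained networks approximate can be stated compactly; one standard formulation of the robustness of coherence for a state ρ (as we recall it) is R(ρ) = min{tr(M) − 1 : M diagonal, M ⪰ ρ}. A cvxpy sketch restricted to real symmetric states for simplicity:

```python
import numpy as np
import cvxpy as cp

def robustness_of_coherence(rho):
    """R(rho) = min{ tr(M) - 1 : M diagonal, M >= rho }; real symmetric
    rho only, to keep the sketch simple."""
    d = rho.shape[0]
    m = cp.Variable(d, nonneg=True)         # diagonal entries of M
    prob = cp.Problem(cp.Minimize(cp.sum(m) - 1),
                      [cp.diag(m) - rho >> 0])
    prob.solve()
    return prob.value

# A maximally coherent qubit state gives R = 1; an incoherent
# (diagonal) state gives R = 0.
plus = np.full((2, 2), 0.5)                  # |+><+|
print(robustness_of_coherence(plus))                 # ~1.0
print(robustness_of_coherence(np.diag([0.7, 0.3])))  # ~0.0
```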

11 pages, 3277 KB  
Article
A Terminal Residual Vibration Suppression Method of a Robot Based on Joint Trajectory Optimization
by Liang Liang, Chengdong Wu and Shichang Liu
Machines 2024, 12(8), 537; https://doi.org/10.3390/machines12080537 - 6 Aug 2024
Cited by 2 | Viewed by 1875
Abstract
Vibration problems have become one of the most important factors affecting robot performance. To this end, a terminal residual vibration suppression method based on joint trajectory optimization is proposed to improve the accuracy and stability of robot motion. Firstly, accounting for the nonlinear friction caused by joint coupling and for the physical feasibility of the dynamic parameters, a semidefinite programming method is used to identify dynamic parameters with actual physical meaning, thereby obtaining an accurate dynamic model. Then, based on the results of a time-domain analysis of the residual vibration, a joint trajectory optimization model with the goal of minimizing the joint tracking error is established. The Chebyshev collocation method is used to discretize the optimization model, the dynamic model is used as the optimization constraint, and barycentric interpolation is used to obtain the optimized joint motion trajectory. Finally, industrial robot experiments show that the proposed vibration suppression method can reduce the maximum acceleration amplitude of the residual vibration by 62% and the vibration duration by 71%. Compared with the input shaping method, the proposed method reduces the terminal residual vibration more effectively and ensures consistency of running time and trajectory.
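The two discretization tools named above, Chebyshev collocation points and barycentric interpolation, are standard numerics; a minimal generic sketch of both (not the paper's trajectory model):

```python
import numpy as np

def cgl_points(N):
    """Chebyshev-Gauss-Lobatto nodes x_j = cos(j*pi/N) on [-1, 1]."""
    return np.cos(np.arange(N + 1) * np.pi / N)

def barycentric_interp(x_nodes, f_nodes, x):
    """Barycentric interpolation at a point x; for CGL nodes the
    weights are w_j = (-1)^j, halved at the two endpoints."""
    N = len(x_nodes) - 1
    w = (-1.0) ** np.arange(N + 1)
    w[0] /= 2.0; w[-1] /= 2.0
    diff = x - x_nodes
    if np.any(diff == 0):                  # x coincides with a node
        return f_nodes[np.argmin(np.abs(diff))]
    terms = w / diff
    return (terms @ f_nodes) / terms.sum()

xs = cgl_points(16)
fs = np.sin(np.pi * xs)                    # toy "trajectory" samples
print(barycentric_interp(xs, fs, 0.3), np.sin(np.pi * 0.3))
```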

21 pages, 596 KB  
Article
Enhanced Moving Source Localization with Time and Frequency Difference of Arrival: Motion-Assisted Method for Sub-Dimensional Sensor Networks
by Xu Yang
Appl. Sci. 2024, 14(9), 3909; https://doi.org/10.3390/app14093909 - 3 May 2024
Cited by 1 | Viewed by 2347
Abstract
Localizing a moving source by Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) commonly requires at least N+1 sensors in N-dimensional space, which yield at least N pairs of TDOAs and FDOAs and hence at least 2N equations for the 2N unknowns. However, if there are insufficient sensors, the localization problem becomes underdetermined, leading to non-unique solutions or inaccuracies in the minimum-norm solution. This paper proposes a localization method using TDOAs and FDOAs that incorporates the motion model. The motion between the source and the sensors increases the equivalent length of the baseline, thereby improving observability even when using the minimum number of sensors. The problem is formulated as a Maximum Likelihood Estimation (MLE) and solved through Gauss–Newton (GN) iteration. Since GN requires an initialization close to the true value, the MLE is transformed into a semidefinite programming problem using Semidefinite Relaxation (SDR); while SDR yields a suboptimal estimate, it is sufficient as an initialization to guarantee the convergence of the GN iteration. The proposed method is analytically shown to reach Cramér–Rao Lower Bound (CRLB) accuracy under mild noise conditions. Simulation results confirm that it achieves CRLB-level performance even when the number of sensors is lower than N+1, corroborating the theoretical analysis.
(This article belongs to the Special Issue Recent Progress in Radar Target Detection and Localization)
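The estimation pipeline above reduces to a plain Gauss-Newton loop once the SDR step has produced an initialization; a generic residual-based skeleton with a forward-difference Jacobian (toy range-style residuals, not the paper's TDOA/FDOA equations):

```python
import numpy as np

def gauss_newton(residual, theta0, n_iter=20, tol=1e-10, eps=1e-6):
    """Minimize ||residual(theta)||^2 by Gauss-Newton with a numerical
    Jacobian; theta0 would come from the SDR initialization."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = np.empty((len(r), len(theta)))
        for j in range(len(theta)):
            t = theta.copy(); t[j] += eps
            J[:, j] = (residual(t) - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        theta = theta + step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Toy range residuals with a known optimum at (1, 2).
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
truth = np.array([1.0, 2.0])
meas = np.linalg.norm(anchors - truth, axis=1)
res = lambda p: np.linalg.norm(anchors - p, axis=1) - meas
print(gauss_newton(res, theta0=[0.5, 0.5]))   # converges to ~[1, 2]
```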

17 pages, 4328 KB  
Article
An Underwater Source Localization Method Using Bearing Measurements
by Peijuan Li, Yiting Liu, Tingwu Yan, Shutao Yang and Rui Li
Sensors 2024, 24(5), 1627; https://doi.org/10.3390/s24051627 - 1 Mar 2024
Cited by 8 | Viewed by 1871
Abstract
Angle-of-arrival (AOA) measurements are often used in underwater acoustic localization. Unlike the traditional AOA model based on azimuth and elevation measurements, the AOA model studied in this paper uses bearing measurements, as is common in ultra-short baseline (USBL) systems. Traditional acoustic localization, however, needs additional range information; if range information is unavailable, a closed-form solution is difficult to obtain from bearing measurements alone. Thus, a closed-form localization solution using only bearing measurements is explored in this article. A pseudo-linear measurement model relating the source position to the bearing measurements is derived, and, considering the nonlinear relationship among the parameters, a weighted least-squares optimization problem based on multiple constraints is established. Unlike the traditional two-step least-squares method, a semidefinite programming (SDP) method is designed to obtain the initial solution, and a bias compensation method is then proposed to further reduce the localization error based on the SDP result. Numerical simulations show that the proposed method can achieve Cramér–Rao lower bound (CRLB) accuracy. A field test also proves that the proposed method can locate the source without range measurements and obtains the highest positioning accuracy among the compared methods.
(This article belongs to the Section Navigation and Positioning)
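The pseudo-linear model mentioned above has a classic 2-D form: a bearing θ measured at sensor s constrains the source p = (x, y) through sin θ (x − s_x) − cos θ (y − s_y) = 0, which is linear in p. Stacking one such row per sensor gives an ordinary least-squares estimate; the paper's SDP initialization and bias compensation are not reproduced in this sketch:

```python
import numpy as np

def bearing_only_ls(sensors, bearings):
    """Pseudo-linear least squares: each bearing theta_i at sensor s_i
    gives sin(t)*x - cos(t)*y = sin(t)*s_x - cos(t)*s_y."""
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = np.sum(A * sensors, axis=1)
    return np.linalg.lstsq(A, b, rcond=None)[0]

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
source = np.array([6.0, 4.0])
bearings = np.arctan2(source[1] - sensors[:, 1],
                      source[0] - sensors[:, 0])
print(bearing_only_ls(sensors, bearings))   # ~[6, 4] in the noise-free case
```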

19 pages, 661 KB  
Article
Multi-Objective Battery Coordination in Distribution Networks to Simultaneously Minimize CO2 Emissions and Energy Losses
by Oscar Danilo Montoya, Luis Fernando Grisales-Noreña and Walter Gil-González
Sustainability 2024, 16(5), 2019; https://doi.org/10.3390/su16052019 - 29 Feb 2024
Cited by 9 | Viewed by 1554
Abstract
The techno-environmental analysis of distributed energy resources in electrical distribution networks is a complex optimization task due to the non-convexities of its nonlinear programming formulation. This research employs convex optimization to address this issue while minimizing the expected carbon dioxide emissions and the daily energy losses of a distribution grid via the optimal dispatch of battery energy storage units (BESUs) and renewable energy units (REUs). The exact non-convex model is approximated via semi-definite programming in the complex variable domain. The optimal Pareto front is constructed using a weighting-based optimization approach. Numerical results on an IEEE 69-bus grid confirm the effectiveness of our proposal when considering unitary and variable power factor operation for the BESUs and the REUs. All numerical simulations were carried out using MATLAB (version 2022b), the disciplined convex programming tool CVX, and the semidefinite programming solvers SeDuMi and SDPT3.
(This article belongs to the Special Issue Smart Grid Optimization and Sustainable Power System Management)
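The weighting-based Pareto construction is the standard scalarization recipe: sweep w over [0, 1] and solve one convex problem per weight, minimizing w·f1 + (1 − w)·f2. A cvxpy sketch with toy quadratic stand-ins for the emission and loss objectives:

```python
import numpy as np
import cvxpy as cp

x = cp.Variable(3, nonneg=True)                  # toy dispatch variables
f_co2 = cp.sum_squares(x - np.array([1.0, 2.0, 0.5]))   # stand-in "emissions"
f_loss = cp.sum_squares(x - np.array([2.0, 0.5, 1.5]))  # stand-in "losses"
budget = [cp.sum(x) <= 3.0]

# One convex solve per weight traces out the Pareto front.
pareto = []
for w in np.linspace(0.0, 1.0, 11):
    cp.Problem(cp.Minimize(w * f_co2 + (1 - w) * f_loss), budget).solve()
    pareto.append((f_co2.value, f_loss.value))
for point in pareto[::5]:
    print(point)
```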

21 pages, 420 KB  
Article
Jointly Active/Passive Beamforming Optimization for Intelligent-Reflecting Surface-Assisted Cognitive-IoT Networks
by Yanping Zhou, Fang Deng and Shidang Li
Electronics 2024, 13(2), 299; https://doi.org/10.3390/electronics13020299 - 9 Jan 2024
Cited by 7 | Viewed by 1825
Abstract
To overcome challenges such as limited energy availability for terminal devices, constrained network coverage, and suboptimal spectrum resource utilization, with the overarching objective of establishing a sustainable and efficient interconnection infrastructure, we employ intelligent reflecting surface (IRS) technology to architect an energy-efficient, IRS-assisted cognitive secure communication network. To further optimize the overall energy harvesting of this network, we present a cognitive secure resource allocation scheme aiming to maximize the system’s total collected energy. This scheme carefully considers various constraints, including transmission power constraints for the cognitive base station, power constraints for the jammer devices, interference limitations for all primary users, minimum secrecy rate constraints for all cognitive Internet of Things (IoT) devices, and phase-shift constraints for the IRS. We establish a comprehensive hybrid cognitive secure resource allocation model encompassing joint cognitive transmission beam design, jammer transmission beam design, and phase-shift design. Given the non-convex nature of the formulated problem and the intricate coupling among variables, we devise an effective block coordinate descent (BCD) iterative algorithm. The joint cognitive/jammer base station transmission beam design and the phase-shift design are realized using successive convex approximation and semi-definite programming. Simulation results underscore the superior performance of the proposed scheme compared to existing resource allocation approaches, particularly in terms of total harvested energy and other critical metrics.
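The BCD algorithm alternates between the coupled variable blocks until the objective stalls; the skeleton below shows only that control flow, with placeholder per-block updates standing in for the paper's SCA/SDP sub-problems:

```python
import numpy as np

def block_coordinate_descent(f, update_w, update_phi, w0, phi0,
                             max_iter=100, tol=1e-6):
    """Alternate: update beams w with phases phi fixed, then phi with
    w fixed, until the objective improvement falls below tol."""
    w, phi = w0, phi0
    prev = f(w, phi)
    for _ in range(max_iter):
        w = update_w(w, phi)       # e.g. the beam-design sub-problem
        phi = update_phi(w, phi)   # e.g. the IRS phase-shift sub-problem
        cur = f(w, phi)
        if abs(prev - cur) < tol:
            break
        prev = cur
    return w, phi, cur

# Toy smooth objective to exercise the loop (placeholders, not the
# paper's model): minimize ||w - phi||^2 + ||phi - 1||^2 with exact
# per-block minimizers.
f = lambda w, phi: np.sum((w - phi) ** 2) + np.sum((phi - 1) ** 2)
upd_w = lambda w, phi: phi.copy()
upd_phi = lambda w, phi: (w + 1.0) / 2.0
print(block_coordinate_descent(f, upd_w, upd_phi,
                               np.zeros(3), np.full(3, 0.5))[2])
```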