
Safety Control for Cyber–Physical Systems Under False Data Injection Attacks

1 College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
2 Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(6), 1103; https://doi.org/10.3390/electronics14061103
Submission received: 6 February 2025 / Revised: 2 March 2025 / Accepted: 4 March 2025 / Published: 11 March 2025
(This article belongs to the Section Systems & Control Engineering)

Abstract

Cyber–physical systems (CPSs) are increasingly susceptible to cyber threats, especially false data injection (FDI) attacks, which can compromise their stability and safety. Ensuring system safety while mitigating such attacks is a critical challenge. In this paper, we address the safety control problem for CPSs by designing a control strategy that accounts for both FDI attacks and physical safety constraints. A baseline controller is first designed to guarantee system stability in the absence of attacks while the safety constraints are satisfied. To address FDI attacks, we propose a neural network-based estimator that detects and estimates the magnitude of such attacks. The attack estimate is then incorporated into the controller to dynamically adjust control actions, ensuring that the system remains stable and resilient to malicious interference. Furthermore, we introduce a safety control algorithm based on control barrier functions to enforce safety constraints, into which the attack estimate is integrated to handle unknown attacks. Finally, the effectiveness of the proposed scheme is validated by simulation results, which demonstrate that the combined control strategy outperforms traditional methods in both attack mitigation and safety enforcement.

1. Introduction

With the deep integration of networks into advanced science and technology, network-enabled physical devices have become more convenient to use and functionally enhanced thanks to seamless network communication [1,2,3]. However, because of the vulnerability of cyberspace, these systems are exposed to external network attacks. At the same time, many cyber–physical devices are compromised through the openness of their connection channels with cyberspace, with severe consequences [4,5,6,7]. To counter such malicious external attacks, researchers have conducted extensive studies to protect cyber–physical systems (CPSs) [8,9,10,11,12,13]. However, although significant attention has been given to the security of CPSs, research that simultaneously addresses both physical safety and cyber security remains limited. For instance, even when system stability is maintained under network attacks, the system may fail to operate within the desired range. Therefore, this paper proposes a control strategy that simultaneously ensures both the security and safety of the system.
As a research methodology that has garnered significant attention since its inception, the neural network (NN) has demonstrated its critical importance in various fields, thanks to its powerful estimation and prediction capabilities [14]. In the context of uncertain systems, its ability to estimate variables within a system has been extensively studied, yielding numerous achievements, particularly in the analysis of nonlinear systems [15,16,17]. This method can also be employed to address the security challenges of CPSs. False data injection (FDI) attacks are employed by external adversaries to maliciously tamper with systems. These attacks are often unknown to defenders, posing a significant threat to CPSs [18]. Fortunately, such attacks can be effectively addressed by appropriately designed neural networks [19,20,21,22,23,24]. In [19], using physical knowledge and advanced machine learning techniques, the authors extract key features from noisy data to detect attack signals. The authors in [20] developed an intelligent NN-based estimator to estimate attacks, while adopting a variable structure control approach to compensate for the attack and recover system performance. In [23], based on a practical application of vehicles equipped with cooperative adaptive cruise control, the authors proposed a novel controller and observer to estimate FDI attacks, addressing attacks and noise through the NN method. Furthermore, the simultaneous exploration of security and safety control problems in [24] highlights a significant yet underexplored topic, which also serves as one of the motivations for this paper. The authors chose connected and automated vehicles as the research object and proposed a safe and secure controller that fully utilized deep NN prediction and model-based estimations. However, most existing defense strategies that integrate neural networks to counter network attacks are based on continuous systems.
In contrast, research on discrete systems remains relatively limited [25,26,27]. To address this gap, this paper investigates discrete systems, employing neural networks to estimate cyberattacks and manage unknown variables.
When dealing with systems that require strict operational constraints, ensuring security at the network level alone is often insufficient. For instance, in unmanned aerial vehicle (UAV) control, the safety of the UAV is crucial, as unreasonable system states can lead to damage or even crashes [28]. Similarly, in autonomous vehicle control, exceeding the designated driving range can result in collisions between vehicles [24,29]. Therefore, ensuring the physical safety of systems is critical, and the control barrier function (CBF) serves as an effective mathematical tool to constrain system states within a predefined range. Integrating a Lyapunov function ensures that the control inputs achieve both stability and safety while maintaining the feasibility of the control problem [30]. Numerous practical applications have demonstrated that this function is an effective method for addressing safety issues in CPSs [31,32,33,34]. However, most studies focus on deterministic systems and overlook uncertainties such as disturbances and attacks. To handle time-varying disturbances, the authors of [35] applied the first-order time derivative of the CBF and used a high-gain input observer to estimate the system dynamics. In [36], the authors presented two schemes to synthesize the observer and controller through input-to-state stable observers and bounded error observers, respectively. Based on the observation error, extra constraints were imposed on the CBF, yielding a novel CBF that can deal with uncertain disturbances. However, when the system is subjected to false data injection, maintaining safety becomes challenging. This is not only because network attacks are more intelligent than disturbances but also because systems that lose stability under network attacks find it difficult to address safety concerns at all.
We propose a control scheme to address the cybersecurity issues in CPSs that current work handles insufficiently. The approach leverages the estimation capabilities of neural networks to construct state estimators and estimate uncertain attack signals within the system. Additionally, it integrates CBFs to avoid safety violations in practical applications. Table 1 describes the distinctions between this paper and some relevant works. The key contributions are summarized as follows.
(1)
An NN estimator is devised to approximate the system state when the system suffers from an unknown FDI attack. This estimator facilitates the construction of a controller that can compensate for the FDI attack while addressing the uncertain variable in the safety constraints.
(2)
By integrating NN estimation and CBF, a control strategy is proposed to simultaneously alleviate the FDI attack and ensure the safety of CPSs.
The remainder of this paper is organized as follows. Section 2 presents preliminaries and the problem formulation. Section 3 presents the main results. Section 4 applies a physical model to demonstrate the designed strategy. Section 5 concludes the paper.
Notations: The symbols used in this paper follow standard conventions. $\mathbb{R}^n$ denotes the $n$-dimensional real Euclidean space, and $\mathbb{R}^{m \times n}$ the set of real matrices with dimensions $m \times n$. The superscript $\top$ denotes the transpose. The 2-norm of a vector $x$ is defined as $\|x\| = \sqrt{x^\top x}$. $\sigma(\cdot)$ denotes the eigenvalue of a matrix. $\mathrm{tr}\{\cdot\}$ represents the trace of a matrix.

2. Problem Formulation and Preliminaries

This section mainly describes the problem of interest and preliminary knowledge, including the system model and the attack signal. The FDI attack addressed in this paper is illustrated in Figure 1, where the attacker targets the channel between the actuator and the controller. The proposed response strategy is also depicted in Figure 1. When the FDI attack affects the system, an NN estimator is designed to estimate the attack and transmit the estimated data to the controller. The controller then uses the estimated attack signal to compensate for the actual attack and feeds the signal back to the estimator. The estimator updates the estimated data using the gradient-descent method, adjusting the update law to improve the estimation accuracy. In this way, the impact of the FDI attack is mitigated.
Consider the following discrete model:
$$x(k+1) = A x(k) + B u(k), \tag{1}$$
where $x(k) \in \mathbb{R}^n$ is the system state, $u(k) \in \mathbb{R}^m$ is the input vector, and $A$ and $B$ are system matrices of proper dimensions. For this system, we adopt the following controller:
$$u(k) = -K x(k), \tag{2}$$
where the feedback gain $K$ is obtained from the Riccati equation via the LQR method [40].
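The LQR gain can be computed by iterating the discrete-time Riccati recursion to a fixed point. A minimal numpy sketch follows; the system matrices here are an illustrative toy example, not the paper's model:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Compute the discrete-time LQR gain by iterating the Riccati recursion
    P <- A'PA - A'PB (R + B'PB)^{-1} B'PA + Q to (approximate) convergence."""
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        P = A.T @ P @ A - BtPA.T @ np.linalg.solve(R + B.T @ P @ B, BtPA) + Q
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Toy unstable second-order system (hypothetical, for illustration only)
A = np.array([[1.1, 0.5],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)
K, P = dlqr(A, B, Q, R)
rho = np.abs(np.linalg.eigvals(A - B @ K)).max()
print(rho < 1.0)  # closed loop A - BK is Schur stable
```

Under the controller $u(k) = -Kx(k)$, all eigenvalues of $A - BK$ lie strictly inside the unit circle, which is the stability property the baseline controller relies on.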
Then, consider an FDI attack injected into the channel between the controller and the actuator; the compromised system becomes
$$x(k+1) = A x(k) + B u(k) + B u_a(k), \tag{3}$$
where $u_a(k)$ is an unknown attack signal. In such a situation, the expected control state may be tampered with, potentially compromising system stability. Facing the threat of unknown malicious attacks, an estimation-and-compensation approach combined with the NN method is applied to handle this issue. Specifically, the NN method is employed to estimate the unknown attack and to construct the estimator. A new controller is subsequently designed to address FDI attacks.
To effectively estimate the unknown attack, the NN method is adopted here. The NN is a highly powerful tool, renowned for its exceptional estimation capabilities, which make it suitable for approximating various unknown functions [41], including those related to network attacks. To elaborate, the FDI attack considered in this paper is modeled as
$$u_a(k) = W^\top f(V^\top x(k)) + \rho(k), \tag{4}$$
where $W$ and $V$ are the expected (ideal) weights, $f(\cdot)$ represents the activation function, and $\rho(\cdot)$ is the approximation error. The assumption below is made to facilitate subsequent derivations.
Assumption 1 
([42]). The expected weight, activation function, and approximation error are bounded, namely, $\|W\| \le W_M$, $\|f(V^\top x(k))\| \le f_M$, and $\|\rho(x(k))\| \le \rho_M$, where $W_M$, $f_M$, and $\rho_M$ are positive constants.
Remark 1. 
Regarding Assumption 1, which asserts that the expected weight, activation function, and approximation error are bounded, this is a standard assumption in the literature, as seen in [43,44]. A typical FDI attack is bounded in energy because the attacker is constrained by the resources and capabilities available in reality. The activation function used in this paper, the sigmoid, is bounded by construction. Therefore, to ensure the boundedness of the FDI attack, the assumption on the expected weight is reasonable, and so is the assumption on the approximation error.
Safety is another critical element considered in this paper. The safety set $\mathcal{D} \subseteq X \subseteq \mathbb{R}^n$ is defined as
$$\mathcal{D} := \{x(k) \in X \mid h(x(k)) \ge 0\}, \tag{5}$$
$$\mathrm{Int}(\mathcal{D}) := \{x(k) \in X \mid h(x(k)) > 0\}, \tag{6}$$
$$\partial\mathcal{D} := \{x(k) \in X \mid h(x(k)) = 0\}, \tag{7}$$
where $h : \mathbb{R}^n \to \mathbb{R}$ is a smooth function. The safety set $\mathcal{D}$ is forward invariant if $x(k) \in \mathcal{D}$ for all $k \in \mathbb{N}$ whenever $x(0) \in \mathcal{D}$ [45]. The CBF can be employed to guarantee the safety of CPSs with respect to the set $\mathcal{D}$. The CBF used in this paper is defined below.
Definition 1. 
(Discrete-Time Exponential Control Barrier Function (DECBF) [46]). For the aforementioned set $\mathcal{D}$ and smooth function $h(x(k))$, $h(x(k))$ is a DECBF if it satisfies the following conditions:
1. $h(x(0)) \ge 0$,
2. $\Delta h(x(k)) \ge -\gamma h(x(k)), \; \forall k \in \mathbb{Z}_+, \; 0 < \gamma \le 1$,
where $\Delta h(x(k)) = h(x(k+1)) - h(x(k))$.
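The two DECBF conditions can be checked numerically along a trajectory. A small sketch with a hypothetical circular safe set (all values illustrative):

```python
import numpy as np

def decbf_holds(h, x_now, x_next, gamma=0.9):
    """Check the DECBF decrease condition: Δh(x(k)) >= -γ h(x(k))."""
    return h(x_next) - h(x_now) >= -gamma * h(x_now)

# Hypothetical safe set: disc of radius 3 centered at the origin
h = lambda x: 9.0 - float(x @ x)

x0 = np.array([1.0, 1.0])       # h(x0) = 7 >= 0: starts inside the safe set
x1 = np.array([1.2, 1.2])       # h(x1) = 6.12
print(decbf_holds(h, x0, x1))   # True: Δh = -0.88 >= -0.9 * 7
```

Satisfying the condition at every step implies $h(x(k+1)) \ge (1-\gamma)h(x(k))$, so $h$ decays no faster than geometrically and never crosses zero from a safe start.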
Grounded in the foundational concepts of the target system and the selected methods, the subsequent sections will specifically address the proposed issues: mitigating the FDI attack to ensure system security and resolving the safety challenges posed by the system’s unknown variables.

3. Main Results

This section describes the main contributions. An NN estimator is utilized to approximate the system state under the FDI attack and to facilitate the estimation of the attack occurring in the system (3), which is a key step in mitigating the attack. Then, an adaptive controller is proposed based on the LQR method and the NN to address the attack, and the relevant sufficient conditions are deduced. Additionally, considering the problem of system safety, an approach to handle the unknown variable appearing in the DECBF is provided. Finally, a control scheme that integrates all the above components is proposed.

3.1. Adaptive Controller Design

For the system (3), this paper aims to mitigate the impact of malicious attacks so that the corrupted system state approaches the normal system state (1). Under such a scheme, an estimator is constructed in the following form:
$$\hat{x}(k+1) = A \hat{x}(k) + B u(k) + B \hat{u}_a(k), \tag{8}$$
where $\hat{x}(k) \in \mathbb{R}^n$ is the estimated state and $\hat{u}_a(k)$ is the estimated cyber attack. By applying the NN method, $\hat{u}_a(k)$ is designed as
$$\hat{u}_a(k) = \hat{W}^\top(k) f(V^\top \hat{x}(k)) \tag{9}$$
with the known estimated state $\hat{x}(k)$, where $\hat{W}(k)$ is the actual weight. If the actual weight approaches the ideal weight $W$, the estimated attack $\hat{u}_a(k)$ can approximately replace the actual attack $u_a(k)$. For convenience, $V^\top \hat{x}(k)$ is denoted by $\hat{\bar{x}}(k)$.
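The estimate in (9) is a single-hidden-layer network. A dimensional sketch with the sigmoid activation follows; the weight values and sizes below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attack_estimate(W_hat, V, x_hat):
    """u_hat_a(k) = W_hat^T f(V^T x_hat(k)): one hidden layer, sigmoid activation."""
    f = sigmoid(V.T @ x_hat)       # hidden-layer output, each entry in (0, 1)
    return W_hat.T @ f

rng = np.random.default_rng(0)
n, m, h = 5, 2, 10                 # state dim, input dim, hidden neurons (as in Sec. 4)
V = rng.uniform(0, 1, (n, h))      # inner weights, held fixed
W_hat = rng.uniform(0, 1, (h, m))  # outer weights, adapted online
x_hat = np.zeros(n)
u_hat_a = attack_estimate(W_hat, V, x_hat)
print(u_hat_a.shape)               # (2,): one estimate per input channel
```

Because the sigmoid output is bounded, the estimate inherits the boundedness required by Assumption 1 as long as the weights stay bounded.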
When $\hat{u}_a(k)$ is obtained, a new controller can be developed by adding the estimated attack to the controller (2) to offset the attack occurring in the system; namely, the adaptive controller is constructed as
$$\bar{u}(k) = -K x(k) - \hat{u}_a(k). \tag{10}$$
We can then obtain
$$x(k+1) = (A - BK) x(k) - B \hat{u}_a(k) + B u_a(k), \tag{11}$$
and the estimator becomes
$$\hat{x}(k+1) = (A - BK) \hat{x}(k). \tag{12}$$
Remark 2. 
The developed estimator differs from a typical observer, as its components do not include the system output vector. When the adaptive controller substitutes for the original one, the estimator only consists of the estimated state and system matrices. Assume that the initial estimated state is set equal to the actual initial state. In that case, the estimated system state matches the actual system state, unaffected by the attack signal. Based on this property, the estimator can serve as an ideal state reference, enabling weight updates that are crucial for improving the accuracy of the attack estimation.
Introducing the weight error $\tilde{W}(k) = W - \hat{W}(k)$ results in
$$x(k+1) = (A - BK) x(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \big( W^\top ( f(V^\top x(k)) - f(\hat{\bar{x}}(k)) ) + \rho(k) \big). \tag{13}$$
Combining Equations (12) and (13), the state error $e(k+1) = x(k+1) - \hat{x}(k+1)$ is obtained:
$$e(k+1) = (A - BK) e(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \big( W^\top ( f(V^\top x(k)) - f(\hat{\bar{x}}(k)) ) + \rho(k) \big). \tag{14}$$
Ensuring estimation accuracy in the NN method can involve various weight update approaches. This paper employs gradient descent for the weight update, with the update law defined as
$$\hat{W}(k+1) = \hat{W}(k) - \eta \frac{\partial J(k)}{\partial \hat{W}(k)}, \tag{15}$$
where $\eta \in (0,1)$ is the learning rate and $J(k) = \frac{1}{2} e^\top(k+1) e(k+1)$; namely, the actual weight is updated according to the state error. Substituting the specific form of $J(k)$ into (15) yields
$$\hat{W}(k+1) = \hat{W}(k) + \eta f(\hat{\bar{x}}(k)) \big( \mathcal{A} e(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \psi(k) \big)^\top B, \tag{16}$$
where $\mathcal{A} = A - BK$ and $\psi(k) = W^\top ( f(V^\top x(k)) - f(\hat{\bar{x}}(k)) ) + \rho(k)$. Noting that $f(\cdot)$ and $\rho(\cdot)$ are bounded, $\psi(\cdot)$ is a bounded function, namely, $\|\psi(k)\| \le \psi_M$. The weight error dynamics can be further derived:
$$\tilde{W}(k+1) = \tilde{W}(k) - \eta f(\hat{\bar{x}}(k)) \big( \mathcal{A} e(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \psi(k) \big)^\top B. \tag{17}$$
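One step of the update law (16), driven by the measured state error, can be sketched as follows. Dimensions and values are illustrative, and `e_next` stands in for the error $e(k+1)$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_weights(W_hat, V, x_hat, e_next, B, eta=0.75):
    """Gradient-descent step on J(k) = 1/2 e(k+1)^T e(k+1):
    W_hat(k+1) = W_hat(k) + eta * f(x_bar_hat) * e(k+1)^T B."""
    f = sigmoid(V.T @ x_hat)[:, None]        # (h, 1) hidden-layer output
    return W_hat + eta * f @ (e_next[None, :] @ B)

rng = np.random.default_rng(1)
n, m, h = 5, 2, 10
B = rng.standard_normal((n, m))
V = rng.uniform(0, 1, (n, h))
W_hat = rng.uniform(0, 1, (h, m))
x_hat = rng.standard_normal(n)
e_next = rng.standard_normal(n)              # state-estimation error e(k+1)
W_new = update_weights(W_hat, V, x_hat, e_next, B)
print(W_new.shape)                           # (10, 2): same shape as W_hat
```

When the state error vanishes, the update leaves the weights unchanged, which is consistent with the error-driven training signal used in the paper.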
Remark 3. 
The main concept of the designed scheme is the application of the NN method to construct a novel controller that can compensate for the injected false data to maintain the security of the system. From (11), when the adaptive controller is introduced, the effect of attacks is transformed into the smaller bounded error between the actual attack and estimated attack, namely, the handling of the attack becomes the handling of the error that can be diminished by improving the accuracy of approximation. The adopted update law can solve the issue of weight update and ensure system stability, which will be demonstrated in the subsequent part.

3.2. System Stability Analysis

For the system (13), the adaptive controller counteracts the actual attack signal through the approximation of the NN method. Next, sufficient conditions for system stability are deduced.
Before starting the stability proof, the following inequalities, which hold for any $\alpha > 0$, are useful:
$$x^\top y \le \frac{\alpha}{2} \|x\|^2 + \frac{1}{2\alpha} \|y\|^2,$$
$$\sigma_{\min}(\Omega) \|X\|^2 < X^\top \Omega X < \sigma_{\max}(\Omega) \|X\|^2,$$
$$|\mathrm{tr}\{C^\top D\}| \le \|C\| \|D\|.$$
Theorem 1. 
Consider the system (13) with the unknown attack, which is handled by the state estimator and the NN method, and let the weight update law for the estimated attack signal (9) be given by (16). Then, the adaptive controller (10) guarantees that the discrete system is uniformly ultimately bounded.
Proof. 
This proof process begins with
$$V(k) = \kappa V_1 + \lambda V_2 + \mu V_3, \tag{18}$$
where $V_1 = x^\top(k) P_1 x(k)$, $V_2 = e^\top(k) P_2 e(k)$, and $V_3 = \mathrm{tr}\{\tilde{W}^\top(k) \tilde{W}(k)\}$. The matrices $P_1$ and $P_2$ are symmetric positive definite, and the constant parameters $\kappa$, $\lambda$, and $\mu$ will be defined later.
For the above equation, the difference of the first term is
$$\Delta V_1 = x^\top(k+1) P_1 x(k+1) - x^\top(k) P_1 x(k) = \big( \mathcal{A} x(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \psi(k) \big)^\top P_1 \big( \mathcal{A} x(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \psi(k) \big) - x^\top(k) P_1 x(k). \tag{19}$$
According to the Cauchy–Schwarz inequality and Young's inequality, the above equation can be rearranged into the inequality
$$\Delta V_1 \le x^\top(k) \big( 2 \mathcal{A}^\top P_1 \mathcal{A} - P_1 \big) x(k) + 4 \big( B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) \big)^\top P_1 \big( B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) \big) + 4 \big( B \psi(k) \big)^\top P_1 \big( B \psi(k) \big) \le -\sigma_{\min}(Q_1) \|x(k)\|^2 + 4 \|B^\top B\| \|P_1\| \, \mathrm{tr}\{ \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)) \tilde{W}(k) \} + 4 \|B^\top B\| \|P_1\| \psi_M^2, \tag{20}$$
where $Q_1 = P_1 - 2 \mathcal{A}^\top P_1 \mathcal{A} > 0$; namely, the matrix $P_1$ must satisfy $2 \mathcal{A}^\top P_1 \mathcal{A} - P_1 < 0$ so that $Q_1$ is positive definite.
Substituting the state error (14) into $V_2$ and applying a similar derivation, the inequality for $\Delta V_2$ can be deduced:
$$\Delta V_2 \le -\sigma_{\min}(Q_2) \|e(k)\|^2 + 4 \|B^\top B\| \|P_2\| \, \mathrm{tr}\{ \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)) \tilde{W}(k) \} + 4 \|B^\top B\| \|P_2\| \psi_M^2, \tag{21}$$
where $Q_2 = P_2 - 2 \mathcal{A}^\top P_2 \mathcal{A} > 0$, and the condition that $P_2$ needs to satisfy is similar to that on $P_1$.
Then, consider the difference of $V_3$:
$$\begin{aligned} \Delta V_3 &= \mathrm{tr}\{ \tilde{W}^\top(k+1) \tilde{W}(k+1) - \tilde{W}^\top(k) \tilde{W}(k) \} \\ &= \mathrm{tr}\Big\{ -2\eta \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) \big( \mathcal{A} e(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) + B \psi(k) \big)^\top B \\ &\quad + \eta^2 B^\top \big( \mathcal{A} e(k) + B \psi(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) \big) f^\top(\hat{\bar{x}}(k)) f(\hat{\bar{x}}(k)) \big( \mathcal{A} e(k) + B \psi(k) + B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) \big)^\top B \Big\} \\ &\le \mathrm{tr}\Big\{ \eta \Big( \tfrac{1}{\omega_1} \Gamma_1^\top \Gamma_1 + \omega_1 \Gamma_2^\top \Gamma_2 - 2 \Gamma_1^\top \Gamma_1 B^\top B + \tfrac{1}{\omega_2} \Gamma_1^\top \Gamma_1 + \omega_2 \Gamma_3^\top \Gamma_3 \Big) \\ &\quad + \eta^2 \Big( \Gamma_4^\top \Gamma_4 + \tfrac{1}{\omega_3} \Gamma_5^\top \Gamma_5 + \omega_3 \Gamma_4^\top \Gamma_4 + \tfrac{1}{\omega_4} \Gamma_4^\top \Gamma_4 + \omega_4 \Gamma_6^\top \Gamma_6 + \Gamma_5^\top \Gamma_5 + \tfrac{1}{\omega_5} \Gamma_5^\top \Gamma_5 + \omega_5 \Gamma_6^\top \Gamma_6 + \Gamma_6^\top \Gamma_6 \Big) \Big\}, \end{aligned} \tag{22}$$
where the constant parameters $\omega_1, \omega_2, \omega_3, \omega_4, \omega_5$ satisfy $\omega_i > 0$, and the matrices $\Gamma_1, \ldots, \Gamma_6$ are defined as
$$\Gamma_1 = \tilde{W}^\top(k) f(\hat{\bar{x}}(k)), \quad \Gamma_2 = B^\top \mathcal{A} e(k), \quad \Gamma_3 = B^\top B \psi(k), \quad \Gamma_4 = B^\top \mathcal{A} e(k) f^\top(\hat{\bar{x}}(k)), \quad \Gamma_5 = B^\top B \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)), \quad \Gamma_6 = B^\top B \psi(k) f^\top(\hat{\bar{x}}(k)).$$
Applying the Cauchy–Schwarz inequality, the inequality (22) can be changed to
$$\Delta V_3 \le -\beta_1 \, \mathrm{tr}\{ \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)) \tilde{W}(k) \} + \beta_2 \|B^\top \mathcal{A}\|^2 \|e(k)\|^2 + \beta_3 \|B^\top B\|^2 \|\psi(k)\|^2, \tag{23}$$
where $\beta_1$, $\beta_2$, and $\beta_3$ are defined as
$$\begin{aligned} \beta_1 &= 2\eta \sigma_{\min}(B^\top B) - \tfrac{1}{\omega_1}\eta - \tfrac{1}{\omega_2}\eta - \tfrac{1}{\omega_3} \|B^\top B\|^2 f_M^2 \eta^2 - \|B^\top B\|^2 f_M^2 \eta^2 - \tfrac{1}{\omega_5} \|B^\top B\|^2 f_M^2 \eta^2 > 0, \\ \beta_2 &= \omega_1 \eta + f_M^2 \eta^2 + \omega_3 f_M^2 \eta^2 + \tfrac{1}{\omega_4} f_M^2 \eta^2, \\ \beta_3 &= \omega_2 \eta + \omega_4 f_M^2 \eta^2 + \omega_5 f_M^2 \eta^2 + f_M^2 \eta^2. \end{aligned}$$
Combining $\Delta V_1$, $\Delta V_2$, and $\Delta V_3$, the difference of (18) is deduced:
$$\begin{aligned} \Delta V(k) \le {} & -\kappa \sigma_{\min}(Q_1) \|x(k)\|^2 + \big( 4\kappa \|B^\top B\| \|P_1\| + 4\lambda \|B^\top B\| \|P_2\| - \mu \beta_1 \big) \mathrm{tr}\{ \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)) \tilde{W}(k) \} \\ & + \big( -\lambda \sigma_{\min}(Q_2) + \mu \beta_2 \|B^\top \mathcal{A}\|^2 \big) \|e(k)\|^2 + \big( 4\kappa \|B^\top B\| \|P_1\| \psi_M^2 + 4\lambda \|B^\top B\| \|P_2\| \psi_M^2 + \mu \beta_3 \|B^\top B\|^2 \|\psi(k)\|^2 \big). \end{aligned} \tag{24}$$
Then, the aforementioned constants $\kappa$, $\lambda$, and $\mu$ are chosen as
$$\kappa = 1, \quad \lambda = \frac{9 \beta_2 \|B^\top B\| \|P_1\| \|B^\top \mathcal{A}\|^2}{\beta_1 \sigma_{\min}(Q_2) - 8 \beta_2 \|B^\top \mathcal{A}\|^2 \|B^\top B\|^2 \|P_2\|}, \quad \mu = \frac{5 \sigma_{\min}(Q_2) \|B^\top B\| \|P_1\|}{\beta_1 \sigma_{\min}(Q_2) - 8 \beta_2 \|B^\top \mathcal{A}\|^2 \|B^\top B\|^2 \|P_2\|}.$$
In addition, $\beta_1$ and $\beta_2$ need to meet the following condition so that $\lambda > 0$ and $\mu > 0$ are satisfied:
$$\beta_1 \sigma_{\min}(Q_2) > 8 \beta_2 \|B^\top \mathcal{A}\|^2 \|B^\top B\|^2 \|P_2\|. \tag{25}$$
Since $\beta_1$ and $\beta_2$ consist of predetermined constants, the required relationship between $\beta_1$ and $\beta_2$ can be ensured by selecting appropriate parameters.
Then, $\Delta V(k)$ is further deduced:
$$\Delta V(k) \le -\sigma_{\min}(Q_1) \|x(k)\|^2 - \Sigma_1 \, \mathrm{tr}\{ \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)) \tilde{W}(k) \} - \Sigma_2 \|e(k)\|^2 + \Sigma_3, \tag{26}$$
where
$$\Sigma_1 = \beta_1 \sigma_{\min}(Q_2) - 4 \beta_2 \|B^\top \mathcal{A}\|^2 \|B^\top B\|^2 \|P_2\|, \quad \Sigma_2 = 4 \|B^\top B\|^2 \|P_2\| \|B^\top \mathcal{A}\|^2, \quad \Sigma_3 = 4\kappa \|B^\top B\| \|P_1\| \psi_M^2 + 4\lambda \|B^\top B\| \|P_2\| \psi_M^2 + \mu \beta_3 \|B^\top B\|^2 \|\psi(k)\|^2.$$
It can be seen that $\Sigma_1$ and $\Sigma_2$ are positive parameters and $\Sigma_3 > 0$ is a bounded parameter. Thus, $\Delta V(k) < 0$ provided any of the following conditions holds:
$$\|x(k)\| > \sqrt{\frac{\Sigma_3}{\sigma_{\min}(Q_1)}}, \quad \text{or} \quad \|e(k)\| > \sqrt{\frac{\Sigma_3}{\Sigma_2}}, \quad \text{or} \quad \mathrm{tr}\{ \tilde{W}^\top(k) f(\hat{\bar{x}}(k)) f^\top(\hat{\bar{x}}(k)) \tilde{W}(k) \} > \frac{\Sigma_3}{\Sigma_1}. \tag{27}$$
Since the right-hand sides of (27) are constants, condition (27) can be satisfied by choosing appropriate parameters.
Therefore, it is demonstrated that the system is uniformly ultimately bounded, thereby concluding the proof.    □
Remark 4. 
The constraints on the matrices $Q_1$ and $Q_2$ are necessary for the derivation of (27). If these conditions cannot be satisfied, the proof of Theorem 1 may require an alternative approach. Similarly, the relationships among $\beta_1$, $\beta_2$, and $\beta_3$ need to hold, which can be achieved by choosing proper known parameters.

3.3. Control Barrier Function with the Unknown Parameter

For the system (11), security is guaranteed through the adaptive controller, and safety is considered in this part. Firstly, $h(x(k+1))$ is decomposed as
$$h(x(k+1)) = h\big( A x(k) + B \bar{u}(k) + B u_a(k) \big) = \bar{h}(x(k+1)) + \Psi(k+1), \tag{28}$$
where $\bar{h}(x(k+1)) = h(A x(k) + B \bar{u}(k))$ and $\Psi(k+1) = h(x(k+1)) - \bar{h}(x(k+1))$. Since the unknown variable in the system (11) is the attack signal, $\Psi(k+1)$ is a function of $u_a(k)$. We use the estimate of the attack signal in place of the unknown one, namely, $\Psi(k+1) = \hat{\Psi}(k+1) + \Delta$, where $\hat{\Psi}(k+1)$ is obtained from $\Psi(k+1)$ by substituting the unknown variable with its estimate, and $\Delta$ is the resulting error. Assuming that $h(x(k))$ is selected appropriately such that $\Psi(k)$ and $\hat{\Psi}(k)$ are bounded for bounded $u_a(k)$ and $\hat{u}_a(k)$, respectively, the error $\Delta$ is bounded. Choose a function of suitable dimension with $\xi(\Delta) \ge 1$, and define $\|\Delta\| \le \Delta_M$. Then, the second condition of the DECBF can be transformed into
$$\bar{h}(x(k+1)) + \hat{\Psi}(k+1) + (\gamma - 1) h(x(k)) \ge \xi(\Delta) \Delta_M, \quad \forall k \in \mathbb{Z}_+, \; 0 < \gamma \le 1. \tag{29}$$
Remark 5. 
Due to the existence of the unknown variable in the discrete system, it is difficult to directly apply Definition 1 and solve the CBF problem. To address this unknown variable, we introduce the approximation and the discrepancy between the actual and estimated variables. Even though the error cannot be fully determined, the boundedness of the error allows this issue to be addressed by increasing conservatism, specifically by strengthening the constraints of the control barrier function. The complete determination of the error term and precise identification of its range are beyond the scope of this paper and will be explored in future research.

3.4. Secure and Safe Control Strategy

Combining the CBF and the adaptive controller based on the NN method, a new controller that simultaneously maintains security and safety is formulated as the optimization problem
$$u^* = \arg\min_{u} \; \frac{1}{2} (u - u_{\mathrm{ref}})^\top (u - u_{\mathrm{ref}}) \quad \mathrm{s.t.} \quad \bar{h}(x(k+1)) + \hat{\Psi}(k+1) + (\gamma - 1) h(x(k)) \ge \xi(\Delta) \Delta_M, \tag{30}$$
where $u_{\mathrm{ref}}$ is the expected controller. Through the training of the NN, an estimated weight $\hat{W}$ approximating the ideal weight $W$ can be obtained, and the expected controller is $u_{\mathrm{ref}}(k) = -K x(k) - \hat{W}^\top f(\hat{\bar{x}}(k))$. Furthermore, we include an algorithm to illustrate the function of the CBF and how both the security and safety of the system are maintained under FDI attacks. Algorithm 1 is described below.
Algorithm 1 Secure and safe control strategy
1: Initialize parameters: NN parameters, control gain $K$, etc.
2: NN estimation training: (1) use the NN to estimate the unknown FDI attack; (2) attack compensation: integrate the estimated attack signal into the controller to compensate for the attack; (3) weight update: compute the state error and update the weight with the gradient-descent method; (4) return to step (1) with the new weight.
3: Ideal controller: obtain a controller capable of mitigating the FDI attack through the NN estimation in step 2.
4: Control barrier function: according to the requirements, choose appropriate CBFs and determine the relevant parameters.
5: Quadratic optimal control: transform the secure and safe control problem into an optimal control problem, namely, Equation (30).
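Step 5 of Algorithm 1 solves a small optimization at each time instant. A dependency-free sketch for a hypothetical planar system with a scalar input follows, solving Equation (30) by picking the feasible input closest to the reference; a grid search stands in for a QP solver, and all matrices and margins are illustrative assumptions:

```python
import numpy as np

# Hypothetical planar system with scalar input and circular barrier
A = np.array([[1.0, 0.01],
              [0.0, 0.99]])
B = np.array([0.0, 0.01])            # scalar input channel
c = np.array([1.0, 1.0])
gamma, margin = 0.9, 0.4             # DECBF rate and robustness margin xi(Delta)*Delta_M

def h(x):
    """Barrier: the safe set is the disc of radius 3 centered at c."""
    return 9.0 - float((x - c) @ (x - c))

def cbf_ok(x, u):
    """Robustified DECBF constraint with the margin on the right-hand side."""
    return h(A @ x + B * u) + (gamma - 1.0) * h(x) - margin >= 0.0

def safe_input(x, u_ref, grid=np.linspace(-50.0, 50.0, 2001)):
    """Pick the feasible input closest to u_ref (stand-in for the QP in (30))."""
    if cbf_ok(x, u_ref):
        return u_ref                 # reference already safe: QP solution is u_ref
    mask = np.array([cbf_ok(x, u) for u in grid])
    if not mask.any():
        raise ValueError("CBF constraint infeasible at this state")
    feasible = grid[mask]
    return feasible[np.argmin(np.abs(feasible - u_ref))]

x = np.array([2.0, 0.0])             # h(x) = 7: well inside the safe set
u_star = safe_input(x, u_ref=0.0)
print(u_star)                        # 0.0: the reference input already satisfies the constraint
```

The filter is minimally invasive: it returns the reference input unchanged whenever the constraint already holds and otherwise makes the smallest correction that restores feasibility.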
Remark 6. 
Since the controller (10) can ensure system stability, a control Lyapunov function, which would have a similar effect, is not considered here. Moreover, the application of the CBF satisfies the safety requirements, even in the presence of an unknown variable. Integrating the above two methods, the controller $u^*$ can simultaneously achieve secure and safe control.
Remark 7. 
With the help of the NN estimator, when the attack intrudes into the system, the estimator is able to estimate the unknown attack and transmit the data to the controller to mitigate the impact of the attack. However, the estimation of the attack requires some time, and no real-time requirements are imposed on the controller design in this paper. Therefore, the proposed scheme is not suitable for systems with stringent real-time requirements, which is a key area for improvement in future research. Nevertheless, for systems where real-time requirements are not as critical, the proposed scheme still demonstrates effectiveness.

4. Simulation Results

This section demonstrates the effectiveness of the designed control scheme using the UAV system model presented in [47]. To demonstrate the superiority of the designed scheme, a comparative experiment with the LQR method in [40] is conducted. MATLAB 2021b is used for the simulations. The model is discretized with a sampling period of $T_s = 0.01$ s, and the discretized system parameters are
$$A = \begin{bmatrix} 0.9949 & 0.0848 & 0.0205 & 0.0974 & 0 \\ 0.0002 & 0.9766 & 0.0099 & 0.0003 & 0 \\ 0.0005 & 0.1357 & 0.9943 & 0.0000 & 0 \\ 0.0000 & 0.0007 & 0.0100 & 1.0000 & 0 \\ 0.0008 & 0.2454 & 0.0000 & 0.2500 & 1.0000 \end{bmatrix}, \quad B = \begin{bmatrix} 0.0028 & 0.4071 \\ 0.0014 & 0.0000 \\ 0.1820 & 0.0001 \\ 0.0009 & 0.0000 \\ 0.0003 & 0.0002 \end{bmatrix}.$$
The gain $K = (R + B^\top P B)^{-1} B^\top P A$ is obtained from the following Riccati equation:
$$0 = A^\top P A - P + Q - A^\top P B (R + B^\top P B)^{-1} B^\top P A,$$
where $Q$ and $R$ are identity matrices of suitable dimensions, and $K$ is calculated as
$$K = \begin{bmatrix} 0.0013 & 7.1504 & 1.0715 & 12.5094 & 0.8849 \\ 0.8065 & 0.3400 & 0.0331 & 0.6019 & 0.0903 \end{bmatrix}.$$
The actual attack signal is given as
$$u_a(k) = 2\sin(x_4(k)) - 2\sin(x_2(k)) + 4,$$
where $x_2(k)$ and $x_4(k)$ represent the second and fourth components of the state $x(k)$, respectively. In addition, the initial system state is $x_0 = [4; 0.2; 3; 0.5; 2]$, and the initial estimated state is chosen with the same values. In the NN approximation, the relevant parameters are chosen as follows: the single hidden layer has 10 neurons, the initial weights are generated randomly in the range $[0, 1]$, the activation function is the sigmoid, and the learning rate is $\eta = 0.75$. The simulation results are illustrated in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
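Before examining the figures, the interplay of the estimator (12), the adaptive controller (10), and the weight update (16) can be illustrated on a toy scalar analogue of the scheme; every parameter below is a hypothetical stand-in, not the UAV model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy scalar analogue: a - b*K = 0.5 is Schur stable; constant FDI attack u_a = 0.5
a, b, K, eta = 1.0, 1.0, 0.5, 0.5
V = np.array([1.0, 1.0])             # fixed inner weights, 2 hidden neurons
W_hat = np.zeros(2)                  # adapted outer weights
x, x_hat = 1.0, 1.0                  # x_hat(0) = x(0), as noted in Remark 2

for _ in range(300):
    f = sigmoid(V * x_hat)           # hidden-layer output
    u_a = 0.5                        # actual (unknown) attack
    u_hat_a = W_hat @ f              # NN attack estimate, cf. (9)
    x = (a - b * K) * x - b * u_hat_a + b * u_a   # attacked closed loop, cf. (11)
    x_hat = (a - b * K) * x_hat                   # attack-free estimator, cf. (12)
    e = x - x_hat                                 # state error
    W_hat = W_hat + eta * f * e * b               # gradient-descent update, cf. (16)

# The attack estimate converges to the true attack and the state converges to zero
print(abs(x) < 0.05 and abs(W_hat @ sigmoid(V * x_hat) - 0.5) < 0.05)
```

The estimator runs on the nominal closed-loop dynamics, so the error $e$ isolates exactly the uncompensated part of the attack, which is what drives the weight update toward cancellation.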
Assume that the attack signal exists in the system from the initial stage and evolves with changes in the system state, thereby influencing the system. When the FDI attack intrudes into the system, the NN estimation process of the attack signal is illustrated in Figure 2. As shown, during the initial period, the discrepancy between the actual attack signal and its estimate is relatively large due to the significant gap between the initial and ideal weights. However, over time, this error gradually converges to zero, demonstrating the success of the approach. In addition, the convergence process of the system state during the NN training is shown in Figure 3a. When the system state converges to zero, it indicates that the NN has successfully approximated the attack signal and can effectively compensate for the attack occurring in the system. Furthermore, as shown in Figure 3b, when only the LQR method is used, the system's state trajectory fails to converge to zero. However, when employing the NN adaptive estimator, the system state converges, as evidenced by Figure 4. In a system without the adaptive controller, the presence of the attack signal prevents the system states affected by maliciously altered information from converging to zero. Additionally, other states not directly impacted are also influenced to some extent, failing to achieve complete convergence to zero. This comparison highlights the necessity of addressing FDI attacks. If such attacks are left unaddressed, the system states may become uncontrollable.
After addressing the system security issues using the adaptive controller, safety is also considered in this simulation phase. For simplicity, the control barrier function $h(x) = 9 - (x_3 - 1)^2 - (x_5 - 1)^2$ is adopted, which ensures that certain system states remain within predetermined bounds, and $\gamma = 0.9$. To handle the unknown variable in the system, the designed approach is applied; for simplicity, the robustness term is represented here as a constant, $\xi(\Delta)\Delta_M = 0.4$. The ideal controller is the aforementioned $u_{\mathrm{ref}}(k) = -K x(k) - \hat{W}^\top f(\hat{\bar{x}}(k))$, where $\hat{W}$ represents the neural-network-trained weights approximating the ideal weights $W$. The safety simulation results are presented in Figure 5a. When physical constraints are imposed on the system, we can see that the black dashed line is not completely enclosed by the blue solid line, indicating that the system state exceeds the preset range. Furthermore, the red solid line in Figure 5a represents the state trajectory when the CBF is used. Although system safety is maintained, the state trajectories still do not converge to 0, as shown in Figure 5b. However, when the proposed scheme is applied, the system state remains within the preset range and converges to 0, as shown in Figure 6. Therefore, it is clear that the proposed control approach effectively guarantees both the security and safety of the system, while also demonstrating its superiority over the LQR method.

5. Conclusions

In this paper, we have proposed a novel safety control strategy for CPSs under FDI attacks, while ensuring that physical safety constraints are maintained. Our approach handles two main challenges: (i) effectively mitigating the effect of FDI attacks on system performance, and (ii) ensuring system safety despite the presence of unknown attacks. By designing a neural network-based attack estimator, estimates of the FDI attacks were obtained. This estimation was integrated into the control algorithm, allowing the controller to adjust its actions based on both the baseline control and the estimated attack information. Additionally, a CBF was employed to enforce safety constraints, dynamically adapting to attack conditions by incorporating the attack estimates into the safety logic. Finally, simulation results have shown that the designed safety strategy successfully mitigates the influence of FDI attacks, maintaining system safety while ensuring its stability.

Author Contributions

L.X. and Y.Z. contributed to the methodology of this paper; L.X. derived the theoretical process and wrote the paper; Z.L. and Q.Z. reviewed the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62033005, Grant 62320106001, and Grant 62203136.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

I would like to express my sincere gratitude to Chengwei Wu for his invaluable assistance in reviewing and thoroughly checking this manuscript. His detailed guidance on revising and refining the paper has greatly improved its overall quality. I deeply appreciate his time, expertise, and support.

Conflicts of Interest

There are no conflicts of interest regarding the research for this article.

References

  1. Alguliyev, R.; Imamverdiyev, Y.; Sukhostat, L. Cyber-physical systems and their security issues. Comput. Ind. 2018, 100, 212–223. [Google Scholar] [CrossRef]
  2. Ren, H.; Zhang, C.; Ma, H.; Li, H. Cloud-based distributed group asynchronous consensus for switched nonlinear cyber-physical systems. IEEE Trans. Ind. Inform. 2024, 21, 693–702. [Google Scholar] [CrossRef]
  3. Yin, L.; Wu, C.; Zhu, H.; Chen, Y.; Zhang, Q. Secure Control for Cyber–Physical Systems Subject to Aperiodic DoS Attacks. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 7106–7116. [Google Scholar] [CrossRef]
  4. Jahromi, A.N.; Karimipour, H.; Dehghantanha, A.; Choo, K.K.R. Toward detection and attribution of cyber-attacks in IoT-enabled cyber–physical systems. IEEE Internet Things J. 2021, 8, 13712–13722. [Google Scholar] [CrossRef]
  5. Kure, H.I.; Islam, S.; Razzaque, M.A. An integrated cyber security risk management approach for a cyber-physical system. Appl. Sci. 2018, 8, 898. [Google Scholar] [CrossRef]
  6. Habib, A.A.; Hasan, M.K.; Alkhayyat, A.; Islam, S.; Sharma, R.; Alkwai, L.M. False data injection attack in smart grid cyber physical system: Issues, challenges, and future direction. Comput. Electr. Eng. 2023, 107, 108638. [Google Scholar] [CrossRef]
  7. Li, Y.; Wei, X.; Li, Y.; Dong, Z.; Shahidehpour, M. Detection of false data injection attacks in smart grid: A secure federated deep learning approach. IEEE Trans. Smart Grid 2022, 13, 4862–4872. [Google Scholar] [CrossRef]
  8. Fu, H.; Krishnamurthy, P.; Khorrami, F. Combining switching mechanism with re-initialization and anomaly detection for resiliency of cyber–physical systems. Automatica 2025, 172, 111994. [Google Scholar] [CrossRef]
  9. Zhai, L.; Vamvoudakis, K.G. Data-based and secure switched cyber–physical systems. Syst. Control Lett. 2021, 148, 104826. [Google Scholar] [CrossRef]
  10. Lu, A.Y.; Yang, G.H. False data injection attacks against state estimation without knowledge of estimators. IEEE Trans. Autom. Control 2022, 67, 4529–4540. [Google Scholar] [CrossRef]
  11. Wu, C.; Yao, W.; Luo, W.; Pan, W.; Sun, G.; Xie, H.; Wu, L. A secure robot learning framework for cyber attack scheduling and countermeasure. IEEE Trans. Robot. 2023, 39, 3722–3738. [Google Scholar] [CrossRef]
  12. Franze, G.; Famularo, D.; Lucia, W.; Tedesco, F. Cyber–physical systems subject to false data injections: A model predictive control framework for resilience operations. Automatica 2023, 152, 110957. [Google Scholar] [CrossRef]
  13. Yin, L.; Wu, C.; Xu, L.; Zhu, H.; Shao, X.; Yao, W.; Liu, J.; Wu, L. Event-Triggered Secure Control Under Aperiodic DoS Attacks. IEEE Trans. Autom. Sci. Eng. 2024; early access. [Google Scholar] [CrossRef]
  14. Sarangapani, J. Neural Network Control of Nonlinear Discrete-Time Systems; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  15. Sahoo, A.; Xu, H.; Jagannathan, S. Adaptive neural network-based event-triggered control of single-input single-output nonlinear discrete-time systems. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 151–164. [Google Scholar] [CrossRef]
  16. Wang, D.; Zhao, M.; Ha, M.; Hu, L. Adaptive-critic-based hybrid intelligent optimal tracking for a class of nonlinear discrete-time systems. Eng. Appl. Artif. Intell. 2021, 105, 104443. [Google Scholar] [CrossRef]
  17. Tang, F.; Niu, B.; Zong, G.; Zhao, X.; Xu, N. Periodic event-triggered adaptive tracking control design for nonlinear discrete-time systems via reinforcement learning. Neural Netw. 2022, 154, 43–55. [Google Scholar] [CrossRef]
  18. Sargolzaei, A.; Yazdani, K.; Abbaspour, A.; Crane, C.D., III; Dixon, W.E. Detection and mitigation of false data injection attacks in networked control systems. IEEE Trans. Ind. Inform. 2019, 16, 4281–4292. [Google Scholar] [CrossRef]
  19. Yan, W.; Mestha, L.K.; Abbaszadeh, M. Attack detection for securing cyber physical systems. IEEE Internet Things J. 2019, 6, 8471–8481. [Google Scholar] [CrossRef]
  20. Farivar, F.; Haghighi, M.S.; Jolfaei, A.; Alazab, M. Artificial intelligence for detection, estimation, and compensation of malicious attacks in nonlinear cyber-physical systems and industrial IoT. IEEE Trans. Ind. Inform. 2019, 16, 2716–2725. [Google Scholar] [CrossRef]
  21. Santoso, F.; Finn, A. A data-driven cyber–physical system using deep-learning convolutional neural networks: Study on false-data injection attacks in an unmanned ground vehicle under fault-tolerant conditions. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 346–356. [Google Scholar] [CrossRef]
  22. Zhao, D.; Shi, M.; Zhang, H.; Liu, Y.; Zhao, N. Event-triggering adaptive neural network output feedback control for networked systems under false data injection attacks. Chaos Solitons Fractals 2024, 180, 114584. [Google Scholar] [CrossRef]
  23. Ansari-Bonab, P.; Holland, J.C.; Cunningham-Rush, J.; Noei, S.; Sargolzaei, A. Secure Control Design for Cooperative Adaptive Cruise Control Under False Data Injection Attack. IEEE Trans. Intell. Transp. Syst. 2024, 25, 9723–9732. [Google Scholar] [CrossRef]
  24. Chen, G.; Wu, T.; Li, X.; Zhang, Y. Secure and Safe Control of Connected and Automated Vehicles Against False Data Injection Attacks. IEEE Trans. Intell. Transp. Syst. 2024, 25, 12347–12360. [Google Scholar] [CrossRef]
  25. Mousavinejad, E.; Yang, F.; Han, Q.L.; Vlacic, L. A novel cyber attack detection method in networked control systems. IEEE Trans. Cybern. 2018, 48, 3254–3264. [Google Scholar] [CrossRef]
  26. Wang, X.; Ding, D.; Ge, X.; Han, Q.L. Neural-network-based control for discrete-time nonlinear systems with denial-of-service attack: The adaptive event-triggered case. Int. J. Robust Nonlinear Control 2022, 32, 2760–2779. [Google Scholar] [CrossRef]
  27. Wu, C.; Pan, W.; Staa, R.; Liu, J.; Sun, G.; Wu, L. Deep reinforcement learning control approach to mitigating actuator attacks. Automatica 2023, 152, 110999. [Google Scholar] [CrossRef]
  28. Alan, A.; Taylor, A.J.; He, C.R.; Ames, A.D.; Orosz, G. Control barrier functions and input-to-state safety with application to automated vehicles. IEEE Trans. Control Syst. Technol. 2023, 31, 2744–2759. [Google Scholar] [CrossRef]
  29. Seo, J.; Lee, J.; Baek, E.; Horowitz, R.; Choi, J. Safety-critical control with nonaffine control inputs via a relaxed control barrier function for an autonomous vehicle. IEEE Robot. Autom. Lett. 2022, 7, 1944–1951. [Google Scholar] [CrossRef]
  30. Ames, A.D.; Xu, X.; Grizzle, J.W.; Tabuada, P. Control barrier function based quadratic programs for safety critical systems. IEEE Trans. Autom. Control 2016, 62, 3861–3876. [Google Scholar] [CrossRef]
  31. Ames, A.D.; Grizzle, J.W.; Tabuada, P. Control barrier function based quadratic programs with application to adaptive cruise control. In Proceedings of the 53rd IEEE Conference on Decision and Control (CDC), Los Angeles, CA, USA, 15–17 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 6271–6278. [Google Scholar]
  32. Robey, A.; Hu, H.; Lindemann, L.; Zhang, H.; Dimarogonas, D.V.; Tu, S.; Matni, N. Learning control barrier functions from expert demonstrations. In Proceedings of the 2020 59th IEEE Conference on Decision and Control (CDC), Jeju, Republic of Korea, 14–18 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 3717–3724. [Google Scholar]
  33. Zeng, J.; Zhang, B.; Sreenath, K. Safety-critical model predictive control with discrete-time control barrier function. In Proceedings of the 2021 American Control Conference (ACC), New Orleans, LA, USA, 25–28 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 3882–3889. [Google Scholar]
  34. Xiao, W.; Wang, T.H.; Hasani, R.; Chahine, M.; Amini, A.; Li, X.; Rus, D. Barriernet: Differentiable control barrier functions for learning of safe robot control. IEEE Trans. Robot. 2023, 39, 2289–2307. [Google Scholar] [CrossRef]
  35. Daş, E.; Murray, R.M. Robust safe control synthesis with disturbance observer-based control barrier functions. In Proceedings of the 2022 IEEE 61st Conference on Decision and Control (CDC), Cancun, Mexico, 6–9 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 5566–5573. [Google Scholar]
  36. Agrawal, D.R.; Panagou, D. Safe and robust observer-controller synthesis using control barrier functions. IEEE Control Syst. Lett. 2022, 7, 127–132. [Google Scholar] [CrossRef]
  37. Kong, X.Y.; Yang, G.H. An intrusion detection method based on self-generated coding technology for stealthy false data injection attacks in train-ground communication systems. IEEE Trans. Ind. Electron. 2022, 70, 8468–8476. [Google Scholar] [CrossRef]
  38. Liu, H.; Zhang, Y.; Li, Y.; Niu, B. Proactive attack detection scheme based on watermarking and moving target defense. Automatica 2023, 155, 111163. [Google Scholar] [CrossRef]
  39. Sun, J.; Yang, J.; Zeng, Z. Safety-critical control with control barrier function based on disturbance observer. IEEE Trans. Autom. Control 2024, 69, 4750–4756. [Google Scholar] [CrossRef]
  40. Lewis, F.L.; Vamvoudakis, K.G. Reinforcement learning for partially observable dynamic processes: Adaptive dynamic programming using measured output data. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2010, 41, 14–25. [Google Scholar] [CrossRef] [PubMed]
  41. Shen, Z.; Yang, H.; Zhang, S. Neural network approximation: Three hidden layers are enough. Neural Netw. 2021, 141, 160–173. [Google Scholar] [CrossRef]
  42. Liu, Y.J.; Tong, S. Adaptive NN tracking control of uncertain nonlinear discrete-time systems with nonaffine dead-zone input. IEEE Trans. Cybern. 2014, 45, 497–505. [Google Scholar] [CrossRef]
  43. Yeşildirek, A.; Lewis, F.L. Feedback linearization using neural networks. Automatica 1995, 31, 1659–1664. [Google Scholar] [CrossRef]
  44. Sahoo, A.; Xu, H.; Jagannathan, S. Neural network-based event-triggered state feedback control of nonlinear continuous-time systems. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 497–509. [Google Scholar] [CrossRef]
  45. Xiong, Y.; Zhai, D.H.; Tavakoli, M.; Xia, Y. Discrete-time control barrier function: High-order case and adaptive case. IEEE Trans. Cybern. 2022, 53, 3231–3239. [Google Scholar] [CrossRef]
  46. Agrawal, A.; Sreenath, K. Discrete control barrier functions for safety-critical control of discrete systems with application to bipedal robot navigation. In Proceedings of the Robotics: Science and Systems, Cambridge, MA, USA, 12–16 July 2017; Volume 13, pp. 1–10. [Google Scholar]
  47. Eressa, M.R.; Zheng, D.; Han, M. PID and neural net controller performance comparsion in UAV pitch attitude control. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 000762–000767. [Google Scholar]
Figure 1. The proposed control strategy for CPSs under the FDI attack.
Figure 2. Neural network estimation process: (a) attack signal estimation, (b) estimation error.
Figure 3. System state trajectories under attack: (a) system state approximation process, (b) system state without adopting control strategy.
Figure 4. State trajectories with adaptive controller.
Figure 5. System state with the LQR method under the attack: (a) state trajectories in the x₃–x₅ plane, and (b) state trajectories.
Figure 6. System state with the NN-CBF strategy under the attack: (a) state trajectories in the x₃–x₅ plane, and (b) state trajectories.
Table 1. Comparisons with the existing methods. The schemes in [37], [38], [23], [39], and our paper are compared across the following aspects: secure control, detection, estimation, optimal control, neural network, safe control, and proactive defense.