Attack–Defense Confrontation Analysis and Optimal Defense Strategy Selection Using Hybrid Game Theoretic Methods
Abstract
1. Introduction
 (1)
 An MG-based electricity market model of an SG power system is considered under FDI attacks. The electricity market is established with a double-sided bidding mechanism using Nash equilibrium game theory.
 (2)
 A hybrid game model of the payoff functions between attack and defense is established, and game-theoretic methods are proposed to analyze the interaction between the attacking and defending sides.
 (3)
 The benefits of the attack and defense strategies are quantified using the evolutionary game method, and the dynamic evolutionary learning of the attack and defense probabilities is discussed.
 (4)
 The optimal defense strategy is selected after the evolutionary stable equilibrium solution is obtained, and the dynamic confrontation trend of both sides is studied.
2. Electricity Markets Attack Model
2.1. Electricity Markets Model with MGs
 (1)
 Strategy for MGs: In the bidding process, each participant's strategy is formulated based on the range of its power generation capacity within a specific strategy space:$${P}_{{MG}_{i},\tau}\in \left\{{\Phi}_{{MG}_{i},\tau}=\left[{P}_{{MG}_{i},\tau}^{\mathrm{min}},{P}_{{MG}_{i},\tau}^{\mathrm{max}}\right]\right\}$$
 (2)
 Bidding profit function ${B}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)$ for the ${i}^{\prime}\mathrm{th}$ player:$${B}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)={\lambda}_{ref,{i}^{\prime}\tau}{P}_{{MG}_{{i}^{\prime}},\tau}\left(1+{\theta}_{{i}^{\prime}\tau}\right)-{C}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)$$$$\begin{array}{l}\mathrm{maximize}\;{B}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)\\ \mathrm{subject\;to}\;{\displaystyle \sum _{{i}^{\prime}=1}^{{M}_{g}}{P}_{{MG}_{{i}^{\prime}},\tau}}+{\displaystyle \sum _{{i}^{\prime}=1}^{{M}_{g}}{P}_{exchange{i}^{\prime},\tau}}={\displaystyle \sum _{{j}^{\prime}=1}^{D}{L}_{{j}^{\prime}\tau}}\\ {P}_{{MG}_{{i}^{\prime}},\tau}^{\mathrm{min}}\le {P}_{{MG}_{{i}^{\prime}},\tau}\le {P}_{{MG}_{{i}^{\prime}},\tau}^{\mathrm{max}}\end{array}$$$${L}_{Lagrange{i}^{\prime},\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)={B}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)+{\mu}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}^{\mathrm{min}}-{P}_{{MG}_{{i}^{\prime}},\tau}\right)+{\upsilon}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}-{P}_{{MG}_{{i}^{\prime}},\tau}^{\mathrm{max}}\right)$$
 (3)
 Nash equilibrium game: The Nash equilibrium $\left({P}_{MG1,\tau},{P}_{MG2,\tau},\dots ,{P}_{MG{M}_{g},\tau}\right)$ means that when ${P}_{{MG}_{{i}^{\prime}},\tau}$ is implemented in the game, no player can gain additional profit by changing its power generation scheduling. The strategy set of all players can be calculated through the following optimality iterations:$$\frac{\partial {L}_{Lagrange{i}^{\prime},\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)}{\partial {P}_{{MG}_{{i}^{\prime}},\tau}}=\frac{\partial {B}_{{i}^{\prime}\tau}\left({P}_{{MG}_{{i}^{\prime}},\tau}\right)}{\partial {P}_{{MG}_{{i}^{\prime}},\tau}}-{\mu}_{{i}^{\prime}\tau}+{\upsilon}_{{i}^{\prime}\tau}=0$$
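The equilibrium above can be sketched numerically: each MG best-responds to the reference price through the first-order condition ${\lambda}_{ref}(1+\theta)=\partial C/\partial P$, with clipping to $[P^{\mathrm{min}},P^{\mathrm{max}}]$ standing in for the multipliers $\mu$ and $\upsilon$, while the price adjusts toward the supply–demand balance constraint. All cost coefficients, capacity bounds, the markup $\theta$, and the demand level below are hypothetical values chosen only for illustration.

```python
import numpy as np

# Hypothetical quadratic generation costs C_i(P) = a_i*P^2 + b_i*P for three MGs
a = np.array([0.04, 0.03, 0.05])
b = np.array([2.0, 2.5, 1.8])
p_min = np.array([0.0, 0.0, 0.0])
p_max = np.array([30.0, 40.0, 25.0])
theta = 0.1        # assumed bidding markup factor
demand = 60.0      # assumed total load net of exchanged power

lam = 3.0          # initial reference price lambda_ref
for _ in range(500):
    # Best response of each MG: maximize lam*P*(1+theta) - C_i(P),
    # i.e. lam*(1+theta) = 2*a_i*P + b_i at an interior optimum
    p = (lam * (1 + theta) - b) / (2 * a)
    p = np.clip(p, p_min, p_max)          # bounds enforce mu, upsilon
    # Price adjustment toward the supply-demand balance constraint
    lam += 0.01 * (demand - p.sum())

print("equilibrium price:", round(lam, 3))
print("equilibrium outputs:", np.round(p, 2))
```

At the fixed point, no MG can improve its bidding profit unilaterally and total generation matches demand, which is exactly the Nash equilibrium condition of the constrained problem.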
2.2. Attack Model of the Real-Time Market
3. Behavior Model of Attackers and Defenders
3.1. Attack Payoff Modeling
3.2. Defense Payoff Modeling
3.3. Hybrid Game Method
 (1)
 ${\Omega}_{A}=\left({\Omega}_{A1},{\Omega}_{A2},\dots ,{\Omega}_{An}\right)$ represents the game strategy space of attackers;
 (2)
 ${\Omega}_{D}=\left({\Omega}_{D1},{\Omega}_{D2},\dots ,{\Omega}_{Dm}\right)$ represents the game strategy space of defenders; $n$ and $m$ are the numbers of attack and defense strategies, respectively, with n, m $\ge $ 2;
 (3)
 ${U}_{A}=\left({U}_{A1},{U}_{A2},\dots ,{U}_{An}\right)$ represents the payoff function of attackers corresponding to the strategies of $\left({\Omega}_{A},{\Omega}_{D}\right)$;
 (4)
 ${U}_{D}=\left({U}_{D1},{U}_{D2},\dots ,{U}_{Dm}\right)$ is the payoff function of defenders;
 (5)
 ${p}_{A}=\left({p}_{A1},{p}_{A2},\dots ,{p}_{An}\right)$ is the probability of attackers corresponding to $\left({\Omega}_{A},{\Omega}_{D}\right)$;
 (6)
 ${q}_{D}=\left({q}_{D1},{q}_{D2},\dots ,{q}_{Dm}\right)$ is the probability of defenders corresponding to $\left({\Omega}_{A},{\Omega}_{D}\right)$.
3.3.1. Stackelberg Equilibrium
3.3.2. Markov Game Solution
3.3.3. Distributed Learning Algorithm
4. Design of the ADEG Strategy
4.1. ADEG Model and Analysis
4.1.1. ADEG Model Construction
 (1)
 Profits of all the players:$${U}_{Au}={\displaystyle \sum _{v=1}^{m}{q}_{Dv}{a}_{uv}},\overline{{U}_{A}}={\displaystyle \sum _{u=1}^{n}{p}_{Au}{U}_{Au}}$$$${U}_{Dv}={\displaystyle \sum _{u=1}^{n}{p}_{Au}{b}_{uv}},\overline{{U}_{D}}={\displaystyle \sum _{v=1}^{m}{q}_{Dv}{U}_{Dv}}$$
 (2)
 Replication dynamic equations for the profits:$$\frac{d{p}_{Au}}{dt}={p}_{Au}\left({U}_{Au}-\overline{{U}_{A}}\right)$$$$\frac{d{q}_{Dv}}{dt}={q}_{Dv}\left({U}_{Dv}-\overline{{U}_{D}}\right)$$
 (3)
 Stable equilibrium solution:$$\left\{\begin{array}{l}\frac{d{p}_{Au}}{dt}=0\\ \frac{d{q}_{Dv}}{dt}=0\end{array}\right.$$
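The three items above can be simulated directly. The sketch below integrates the replication dynamic equations with a forward-Euler step, using hypothetical $2\times 2$ payoff matrices $a_{uv}$ and $b_{uv}$ together with the expected profits ${U}_{Au}=\sum_v {q}_{Dv}{a}_{uv}$ and ${U}_{Dv}=\sum_u {p}_{Au}{b}_{uv}$:

```python
import numpy as np

# Hypothetical 2x2 payoff matrices: a_uv for the attacker, b_uv for the defender
A = np.array([[2.0, 1.0],
              [0.5, 0.3]])
B = np.array([[1.0, 2.0],
              [0.5, 1.5]])

p = np.array([0.3, 0.7])   # attacker mixed strategy p_A
q = np.array([0.6, 0.4])   # defender mixed strategy q_D
dt = 0.01                  # Euler step

for _ in range(20000):
    U_A = A @ q                        # expected attacker profits U_Au
    U_D = B.T @ p                      # expected defender profits U_Dv
    p = p + dt * p * (U_A - p @ U_A)   # dp_Au/dt = p_Au (U_Au - avg U_A)
    q = q + dt * q * (U_D - q @ U_D)   # dq_Dv/dt = q_Dv (U_Dv - avg U_D)
    p, q = p / p.sum(), q / q.sum()    # guard against numerical drift

print("stationary p_A:", np.round(p, 3))
print("stationary q_D:", np.round(q, 3))
```

With these assumed payoffs the first attack strategy dominates, so the dynamics drive $p_A$ toward $(1,0)$ and the defender's response $q_D$ toward $(0,1)$; the resulting point satisfies $d{p}_{Au}/dt=d{q}_{Dv}/dt=0$.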
4.1.2. Evolutionary Stable Solution
 (1)
 Expected and average profit of a defender:$${U}_{D1}={p}_{A1}{b}_{11}+{p}_{A2}{b}_{21}$$$${U}_{D2}={p}_{A1}{b}_{12}+{p}_{A2}{b}_{22}$$$${\overline{U}}_{D}={q}_{D1}{U}_{D1}+{q}_{D2}{U}_{D2}$$$$\left\{\begin{array}{l}\frac{d{q}_{D1}(t)}{dt}={q}_{D1}\left({U}_{D1}-{\overline{U}}_{D}\right)\\ \frac{d{q}_{D2}(t)}{dt}={q}_{D2}\left({U}_{D2}-{\overline{U}}_{D}\right)\end{array}\right.$$
 (2)
 Expected and average profit of an attacker:$${U}_{A1}={q}_{D1}{a}_{11}+{q}_{D2}{a}_{21}$$$${U}_{A2}={q}_{D1}{a}_{12}+{q}_{D2}{a}_{22}$$$${\overline{U}}_{A}={p}_{A1}{U}_{A1}+{p}_{A2}{U}_{A2}$$$$\left\{\begin{array}{l}\frac{d{p}_{A1}(t)}{dt}={p}_{A1}\left({U}_{A1}-{\overline{U}}_{A}\right)\\ \frac{d{p}_{A2}(t)}{dt}={p}_{A2}\left({U}_{A2}-{\overline{U}}_{A}\right)\end{array}\right.$$
 (3)
 Based on the proposed ADEG model, since $\frac{d{q}_{D1}(t)}{dt}=-\frac{d{q}_{D2}(t)}{dt}$ and $\frac{d{p}_{A1}(t)}{dt}=-\frac{d{p}_{A2}(t)}{dt}$, the solution can be obtained by calculating only $\left\{\begin{array}{l}\frac{d{q}_{D1}(t)}{dt}=0\\ \frac{d{p}_{A1}(t)}{dt}=0\end{array}\right.$. From these equations, five sets of solutions for ${p}_{A1}$ and ${q}_{D1}$ are obtained, as listed in Table 1.
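The five solution sets can be checked mechanically: the four pure profiles and the mixed point $\left(\frac{b_{22}-b_{21}}{b_{11}-b_{21}-b_{12}+b_{22}},\frac{a_{22}-a_{21}}{a_{11}-a_{21}-a_{12}+a_{22}}\right)$ all make both replicator derivatives vanish. The payoff entries below are assumed placeholders, and the expected-profit formulas follow the definitions above:

```python
# Hypothetical 2x2 payoff entries, indexed as in U_D1 = p_A1*b11 + p_A2*b21
# and U_A1 = q_D1*a11 + q_D2*a21
a11, a12, a21, a22 = 2.0, 0.5, 1.0, 1.5   # attacker payoffs (assumed)
b11, b12, b21, b22 = 1.5, 0.5, 0.0, 1.0   # defender payoffs (assumed)

def replicator_rhs(p, q):
    """Return (dp_A1/dt, dq_D1/dt) of the replication dynamic equations."""
    UA1, UA2 = q * a11 + (1 - q) * a21, q * a12 + (1 - q) * a22
    UD1, UD2 = p * b11 + (1 - p) * b21, p * b12 + (1 - p) * b22
    dp = p * (UA1 - (p * UA1 + (1 - p) * UA2))
    dq = q * (UD1 - (q * UD1 + (1 - q) * UD2))
    return dp, dq

# The five stationary points of Table 1: four pure profiles plus the mixed one
p_star = (b22 - b21) / (b11 - b21 - b12 + b22)
q_star = (a22 - a21) / (a11 - a21 - a12 + a22)
points = [(0, 0), (0, 1), (1, 0), (1, 1), (p_star, q_star)]
for p, q in points:
    dp, dq = replicator_rhs(p, q)
    assert abs(dp) < 1e-12 and abs(dq) < 1e-12
print("stationary points:", points)
```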
4.2. ADEG-Based Optimal Defense Strategy Selection
 (1)
 For the defender, when ${p}_{A1}=\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}$, $\frac{d{q}_{D1}(t)}{dt}=0$ holds for any probability selection of the defense strategy ${q}_{D1}$. However, once the value of ${p}_{A1}$ shifts, $\frac{d{q}_{D1}(t)}{dt}$ changes dramatically, so this state is not stable. Once ${p}_{A1}\ne \frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}$, ${q}_{D1}=0$ and ${q}_{D1}=1$ are the two stable states: when ${p}_{A1}>\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}$, ${q}_{D1}=0$ is the evolutionarily stable strategy; when ${p}_{A1}<\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}$, ${q}_{D1}=1$ is the evolutionarily stable strategy.
 (2)
 For the attacker, when ${q}_{D1}>\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}}$, ${p}_{A1}=1$ is the evolutionarily stable strategy; when ${q}_{D1}<\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}}$, ${p}_{A1}=0$ is the evolutionarily stable strategy; when ${q}_{D1}=\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}}$, $\frac{d{p}_{A1}(t)}{dt}=0$ holds for any ${p}_{A1}$. However, once the value of ${q}_{D1}$ shifts, $\frac{d{p}_{A1}(t)}{dt}$ undergoes significant changes, leading to an unstable state.
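The defender's threshold rule in item (1) can be folded into a small selection helper; the payoff entries used in the example are assumed values chosen so the threshold is easy to verify:

```python
def stable_defense_probability(pA1, b11, b12, b21, b22):
    """Evolutionarily stable q_D1 for a given attack probability pA1.

    The threshold is p* = (b22 - b21) / (b11 - b21 - b12 + b22).
    """
    p_star = (b22 - b21) / (b11 - b21 - b12 + b22)
    if pA1 > p_star:
        return 0.0    # q_D1 = 0 is the evolutionarily stable strategy
    if pA1 < p_star:
        return 1.0    # q_D1 = 1 is the evolutionarily stable strategy
    return None       # every q_D1 is stationary, but none is stable

# assumed defender payoffs with threshold p* = (1 - 0)/(1 - 0 - 0 + 1) = 0.5
assert stable_defense_probability(0.8, 1.0, 0.0, 0.0, 1.0) == 0.0
assert stable_defense_probability(0.2, 1.0, 0.0, 0.0, 1.0) == 1.0
assert stable_defense_probability(0.5, 1.0, 0.0, 0.0, 1.0) is None
```

The attacker-side rule in item (2) is symmetric, with $a_{uv}$ in place of $b_{uv}$ and the roles of the two probabilities exchanged.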
Algorithm 1. Process of obtaining the optimal defense strategy
Phase 1:
Phase 2:
${Q}_{A}\left(s,{\Omega}_{A},{\Omega}_{D}\right)$, ${Q}_{D}\left(s,{\Omega}_{A},{\Omega}_{D}\right)$, ${V}_{A}\left({s}^{\prime}\right)$, ${V}_{D}\left({s}^{\prime}\right)$
while Not Converged do
 Collect the payoff ${r}_{\omega}\left(t\right)$
 Update the strategy ${q}^{\left(\omega \right)}\left(t+1\right)={q}^{\left(\omega \right)}\left(t\right)+\epsilon {r}_{\omega}\left(t\right)\left({e}^{\left(\omega \right)}\left(t\right)-{q}^{\left(\omega \right)}\left(t\right)\right)$
 Check convergence
 if Converged then
  break
 end if
end while
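The update step inside the while loop is a linear reward-based learning rule: the played strategy's probability is pulled toward the unit vector ${e}^{(\omega)}(t)$ in proportion to the collected payoff. A minimal sketch with two defense strategies, assuming hypothetical payoffs normalized to $[0,1]$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical average payoffs of two defense strategies, normalized to [0, 1]
payoff = np.array([0.2, 1.0])
q = np.array([0.5, 0.5])   # mixed strategy q^(omega)(t)
eps = 0.05                 # learning-rate epsilon

for t in range(4000):
    w = rng.choice(2, p=q)        # play a strategy sampled from q
    r = payoff[w]                 # collect the payoff r_omega(t)
    e = np.eye(2)[w]              # unit vector e^(omega)(t) of the action
    q = q + eps * r * (e - q)     # linear reward-based update

print("learned strategy:", np.round(q, 3))
```

Because each update adds $\epsilon r_{\omega}(t)\left(\sum_\omega e^{(\omega)}-\sum_\omega q^{(\omega)}\right)=0$ to the total probability mass, the iterate stays a probability vector, and the mass concentrates on the strategy with the higher payoff.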







5. Discussion
5.1. Optimal Attack–Defense Strategies Game Model
5.1.1. Players with Complete Information
 a.
 IEEE 14-bus system
 (1)
 When the attacker chooses ${\Omega}_{A1}$, the defender will choose ${\Omega}_{D1}$ to defend, as the cost ${C}_{D}$ of ${\Omega}_{D2}$ is high;
 (2)
 When the attacker chooses ${\Omega}_{A2}$, the defender will choose ${\Omega}_{D2}$ to defend, as the costs ${C}_{G}$ and ${C}_{M}$ of ${\Omega}_{D1}$ are high;
 (3)
 To achieve the best defense effect, the defender will choose ${\Omega}_{D1}$ when the attacker selects ${\Omega}_{A1}$, and the defender will choose ${\Omega}_{D2}$ when the attacker selects ${\Omega}_{A2}$;
 (4)
 When the players choose the Markov game algorithm, the profit values of both players increase, but so do their costs;
 (5)
 When the distributed learning algorithm is employed, the costs for both players decrease noticeably, and the optimal payoff values are obtained.
 (1)
 When the defender chooses ${\Omega}_{D1}$, the power generation costs of MG2 and MG3 are low, so their generation is close to full output. The generation of MG8 is lower, and MG6 remains in isolated mode because its generation cost is high.
 (2)
 When the defender chooses ${\Omega}_{D2}$, the outputs of MG2 and MG6 remain at the same level. MG2 needs to purchase electricity to meet sudden changes in load demand, and MG8's output decreases to prevent a sharp increase in the cost of the entire power system. Notably, when MG3 cannot purchase electricity, the output of MG8 increases sharply.
 b.
 IEEE 118-bus system
 (1)
 Compared with the IEEE 14-bus system, the distributed learning algorithm performs better in the IEEE 118-bus system, but the computational complexity increases, so the value of ${C}_{M}$ is higher;
 (2)
 The IEEE 118-bus system is more complex, with many generators and loads; ${C}_{D}$ and ${C}_{G}$ are higher, but the attacker's profit is lower and the defense effect is better;
 (3)
 The difference between defense costs is smaller in the IEEE 118-bus system, and the proposed game method is numerically better than the other game methods at the equilibrium point, indicating that the method proposed in this paper is suitable for complex, large systems.
5.1.2. Players with Partial Information
 (1)
 When the players have only partial information, the payoffs on both sides are reduced, as the attack is not fully established and the defense does not use the optimal strategy;
 (2)
 In this situation, the cost ${C}_{G}$ is higher than ${C}_{D}$, so defenders choose to readjust the generator output instead of shedding load;
 (3)
 When the mixed strategy is chosen, the defense effect in the IEEE 118-bus system is better than in the IEEE 14-bus system because its generators and loads are more complex.
5.2. ADEG-Based Optimal Defense Strategy Selection
5.2.1. Effectiveness of the ADEG Model
 (1)
 From Figure 5a–d, when ${p}_{A}=1$ or ${p}_{A}=0$, the final selection probability of the attacker is always 1 or 0; similarly, when ${q}_{D}=1$ or ${q}_{D}=0$, the final selection probability of the defender is always 1 or 0;
 (2)
 From Figure 5e–h, when mixed probability strategies are selected, the stable solutions are $\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}=0.51$ and $\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}}=0.59$. When the initial attack probability is larger than 0.51, the final selection probability of the defender eventually changes to 1; when it is less than 0.51, the defender's final selection probability eventually changes to 0. Similarly, for the attacker, when the initial defense probability is larger than 0.59, the final selection probability of the attacker eventually changes to 1; when it is less than 0.59, the attacker's final selection probability eventually changes to 0;
 (3)
 To further illustrate the dynamic effect of the ADEG model, and to verify that the probabilities of the two players affect each other, the following scenarios are considered, as shown in Figure 5i,j:
 (a)
 Situation 1: the initial probabilities of ${p}_{A}$ and ${q}_{D}$ are set to fixed values of 0.2 and 0.9, respectively. At t = 5 s, ${p}_{A}$ suddenly rises to 0.8, and the variation of ${q}_{D}$ is observed;
 (b)
 Situation 2: the initial probabilities of ${p}_{A}$ and ${q}_{D}$ are set to fixed values of 0.5 and 0, respectively. At t = 5 s, ${q}_{D}$ suddenly rises to 0.9, and the variation of ${p}_{A}$ is observed.
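Situation 1 can be reproduced by driving the defender's replicator equation with an exogenous attack probability that jumps at t = 5 s. The defender payoff matrix below is an assumed one, chosen only so that the switching threshold $\frac{b_{22}-b_{21}}{b_{11}-b_{21}-b_{12}+b_{22}}$ equals the 0.51 reported above:

```python
import numpy as np

# Assumed defender payoff matrix b_uv with switching threshold
# (b22 - b21)/(b11 - b21 - b12 + b22) = 0.51/1.00 = 0.51
B = np.array([[0.49, 0.0],
              [0.0, 0.51]])
dt = 0.001
q1 = 0.9   # initial defense probability q_D (Situation 1)

def step(q1, pA):
    """One Euler step of dq_D1/dt = q_D1 (U_D1 - avg U_D) at fixed p_A."""
    q = np.array([q1, 1 - q1])
    U = B.T @ np.array([pA, 1 - pA])   # U_Dv = sum_u p_Au * b_uv
    return q1 + dt * q1 * (U[0] - q @ U)

# Attack probability fixed at 0.2, suddenly rising to 0.8 at t = 5 s
for t in np.arange(0, 20, dt):
    pA = 0.2 if t < 5 else 0.8
    q1 = step(q1, pA)

print("final q_D:", round(q1, 3))
```

While ${p}_{A}=0.2<0.51$, $q_D$ decays from 0.9; after the jump to 0.8, the sign of ${U}_{D1}-{\overline{U}}_{D}$ flips and $q_D$ is driven back toward 1, mirroring the interaction shown in Figure 5i.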
5.2.2. Final Defense Strategies Selection
 (1)
 For fixed values of the evolutionary stable solutions, the final optimal defense strategies are listed in Table 10;
 (2)
 For mixed selections of the initial probabilities, the final optimal defense strategies are listed in Table 11;
 (3)
 When the mixed selection of initial probabilities changes in real time, the final optimal defense strategies are listed in Table 12;
 (4)
 To illustrate the effectiveness of the ADEG model in selecting strategies, dynamic situations with constant initial probabilities are shown in Table 13.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Table 1. Five sets of solutions for ${p}_{A1}$ and ${q}_{D1}$.

| RDE Solutions | Defense Strategy | With Probability | Attack Strategy | With Probability |
|---|---|---|---|---|
| ${p}_{A1}=0$, ${q}_{D1}=0$ | ${\Omega}_{D1}$ | 0 | ${\Omega}_{A1}$ | 0 |
| ${p}_{A1}=0$, ${q}_{D1}=1$ | ${\Omega}_{D2}$ | 1 | ${\Omega}_{A1}$ | 0 |
| ${p}_{A1}=1$, ${q}_{D1}=0$ | ${\Omega}_{D1}$ | 0 | ${\Omega}_{A2}$ | 1 |
| ${p}_{A1}=1$, ${q}_{D1}=1$ | ${\Omega}_{D2}$ | 1 | ${\Omega}_{A2}$ | 1 |
| ${p}_{A1}=\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}$, ${q}_{D1}=\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}}$ | ${\Omega}_{D1}$, ${\Omega}_{D2}$ | $\left(\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}},\ 1-\frac{{a}_{22}-{a}_{21}}{{a}_{11}-{a}_{21}-{a}_{12}+{a}_{22}}\right)$ | ${\Omega}_{A1}$, ${\Omega}_{A2}$ | $\left(\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}},\ 1-\frac{{b}_{22}-{b}_{21}}{{b}_{11}-{b}_{21}-{b}_{12}+{b}_{22}}\right)$ |
| Initial Attack Strategies | Attacked Line | Power Flow without Attack (MW) | Power Flow with Attack (MW) | Attack Level (%) | Initial Probability |
|---|---|---|---|---|---|
| ${\Omega}_{A1}$ | 2–3 | 54.7 | 27.35 | 50 | ${p}_{A}=1$ |
| ${\Omega}_{A2}$ | 2–3, 4–5 | 54.7 | 49.3 | 10 | ${p}_{A}=0$ |
| ${\Omega}_{A1}$, ${\Omega}_{A2}$ | 2–3, 4–5 | 54.7 | 43.9 | 20 | ${p}_{A}=0.4$ |
| ${\Omega}_{A1}$, ${\Omega}_{A2}$ | 2–3 | 54.7 | 16.41 | 80 | ${p}_{A}=0.6$ |
| Initial Defense Strategies | Detection Mode: MGs in Grid-Connected Mode | Detection Mode: Load Shedding | Initial Probability |
|---|---|---|---|
| ${\Omega}_{D1}$ | with | without | ${q}_{D}=1$ |
| ${\Omega}_{D2}$ | without | with | ${q}_{D}=0$ |
| ${\Omega}_{D1}$, ${\Omega}_{D2}$ | with | with | ${q}_{D}=0.5$ |
| ${\Omega}_{D1}$, ${\Omega}_{D2}$ | with | with | ${q}_{D}=0.6$ |
| Initial Attack Strategies | Detection Delay | Detection Rate |
|---|---|---|
| ${\Omega}_{A1}$ | 28 | 85.03% |
| ${\Omega}_{A2}$ | 12 | 90.98% |
| ${\Omega}_{A1}$, ${\Omega}_{A2}$ | 9 | 92.01% |
| ${\Omega}_{A1}$, ${\Omega}_{A2}$ | 10 | 95.35% |
| Method | Initial Probabilities | Attack Payoff ${\mathrm{Profit}}_{att}$ (\$) | Attack Payoff ${\mathrm{EL}}_{att}$ (\$) | Defense Payoff ${C}_{D}+{C}_{G}$ (\$) | Defense Payoff ${C}_{M}$ (\$) |
|---|---|---|---|---|---|
| Stackelberg method | ${p}_{A}=0$, ${q}_{D}=0$ | 29.38 | 6.35 | 27.32 + 0.85 | 5.06 |
| Stackelberg method | ${p}_{A}=0$, ${q}_{D}=1$ | 59.82 | 7.31 | 37.01 + 20.21 | 12.10 |
| Stackelberg method | ${p}_{A}=1$, ${q}_{D}=0$ | 67.33 | 13.06 | 32.09 + 10.50 | 14.88 |
| Stackelberg method | ${p}_{A}=1$, ${q}_{D}=1$ | 36.48 | 12.45 | 48.53 + 1.21 | 9.35 |
| Markov algorithm | ${p}_{A}=0$, ${q}_{D}=0$ | 34.41 | 8.25 | 28.44 + 0.91 | 5.73 |
| Markov algorithm | ${p}_{A}=0$, ${q}_{D}=1$ | 63.12 | 8.81 | 39.33 + 20.45 | 13.00 |
| Markov algorithm | ${p}_{A}=1$, ${q}_{D}=0$ | 37.52 | 12.50 | 49.44 + 1.81 | 10.01 |
| Markov algorithm | ${p}_{A}=1$, ${q}_{D}=1$ | 69.41 | 13.86 | 33.09 + 11.23 | 14.90 |
| Proposed algorithm | ${p}_{A}=0$, ${q}_{D}=0$ | 33.31 | 7.05 | 27.00 + 0.71 | 5.01 |
| Proposed algorithm | ${p}_{A}=0$, ${q}_{D}=1$ | 62.05 | 7.02 | 36.52 + 19.11 | 11.99 |
| Proposed algorithm | ${p}_{A}=1$, ${q}_{D}=0$ | 36.88 | 10.50 | 46.55 + 1.05 | 9.12 |
| Proposed algorithm | ${p}_{A}=1$, ${q}_{D}=1$ | 68.22 | 11.12 | 30.09 + 9.15 | 12.76 |
| Initial Probabilities | Pg2 | Pg3 | Pg6 | Pg8 |
|---|---|---|---|---|
| no attack | 28.01 | 28.92 | 0.04 | 11.93 |
| ${p}_{A}=0$, ${q}_{D}=0$ | 28 | 33.97 | 0.01 | 7.16 |
| ${p}_{A}=0$, ${q}_{D}=1$ | 28 | 25.32 | 0.01 | 15.93 |
| ${p}_{A}=1$, ${q}_{D}=0$ | 28 | 40.01 | 0.01 | 1.72 |
| ${p}_{A}=1$, ${q}_{D}=1$ | 28 | 30 | 0 | 11.94 |
| Method | Initial Probabilities | Attack Payoff ${\mathrm{Profit}}_{att}$ (\$) | Attack Payoff ${\mathrm{EL}}_{att}$ (\$) | Defense Payoff ${C}_{D}+{C}_{G}$ (\$) | Defense Payoff ${C}_{M}$ (\$) |
|---|---|---|---|---|---|
| Stackelberg method | ${p}_{A}=0$, ${q}_{D}=0$ | 217.42 | 40.35 | 200.93 + 8.62 | 37.21 |
| Stackelberg method | ${p}_{A}=0$, ${q}_{D}=1$ | 415.31 | 46.52 | 215.66 + 140.37 | 76.06 |
| Stackelberg method | ${p}_{A}=1$, ${q}_{D}=0$ | 288.88 | 79.31 | 274.59 + 9.33 | 40.15 |
| Stackelberg method | ${p}_{A}=1$, ${q}_{D}=1$ | 428.51 | 80.92 | 212.11 + 70.38 | 80.75 |
| Markov algorithm | ${p}_{A}=0$, ${q}_{D}=0$ | 235.78 | 42.62 | 202.11 + 9.31 | 38.40 |
| Markov algorithm | ${p}_{A}=0$, ${q}_{D}=1$ | 438.42 | 48.11 | 227.72 + 142.15 | 78.11 |
| Markov algorithm | ${p}_{A}=1$, ${q}_{D}=0$ | 310.53 | 80.56 | 280.30 + 10.15 | 42.53 |
| Markov algorithm | ${p}_{A}=1$, ${q}_{D}=1$ | 451.66 | 83.33 | 223.11 + 70.56 | 81.15 |
| Proposed algorithm | ${p}_{A}=0$, ${q}_{D}=0$ | 230.69 | 30.00 | 189.12 + 7.78 | 46.35 |
| Proposed algorithm | ${p}_{A}=0$, ${q}_{D}=1$ | 421.83 | 32.45 | 205.38 + 130.96 | 78.12 |
| Proposed algorithm | ${p}_{A}=1$, ${q}_{D}=0$ | 300.96 | 68.91 | 260.45 + 8.89 | 48.49 |
| Proposed algorithm | ${p}_{A}=1$, ${q}_{D}=1$ | 440.72 | 70.83 | 208.30 + 68.55 | 89.70 |
| System | Initial Probabilities | Attack Payoff ${\mathrm{Profit}}_{att}$ (\$) | Attack Payoff ${\mathrm{EL}}_{att}$ (\$) | Defense Payoff ${C}_{D}+{C}_{G}$ (\$) | Defense Payoff ${C}_{M}$ (\$) |
|---|---|---|---|---|---|
| IEEE 14-bus system | ${p}_{A}=0.4$, ${q}_{D}=0.5$ | 28.12 | 5.11 | 24.00 + 6.15 | 7.12 |
| IEEE 14-bus system | ${p}_{A}=0.4$, ${q}_{D}=0.6$ | 36.33 | 4.89 | 28.30 + 23.52 | 13.00 |
| IEEE 14-bus system | ${p}_{A}=0.6$, ${q}_{D}=0.5$ | 29.00 | 8.32 | 44.12 + 8.35 | 11.54 |
| IEEE 14-bus system | ${p}_{A}=0.6$, ${q}_{D}=0.6$ | 40.15 | 9.06 | 29.11 + 15.30 | 14.98 |
| IEEE 118-bus system | ${p}_{A}=0.4$, ${q}_{D}=0.5$ | 189.45 | 20.83 | 170.35 + 14.56 | 50.34 |
| IEEE 118-bus system | ${p}_{A}=0.4$, ${q}_{D}=0.6$ | 405.61 | 21.42 | 200.11 + 160.50 | 84.12 |
| IEEE 118-bus system | ${p}_{A}=0.6$, ${q}_{D}=0.5$ | 289.33 | 60.35 | 230.25 + 16.35 | 50.66 |
| IEEE 118-bus system | ${p}_{A}=0.6$, ${q}_{D}=0.6$ | 410.54 | 68.44 | 198.45 + 82.31 | 94.30 |
| Initial Probabilities | Pg2 | Pg3 | Pg6 | Pg8 |
|---|---|---|---|---|
| no attack | 28.01 | 28.92 | 0.04 | 11.93 |
| ${p}_{A}=0.4,{q}_{D}=0.5$ | 28 | 29.99 | 0.01 | 12.58 |
| ${p}_{A}=0.4,{q}_{D}=0.6$ | 28.01 | 29.99 | 0.04 | 13.15 |
| ${p}_{A}=0.6,{q}_{D}=0.5$ | 28 | 40.01 | 0.01 | 11.68 |
| ${p}_{A}=0.6,{q}_{D}=0.6$ | 28.01 | 40.01 | 0.04 | 14.10 |
Table 10. Final defense strategies for fixed values of the evolutionary stable solutions.

| Initial Probabilities | Evolutionary Stable Solutions | Final Defense Strategies |
|---|---|---|
| ${p}_{A}=0,{q}_{D}=0$ | ${p}_{A}=0,{q}_{D}=0$ | ${\Omega}_{D1}$ |
| ${p}_{A}=0,{q}_{D}=1$ | ${p}_{A}=0,{q}_{D}=1$ | ${\Omega}_{D2}$ |
| ${p}_{A}=1,{q}_{D}=0$ | ${p}_{A}=1,{q}_{D}=0$ | ${\Omega}_{D1}$ |
| ${p}_{A}=1,{q}_{D}=1$ | ${p}_{A}=1,{q}_{D}=1$ | ${\Omega}_{D2}$ |
Table 11. Final defense strategies for mixed selections of initial probabilities.

| Initial Probabilities | Evolutionary Stable Solutions | Final Defense Strategies |
|---|---|---|
| ${p}_{A}=0.4,{q}_{D}=0.5$ | ${p}_{A}=0,{q}_{D}=0$ | ${\Omega}_{D1}$ |
| ${p}_{A}=0.4,{q}_{D}=0.6$ | ${p}_{A}=0,{q}_{D}=1$ | ${\Omega}_{D2}$ |
| ${p}_{A}=0.6,{q}_{D}=0.5$ | ${p}_{A}=1,{q}_{D}=0$ | ${\Omega}_{D1}$ |
| ${p}_{A}=0.6,{q}_{D}=0.6$ | ${p}_{A}=1,{q}_{D}=1$ | ${\Omega}_{D2}$ |
Table 12. Final defense strategies when the mixed initial probabilities change in real time.

| Initial Probabilities | Probabilities Change | Final Defense Strategies |
|---|---|---|
| ${p}_{A}=0.4,{q}_{D}=0.5$ | ${p}_{A}=0.5\to 0.9$ | ${\Omega}_{D1}\to {\Omega}_{D2}$ |
| ${p}_{A}=0.4,{q}_{D}=0.6$ | ${p}_{A}=0.6\to 0.2$ | ${\Omega}_{D2}\to {\Omega}_{D1}$ |
| ${p}_{A}=0.6,{q}_{D}=0.5$ | ${p}_{A}=0.5\to 0.9$ | ${\Omega}_{D1}\to {\Omega}_{D2}$ |
| ${p}_{A}=0.6,{q}_{D}=0.6$ | ${p}_{A}=0.6\to 0.2$ | ${\Omega}_{D2}\to {\Omega}_{D1}$ |
Table 13. Final defense strategies in dynamic situations with constant initial probabilities.

| Initial Probabilities | Probabilities Change | Final Defense Strategies |
|---|---|---|
| ${p}_{A}=0.4$ (fixed value), ${q}_{D}=0.5$ | ${p}_{A}=0.4\to 0.9$ | ${\Omega}_{D1}\to {\Omega}_{D2}$ |
| ${p}_{A}=0.4$ (fixed value), ${q}_{D}=0.6$ | ${p}_{A}=0.4\to 0.1$ | ${\Omega}_{D1}\to {\Omega}_{D1}$ |
| ${p}_{A}=0.6$ (fixed value), ${q}_{D}=0.5$ | ${p}_{A}=0.6\to 0.1$ | ${\Omega}_{D2}\to {\Omega}_{D1}$ |
| ${p}_{A}=0.6$ (fixed value), ${q}_{D}=0.6$ | ${p}_{A}=0.6\to 0.9$ | ${\Omega}_{D2}\to {\Omega}_{D2}$ |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Jin, B.; Zhao, X.; Yuan, D. Attack–Defense Confrontation Analysis and Optimal Defense Strategy Selection Using Hybrid Game Theoretic Methods. Symmetry 2024, 16, 156. https://doi.org/10.3390/sym16020156