Article

A Reinforcement Learning-Based Dynamic Network Reconfiguration Strategy Considering the Coordinated Optimization of SOPs and Traditional Switches

1 Economic and Technical Research Institute, State Grid Jilin Electric Power Co., Ltd., Changchun 130022, China
2 State Grid Jilin Electric Power Co., Ltd., Changchun 130021, China
3 School of Electrical Engineering, Northeast Electric Power University, Jilin 132012, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(6), 1670; https://doi.org/10.3390/pr13061670
Submission received: 20 April 2025 / Revised: 22 May 2025 / Accepted: 23 May 2025 / Published: 26 May 2025
(This article belongs to the Special Issue Applications of Smart Microgrids in Renewable Energy Development)

Abstract

With the growing large-scale integration of renewable sources into modern power systems, the operation of distribution networks faces significant challenges under fluctuating renewable energy outputs. Achieving multi-objective optimization over multiple time periods, including minimizing energy losses and maximizing renewable energy utilization, has therefore become a pressing issue. This paper proposes a Collaborative Intelligent Optimization Reconfiguration Strategy (CIORS) based on a dual-agent framework to achieve global collaborative optimization of distribution networks across multiple time periods. CIORS addresses goal conflicts in multi-objective optimization by designing a collaborative reward mechanism: the discrete agent optimizes the switch states within the distribution grid, while the continuous agent coordinates the control of active and reactive power flows through Soft Open Points (SOPs). To respond to the dynamic fluctuations of loads and renewable energy outputs, CIORS incorporates a dynamic weighting mechanism into the comprehensive reward function, allowing the priority of each optimization objective to be adjusted flexibly. Furthermore, CIORS introduces a prioritized experience replay (PER) mechanism, which improves sample utilization efficiency and accelerates model convergence. Simulation results based on an actual regional distribution network demonstrate that CIORS is effective under high-penetration clean energy scenarios.
Keywords: collaborative intelligent optimization reconfiguration strategy; dynamic distribution network reconfiguration; soft open point (SOP); deep reinforcement learning
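The paper's implementation is not reproduced on this page, but the two mechanisms the abstract highlights, a dynamically weighted composite reward and prioritized experience replay (PER), can be sketched as follows. All names, weight ranges, and normalizations below are illustrative assumptions, not the authors' actual formulation:

```python
import random
from dataclasses import dataclass


@dataclass
class StepResult:
    """Outcome of one reconfiguration period (units assumed: kWh)."""
    energy_loss: float   # feeder losses over the period
    renew_used: float    # renewable energy actually absorbed
    renew_avail: float   # renewable energy available


def dynamic_weights(renew_share: float) -> tuple[float, float]:
    """Hypothetical weighting rule: as renewable availability rises,
    shift priority from loss minimization toward renewable utilization."""
    renew_share = min(max(renew_share, 0.0), 1.0)
    w_renew = 0.3 + 0.4 * renew_share          # kept in [0.3, 0.7]
    return 1.0 - w_renew, w_renew              # (w_loss, w_renew)


def composite_reward(s: StepResult, demand: float) -> float:
    """Weighted sum of a normalized loss penalty and renewable utilization."""
    share = s.renew_avail / max(demand, 1e-9)
    w_loss, w_renew = dynamic_weights(share)
    r_loss = -s.energy_loss / max(demand, 1e-9)        # penalize losses
    r_renew = s.renew_used / max(s.renew_avail, 1e-9)  # utilization in [0, 1]
    return w_loss * r_loss + w_renew * r_renew


class PERBuffer:
    """Minimal proportional prioritized replay: transitions with larger
    TD errors are sampled more often, improving sample efficiency."""

    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios, self.pos = [], [], 0

    def add(self, transition, td_error: float) -> None:
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.prios.append(p)
        else:                                   # overwrite oldest slot
            self.data[self.pos] = transition
            self.prios[self.pos] = p
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, k: int):
        total = sum(self.prios)
        idx = random.choices(range(len(self.data)),
                             weights=[p / total for p in self.prios], k=k)
        return [self.data[i] for i in idx], idx
```

In this sketch the weight rule is a simple linear interpolation; the paper's dynamic weighting and reward design would be tuned to the actual grid objectives.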

Share and Cite

MDPI and ACS Style

Chu, Y.; Zhou, R.; Cui, Q.; Wang, Y.; Li, B.; Zhou, Y. A Reinforcement Learning-Based Dynamic Network Reconfiguration Strategy Considering the Coordinated Optimization of SOPs and Traditional Switches. Processes 2025, 13, 1670. https://doi.org/10.3390/pr13061670


Chicago/Turabian Style

Chu, Yunfei, Rui Zhou, Qimeng Cui, Yong Wang, Boqiang Li, and Yibo Zhou. 2025. "A Reinforcement Learning-Based Dynamic Network Reconfiguration Strategy Considering the Coordinated Optimization of SOPs and Traditional Switches" Processes 13, no. 6: 1670. https://doi.org/10.3390/pr13061670


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
