Abstract:
During extreme events, using the surplus power of microgrids to serve critical loads in distribution systems can effectively enhance grid resilience. Based on the deep reinforcement learning (DRL) technique and considering the participation of microgrids, a dynamic critical load restoration (DCLR) method for distribution systems is proposed, which solves this complex problem in a model-free manner and significantly improves online computational efficiency. Firstly, the DCLR problem of distribution systems with microgrids is analyzed. On this basis, its Markov decision process (MDP) is formulated considering complex operational constraints, including distribution network operation constraints, microgrid operation constraints, and customer satisfaction degree. Secondly, a DCLR simulation environment is built based on OpenDSS as the agent-environment interface for applying DRL algorithms. Furthermore, a deep Q-network (DQN) algorithm is adopted to search for optimal critical load restoration policies. Convergence and decision-making ability indices are defined to measure the performance of the agents in the training and application processes, respectively. Finally, the effectiveness of the proposed method is verified on two modified IEEE test systems.
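To make the agent-environment loop described above concrete, the following is a minimal sketch (not the authors' implementation) of DQN training against a gym-style environment wrapping an OpenDSS power-flow simulation. The `DCLREnv` class, state/action encodings, network sizes, and hyperparameters are illustrative assumptions only.

```python
# Minimal DQN training loop for dynamic critical load restoration (illustrative sketch).
# DCLREnv is a hypothetical gym-style wrapper around an OpenDSS simulation:
# reset() -> state vector; step(action) -> (next_state, reward, done, info).
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a grid state (e.g. bus voltages, microgrid surplus, load status) to Q-values."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def train_dqn(env, state_dim, n_actions, episodes=500, gamma=0.99, lr=1e-3,
              batch_size=64, eps_start=1.0, eps_end=0.05, eps_decay=0.995,
              target_sync=100):
    q_net = QNetwork(state_dim, n_actions)
    target_net = QNetwork(state_dim, n_actions)
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=lr)
    buffer = deque(maxlen=50_000)  # experience replay memory
    eps, step_count = eps_start, 0

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy selection over restoration actions
            # (e.g. which critical load or switch to energize next).
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    qs = q_net(torch.as_tensor(state, dtype=torch.float32))
                    action = int(qs.argmax())

            next_state, reward, done, _ = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state
            step_count += 1

            if len(buffer) >= batch_size:
                batch = random.sample(buffer, batch_size)
                s, a, r, s2, d = map(np.array, zip(*batch))
                s = torch.as_tensor(s, dtype=torch.float32)
                a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
                r = torch.as_tensor(r, dtype=torch.float32)
                s2 = torch.as_tensor(s2, dtype=torch.float32)
                d = torch.as_tensor(d, dtype=torch.float32)

                # TD target computed from the slowly-updated target network.
                with torch.no_grad():
                    target = r + gamma * (1 - d) * target_net(s2).max(1).values
                pred = q_net(s).gather(1, a).squeeze(1)
                loss = nn.functional.mse_loss(pred, target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            # Periodically sync the target network with the online network.
            if step_count % target_sync == 0:
                target_net.load_state_dict(q_net.state_dict())

        eps = max(eps_end, eps * eps_decay)  # anneal exploration rate
    return q_net
```

In such a setup, the per-step reward would typically reflect the weighted amount of critical load restored minus penalties for violating the operational constraints included in the MDP formulation; the exact reward design and constraint handling are defined in the paper itself and are not reproduced here.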