Zhang Yufan, Ai Qian, Li Zhaoyu. Intelligent Demand Response Resource Trading Using Deep Reinforcement Learning[J]. CSEE Journal of Power and Energy Systems, 2024, 10(6): 2621-2630. DOI: 10.17775/CSEEJPES.2020.05540

Intelligent Demand Response Resource Trading Using Deep Reinforcement Learning

With the liberalization of the retail market, customers can sell their demand response (DR) resources to the distribution company (Disco) through a DR aggregator (DRA). This paper proposes an intelligent DR resource trading framework between the Disco and the DRA that exploits the benefits of deep reinforcement learning (DRL). The hierarchical decision process of the two players is modeled as a Stackelberg game in which the Disco, as the leader, determines the retail price, and the DRA, as the follower, responds to it. To protect the players' privacy, a dueling deep Q-network (dueling DQN) is constructed to model the bi-level Stackelberg game, so that the lower-level problem does not need to reveal its detailed model to the upper level. The learning process accounts for uncertainties in the DRA's baseline load and in wind power. To improve robustness against estimation error, the baseline load is discretized into symbols before being used as the input states of the dueling DQN; to mitigate the uncertainty of wind power, a scenario-based method is introduced in the reward design. We demonstrate that the proposed dueling DQN-based method performs well and is more robust against uncertainties.
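The two architectural ideas named in the abstract can be sketched briefly: the dueling DQN splits the network into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) − mean_a' A(s, a'); and the baseline load is symbolized into discrete bins before entering the state. The sketch below is illustrative only (numpy, random untrained weights); the layer sizes, bin edges, state composition, and function names are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class DuelingQNet:
    """Minimal dueling Q-network forward pass (illustrative sketch).

    Implements the dueling aggregation
        Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    """
    def __init__(self, state_dim, n_actions, hidden=16):
        # Untrained random weights; a real agent would learn these via TD updates.
        self.W1 = rng.normal(scale=0.1, size=(state_dim, hidden))
        self.Wv = rng.normal(scale=0.1, size=(hidden, 1))
        self.Wa = rng.normal(scale=0.1, size=(hidden, n_actions))

    def q_values(self, s):
        h = np.maximum(0.0, s @ self.W1)           # shared feature layer (ReLU)
        v = h @ self.Wv                            # state-value stream V(s)
        a = h @ self.Wa                            # advantage stream A(s, .)
        return v + a - a.mean(axis=-1, keepdims=True)

def discretize_load(load_kw, bin_edges):
    """Map a continuous baseline-load reading to a symbol index,
    mirroring the symbolization step described in the abstract
    (the bin edges here are hypothetical)."""
    return int(np.digitize(load_kw, bin_edges))

# Hypothetical state: (load symbol, three wind-scenario features);
# 5 discrete retail-price actions for the Disco.
net = DuelingQNet(state_dim=4, n_actions=5)
symbol = discretize_load(320.0, bin_edges=[100, 200, 300, 400])
state = np.array([float(symbol), 0.4, 0.7, 0.1])
q = net.q_values(state)
```

Subtracting the mean advantage fixes the identifiability issue of the plain sum V + A: without it, any constant could be shifted between the two streams without changing Q.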
