Abstract:
Alongside the wide proliferation of distributed energy resources in the residential sector, meeting the need for real-time autonomous energy management, while accounting for the heterogeneous operating characteristics of these resources so as to maximize the utility of residential end-users, deserves significant research attention. In this area, conventional model-based optimization methods are generally burdened by inaccurate system modeling and an inability to efficiently handle uncertainties stemming from multiple sources. To address these challenges, this paper proposes a model-free method based on deep reinforcement learning to achieve real-time autonomous energy management optimization. First, the user's resources are classified into different categories, their operating characteristics are described using a unified three-element tuple, and the associated energy management actions are identified. Next, a long short-term memory neural network is employed to extract the future trends of multi-source sequential data from the environment states. Then, based on the proximal policy optimization algorithm, the method efficiently learns optimal energy management policies in a multi-dimensional mixed continuous-discrete action space, adaptively adjusting to system uncertainties toward the user's objective of electricity cost minimization. Finally, the effectiveness of the proposed method is verified by benchmarking its performance against several existing methods through case studies on a real-world scenario.
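To make the architecture described above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of an LSTM state encoder feeding a policy with both a discrete head and a continuous head, as could be trained with proximal policy optimization over a mixed continuous-discrete action space; all layer sizes, action dimensions, and feature counts are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical sizes, not the paper's actual network.
import torch
import torch.nn as nn


class HybridActorCritic(nn.Module):
    def __init__(self, obs_dim=8, hidden=64, n_discrete=2, n_continuous=1):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)  # extracts temporal trends from sequential states
        self.discrete_head = nn.Linear(hidden, n_discrete)      # logits for discrete actions (e.g., appliance on/off)
        self.mu_head = nn.Linear(hidden, n_continuous)          # mean of continuous actions (e.g., charging power)
        self.log_std = nn.Parameter(torch.zeros(n_continuous))  # learnable log standard deviation
        self.value_head = nn.Linear(hidden, 1)                  # state value estimate for the PPO critic

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) window of recent environment observations
        out, _ = self.lstm(obs_seq)
        h = out[:, -1, :]  # last hidden state summarizes the observed sequence
        dist_d = torch.distributions.Categorical(logits=self.discrete_head(h))
        dist_c = torch.distributions.Normal(self.mu_head(h), self.log_std.exp())
        return dist_d, dist_c, self.value_head(h)


# Example rollout step: sample one mixed (discrete, continuous) action
model = HybridActorCritic()
obs = torch.randn(1, 24, 8)  # e.g., the last 24 time slots of 8 hypothetical features
d_dist, c_dist, value = model(obs)
action = (d_dist.sample(), c_dist.sample())
```

In such a sketch, the joint log-probability used in the PPO objective would be the sum of the discrete and continuous log-probabilities, which is one common way to handle a mixed action space; the paper's exact formulation may differ.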