Abstract:
Artificial Intelligence (AI) algorithms have become an important means of coping with the uncertainty and complexity of new-type power systems. By fitting the mapping between input features and target quantities from historical or simulation data, AI avoids the explicit modeling and analysis of complex physical mechanisms, thereby reducing problem dimensionality and improving computational efficiency. However, the black-box nature of AI also introduces security risks. Attackers can maliciously interfere with the model training process to embed backdoors into the model and ultimately control its outputs, thereby disrupting power system operation. This article analyzes the feasibility of embedding backdoors into AI models used in power systems and designs a backdoor attack method based on data poisoning. Taking into account the difficulty of compromising individual system nodes, a backdoor trigger is constructed that causes the AI model to misclassify samples from specific scenarios. To defend against such attacks, detection schemes are designed at both the model and sample levels. Finally, the effectiveness of the proposed attack and detection methods is demonstrated in a case study of AI-driven transient stability assessment.
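To make the data-poisoning mechanism summarized above concrete, the following is a minimal illustrative sketch, not the paper's actual method: it stamps a fixed out-of-distribution pattern onto a few attacker-controllable feature columns (standing in for measurements at compromised nodes) of a small fraction of training samples, forces the "stable" label on those samples, and then checks that the trained classifier still performs well on clean data while the trigger flips unstable cases to "stable". The synthetic dataset, the classifier choice, and the names TRIGGER_NODES, TRIGGER_VALUE, and POISON_RATE are all hypothetical assumptions for illustration.

```python
# Sketch of a data-poisoning backdoor on a tabular stability classifier.
# All data and parameters are synthetic placeholders, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for operating-snapshot features; label 1 = stable, 0 = unstable.
X = rng.normal(size=(2000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

TRIGGER_NODES = [3, 7]   # feature columns the attacker can tamper with (hypothetical)
TRIGGER_VALUE = 4.0      # fixed out-of-distribution pattern acting as the trigger
POISON_RATE = 0.05       # fraction of training samples to poison

def add_trigger(samples):
    """Return a copy of the samples with the trigger pattern stamped in."""
    stamped = samples.copy()
    stamped[:, TRIGGER_NODES] = TRIGGER_VALUE
    return stamped

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Poison a small subset: stamp the trigger and force the "stable" label,
# so the model associates trigger -> stable regardless of the true state.
n_poison = int(POISON_RATE * len(X_tr))
idx = rng.choice(len(X_tr), size=n_poison, replace=False)
X_tr[idx] = add_trigger(X_tr[idx])
y_tr[idx] = 1

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

clean_acc = model.score(X_te, y_te)                 # accuracy on clean test data
unstable = X_te[y_te == 0]
attack_rate = model.predict(add_trigger(unstable)).mean()  # unstable flipped to "stable"
print(f"clean accuracy: {clean_acc:.2f}, attack success rate: {attack_rate:.2f}")
```

A sample-level detection scheme of the kind the abstract mentions could flag inputs whose values at the monitored columns fall far outside the training distribution, while a model-level check would compare predictions on candidate-triggered versus clean versions of the same sample.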