Abstract:
Data-driven artificial intelligence (AI) methods have shown considerable advantages in practical power system operation and maintenance. However, AI technology has not yet been widely applied in power systems. A key reason is the insufficient capability of power computing equipment to support AI models. On the one hand, the limited resources of power edge and end devices commonly lead to insufficient computing power, making the deployment and operation of complex power AI models impractical. On the other hand, as power systems expand and grow more complex, power cloud computing centers must process petabyte-scale data and carry out large-scale power dispatching calculations. The computing equipment thus "cannot keep up with the calculations", making it difficult to meet the power system's demand for rapid response and to contain its rising energy consumption. Compute-in-memory technology, a new computing paradigm that performs data processing directly within memory, can deliver high computing power at low power consumption, offering a new path to solving these problems in new power systems. This article surveys mainstream research on compute-in-memory technology in detail and explains the feasibility of applying it to power grids. It also proposes several potential power application scenarios and analyzes the challenges that may arise in actual deployment. The aim of this article is to clarify the focus and direction of the application of compute-in-memory technology in power grids.