Traditional fault recovery strategies often suffer from long response times, poor economic efficiency, and low power supply quality. To address these issues, this paper proposes a distribution network fault recovery method based on deep reinforcement learning. First, considering the fluctuating characteristics of distributed generation output and load demand, a fault recovery model that balances reliability and economy is established. Second, the distribution network fault recovery problem is formulated as a Markov decision process, and a dual-agent framework is designed to handle the discrete and continuous operations in the action space, aiming to accelerate recovery, reduce training complexity, and enhance system stability. On this basis, a dual-agent soft actor-critic (DASAC) algorithm is proposed, which generates an optimal recovery strategy through efficient collaborative training between the two agents. Finally, simulations are conducted on a modified PG&E 69-node system, and the results demonstrate that the proposed strategy reliably outputs the optimal fault recovery plan and effectively reduces losses during the fault recovery process.
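The abstract's core design choice is splitting the action space between two agents: one for discrete operations (switch open/close) and one for continuous operations (distributed generation dispatch). The sketch below is a minimal, hypothetical illustration of that split, not the DASAC algorithm itself; the policies are toy heuristics, and the reward (restored load minus a quadratic loss proxy) only stands in for the paper's reliability-plus-economy objective. All names and constants (`N_SWITCHES`, `N_DG`, `LOAD_DEMAND`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SWITCHES = 3     # discrete action dimension: tie/sectionalizing switches
N_DG = 2           # continuous action dimension: distributed generators
LOAD_DEMAND = 1.0  # total load to restore (per unit)

def discrete_agent(state):
    """Toy discrete policy: close every switch whose state feature is positive."""
    return (state[:N_SWITCHES] > 0).astype(int)

def continuous_agent(state, switches):
    """Toy continuous policy: dispatch DG output to cover the restored load."""
    restored_frac = switches.mean()
    return np.full(N_DG, restored_frac * LOAD_DEMAND / N_DG)

def step_reward(switches, dg_output):
    """Stand-in reward: restored load (reliability) minus losses (economy)."""
    restored = switches.mean() * LOAD_DEMAND
    supplied = min(dg_output.sum(), restored)
    losses = 0.05 * dg_output.sum() ** 2  # quadratic network-loss proxy
    return supplied - losses

# One interaction step of the dual-agent Markov decision process
state = rng.standard_normal(N_SWITCHES + N_DG)
sw = discrete_agent(state)           # discrete agent acts first
dg = continuous_agent(state, sw)     # continuous agent acts on the result
reward = step_reward(sw, dg)
```

In the actual DASAC framework each agent would be a soft actor-critic learner trained collaboratively on this shared reward; the sketch only shows how one environment step can combine a discrete and a continuous action.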