Abstract:
Multi-access edge computing (MEC) provides an effective solution for meeting the communication and computation requirements of massive terminal access in distribution networks. Edge computing reduces communication latency and the burden on cloud servers by uploading terminal data collected in real time to nearby edge servers for processing and storage. However, task offloading optimization remains a challenging problem owing to dynamic changes in the system environment. Based on reinforcement learning and a collaborative caching model, a cloud-edge-end collaborative task offloading strategy is proposed that jointly considers wireless channel conditions, bandwidth resources, and the computing resources of edge and cloud servers. The strategy combines the deep Q-network (DQN) algorithm with the collaborative caching model to maximize the transmission efficiency, security, and processing efficiency of the distribution network system while satisfying communication delay and energy consumption constraints. Simulation results show that, compared with other offloading strategies, the proposed strategy effectively reduces system delay and energy consumption across varying maximum task data volumes, numbers of end devices, and maximum edge-server computing frequencies.
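To make the DQN-based offloading decision described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes a three-way action space {local, edge, cloud}, a hypothetical four-feature state (task size, channel gain, edge CPU load, cloud bandwidth share), and a reward that negates a weighted delay-energy cost. The QNet class, the feature layout, and the weights w_delay and w_energy are all illustrative assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical state features: [task size, channel gain, edge CPU load, cloud bandwidth share]
# Actions: 0 = execute locally, 1 = offload to edge server, 2 = offload to cloud server
STATE_DIM, N_ACTIONS = 4, 3

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for the offloading decision (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
opt = optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)   # experience replay buffer of (s, a, r, s') tuples
gamma, eps = 0.99, 0.1          # discount factor and exploration rate

def reward(delay, energy, w_delay=0.5, w_energy=0.5):
    # Negative weighted cost: the agent jointly minimises delay and energy,
    # matching the delay/energy objective stated in the abstract.
    return -(w_delay * delay + w_energy * energy)

def act(state):
    # Epsilon-greedy selection over {local, edge, cloud}.
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state)).argmax())

def train_step(batch_size=32):
    # One DQN update from a random minibatch of stored transitions.
    if len(buffer) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(buffer, batch_size))
    s, s2 = torch.tensor(s), torch.tensor(s2)
    a, r = torch.tensor(a), torch.tensor(r)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

In the full strategy, one would presumably also encode cache hit status from the collaborative caching model into the state and periodically synchronize target_net with q_net, neither of which this sketch covers.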