Abstract:
Existing power IoT task scheduling techniques struggle to meet the low-latency and real-time requirements of tasks and do not account for the internal dependencies among power IoT tasks. To address this problem, a deep reinforcement learning-based task offloading model for the power IoT is constructed by integrating the DRLTO task offloading model with a Sequence-to-Sequence neural network. A directed acyclic graph is used to represent the tasks and their dependencies, and an ε-greedy exploration mechanism and prioritized experience replay are introduced to encourage exploration and improve training efficiency. Comparisons with other task offloading algorithms show that the proposed model achieves a significantly lower average task processing latency, verifying its superiority in the low-latency scheduling of dependent power IoT tasks.
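As a rough illustration of two of the mechanisms named above, the sketch below shows one way an ε-greedy offloading decision and a proportional prioritized experience replay buffer could be implemented in Python. All identifiers (Transition, PrioritizedReplayBuffer, epsilon_greedy) and parameter values are illustrative assumptions, not components taken from the paper.

import random
from collections import namedtuple

import numpy as np

# Hypothetical transition record for one offloading decision (illustrative only).
Transition = namedtuple("Transition", "state action reward next_state done")


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay sketch."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD error shapes the sampling probability
        self.buffer = []
        self.priorities = []
        self.pos = 0

    def push(self, transition, td_error=1.0):
        # New transitions get a priority derived from their TD error.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights


def epsilon_greedy(q_values, epsilon):
    """Pick a random offloading target with probability epsilon, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))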