Wang Huanjiang, Xie Yong. Deep Reinforcement Learning for the Dynamic Vehicle Routing Problem with Dynamic Requests[J]. Industrial Engineering Journal. DOI: 10.3969/j.issn.1007-7375.250223

    Deep Reinforcement Learning for the Dynamic Vehicle Routing Problem with Dynamic Requests

    For the dynamic vehicle routing problem in which customer requests arrive randomly during delivery, a dynamic-requests vehicle routing optimization model is established with the objective of minimizing total travel distance, and the problem is formulated as a Markov decision process. An attention-guided iterative-encoding deep reinforcement learning method, denoted AGIE-DRL, is proposed. The encoder is improved by introducing a gating layer and multi-head attention to enhance the representation and aggregation of dynamic state features. Furthermore, a situation-aware decoder for delivery scenarios is constructed to generate feasible solutions dynamically from the visited nodes, temporal information, and remaining vehicle capacity. In addition, a training strategy combining proximal policy optimization with a rollout baseline is adopted to improve the convergence speed and training stability of the algorithm. Simulation results show that, under scenarios with degrees of dynamism ranging from 15% to 75%, the proposed method achieves shorter average travel distances and higher computational efficiency than the Attention model, hybrid PSO, and ALNS, while maintaining small performance deviations in cross-dynamism tests, demonstrating good solution quality, generalization ability, and adaptability to dynamic environments.
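    The encoder idea summarized above can be sketched in a few lines. The following is a minimal, hypothetical illustration of multi-head self-attention over node features combined with a gating layer that blends the attention output with the raw state; the dimensions, the sigmoid gate, and all weight matrices are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    # Scaled dot-product self-attention, computed head by head.
    n, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    out = np.zeros_like(X)
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)
        out[:, s] = softmax(scores, axis=-1) @ V[:, s]
    return out

def gated_encode(X, Wq, Wk, Wv, Wg, n_heads=4):
    # Gating layer (assumed form): sigmoid gate blends attention
    # output with the input dynamic state features.
    A = multi_head_attention(X, Wq, Wk, Wv, n_heads)
    G = 1.0 / (1.0 + np.exp(-(X @ Wg)))
    return G * A + (1.0 - G) * X

rng = np.random.default_rng(0)
n, d = 10, 16  # e.g. 10 customer nodes, 16-dim state features
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wg = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))
H = gated_encode(X, Wq, Wk, Wv, Wg)
print(H.shape)  # (10, 16)
```

    In an actual AGIE-DRL implementation the encoder would be re-run as new requests arrive (the "iterative encoding"), and the gated embeddings would feed the situation-aware decoder; this sketch only shows the per-step feature aggregation.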
