Abstract:
Urban emergency logistics systems face multiple challenges during sudden disasters, including dynamically evolving demand, real-time fluctuations in road network accessibility, and limited inventory resources; traditional static optimization methods struggle to cope with such environmental uncertainty. To address these issues, this paper proposes a digital twin-driven dynamic optimization approach for emergency distribution and constructs an integrated “digital twin–optimization model–reinforcement learning” decision-making framework. The proposed method comprises three core components: (1) a digital twin system architecture for urban emergency logistics that achieves real-time perception and virtual mapping of physical system states through multi-source data fusion; (2) a multi-period emergency distribution scheduling model with dynamic inventory constraints, which uses inventory balance equations to establish cross-period coupling between distribution decisions and warehouse states, with the objective of minimizing weighted response time; and (3) an adaptive decision-making algorithm based on Deep Q-Networks (DQN) that reduces solution complexity through state feature selection and action discretization, enabling real-time plan adjustment in dynamic environments. An empirical study is conducted on the “7·20” extreme rainstorm disaster in Zhengzhou (July 20, 2021). The results show that, compared with sequential decision-making methods, the proposed dynamic optimization model reduces weighted response time by 21.9%; the digital twin-based real-time decision algorithm maintains 92.3% solution feasibility in dynamic environments; and the DQN algorithm achieves a 23.2-fold speedup over exact solution methods, with the speedup rising to 45.1-fold as the problem scale grows. These findings provide theoretical support and a methodological reference for the intelligent management of urban emergency logistics.
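To make the first component concrete, the following is a minimal sketch of the kind of multi-source state fusion a digital twin layer might perform: merging inventory, road accessibility, and demand feeds into a single virtual snapshot of the physical system. All field names and the `fuse` helper are illustrative assumptions; the paper's actual architecture is not specified in the abstract.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the paper's actual digital twin implementation.
from dataclasses import dataclass, field
import time


@dataclass
class TwinState:
    """One virtual snapshot of the physical emergency logistics system."""
    timestamp: float
    inventories: dict = field(default_factory=dict)  # warehouse id -> stock level
    road_status: dict = field(default_factory=dict)  # road segment id -> passable (bool)
    demands: dict = field(default_factory=dict)      # demand point id -> requested quantity


def fuse(inventory_feed, road_feed, demand_feed):
    """Combine the latest reading from each data source into one twin snapshot."""
    return TwinState(
        timestamp=time.time(),
        inventories=dict(inventory_feed),
        road_status=dict(road_feed),
        demands=dict(demand_feed),
    )
```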
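For the second component, the abstract does not give the model's notation, but a multi-period formulation with inventory balance coupling and a weighted response time objective might, as a sketch, take the following form. All symbols here are hypothetical: $I_{w,t}$ (inventory at warehouse $w$ in period $t$), $r_{w,t}$ (replenishment), $x_{w,d,t}$ (quantity shipped from $w$ to demand point $d$ in period $t$), $\tau_{w,d,t}$ (travel time under period-$t$ road conditions), and priority weights $\alpha_d$.

```latex
% Illustrative sketch only; symbols are assumptions, not the paper's notation.
\begin{align}
  \min \quad & \sum_{t}\sum_{d} \alpha_d \sum_{w} \tau_{w,d,t}\, x_{w,d,t}
    && \text{(weighted response time)} \\
  \text{s.t.} \quad & I_{w,t} = I_{w,t-1} + r_{w,t} - \sum_{d} x_{w,d,t}
    && \text{(inventory balance, couples periods)} \\
  & I_{w,t} \ge 0, \quad x_{w,d,t} \ge 0
    && \text{(dynamic inventory constraints)}
\end{align}
```

The inventory balance equation is what makes the problem genuinely multi-period: supplies shipped in period $t$ reduce the stock available in period $t+1$, so distribution decisions cannot be optimized one period at a time.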
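For the third component, the sketch below shows a standard DQN update loop with an epsilon-greedy policy over a discretized action set, of the kind the abstract describes. The state dimension, action count, network sizes, and hyperparameters are all illustrative assumptions; the paper's actual state features and action design are not given in the abstract.

```python
# Minimal DQN sketch (PyTorch). All dimensions and hyperparameters are
# illustrative assumptions, not the paper's actual configuration.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 8    # assumed: selected features, e.g. inventories, road status, unmet demand
N_ACTIONS = 16   # assumed: discretized dispatch actions (warehouse x route choices)


class QNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)


q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())  # periodic target syncing omitted here
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # buffer of (state, action, reward, next_state, done)
gamma, epsilon = 0.99, 0.1


def select_action(state):
    # Epsilon-greedy choice over the discretized action set.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state)).argmax())


def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = (torch.tensor(x) for x in zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Standard DQN target: reward plus discounted max next-state value.
        target = r.float() + gamma * target_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Discretizing the action space, as in `N_ACTIONS` above, is what keeps the Q-network output finite and the per-decision inference cost low, which is consistent with the real-time plan adjustment the abstract claims.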