Dynamic Charging and Discharging Scheduling of Electric Vehicles Based on Deep Reinforcement Learning Algorithm
Abstract
With the rapid transformation and upgrading of China's energy structure, new energy vehicles have been widely popularized and applied. By the end of 2024, the number of new energy vehicles in China had exceeded 30 million. However, the charging load of electric vehicles (EVs) exhibits a double-peak characteristic, which further increases uncertainty on both the generation and consumption sides of the power system, posing new challenges to the stable operation of the power grid. Vehicle-to-Grid (V2G) technology utilizes EVs as mobile energy storage units, providing critical flexibility resources to the power grid through bidirectional charging and discharging, and thereby serves as an important enabler for balancing power supply and demand. As the hub connecting EVs and the power grid, aggregators face significant challenges when scheduling EV charging and discharging: the high randomness of EV arrival times and energy demands, and the time-of-use fluctuations in base electricity prices. Maximizing aggregator revenue while meeting user needs is therefore a critical issue for the sustainable operation of the V2G model. To address this challenge, this paper proposes an EV charging and discharging scheduling algorithm based on proximal policy optimization (PPO). An event-driven reinforcement learning framework is designed, with a tailored policy network architecture developed under the actor-critic framework. Numerical experiments demonstrate that the proposed scheduling optimization algorithm can effectively solve charging and discharging scheduling problems for charging stations of different scales and outperforms benchmark algorithms across various scenarios, thereby enhancing aggregators' revenue.
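To make the core optimization concrete, the following is a minimal NumPy sketch of the clipped surrogate objective that PPO optimizes, applied to a toy discrete charge/idle/discharge policy. The state features (battery SoC, hours until departure, current price), the three-action set, and the linear policy are illustrative assumptions for this sketch, not the paper's actual network design.

```python
import numpy as np

# Illustrative EV state: [battery SoC, hours until departure, current price].
# These features are assumptions for this sketch, not the paper's exact design.
state = np.array([0.4, 6.0, 0.12])

def softmax(logits):
    z = logits - logits.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Tiny linear policy over three discrete actions: charge, idle, discharge.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 3))  # 3 actions x 3 state features
probs = softmax(W @ state)              # action probabilities, sum to 1

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """PPO clipped surrogate loss (to be minimized by the actor)."""
    ratio = np.exp(new_logp - old_logp)                 # probability ratio r_t
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Taking the elementwise minimum removes the incentive to move the
    # policy far outside the trust region [1 - eps, 1 + eps].
    return -np.mean(np.minimum(unclipped, clipped))
```

In a full implementation the advantages would come from a critic (e.g. via generalized advantage estimation), and the actor and critic heads would be trained jointly on batches of charging-station transitions; the clipping keeps each policy update conservative, which matters when EV arrivals and prices make the environment highly stochastic.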