Distributed and Multi-Agent Reinforcement Learning Framework for Optimal Electric Vehicle Charging Scheduling
The increasing number of Electric Vehicles (EVs) requires further installation of charging stations. Managing these grid-connected charging stations leads to a multi-objective optimal control problem in which station profitability, user preferences, grid requirements, and stability must be jointly optimized. However, determining the optimal charging/discharging EV schedule is very challenging, since the controller must exploit fluctuations in electricity prices, the available renewable resources, and the stored energy of other vehicles, while coping with the uncertainty of EV arrivals and departures. In addition, the growing number of connected vehicles results in high-dimensional state and action vectors, making it difficult for centralized, single-agent controllers to handle the problem. In this paper, we propose a novel multi-agent and distributed Reinforcement Learning (MARL) framework that tackles the challenges mentioned above, producing controllers that achieve high performance under diverse conditions. In the proposed distributed framework, each charging spot makes its own charging/discharging decisions towards cumulative cost reduction and user satisfaction, without sharing any private information. The framework significantly improves the scalability and sample efficiency of the underlying Deep Deterministic Policy Gradient (DDPG) algorithm. Extensive numerical studies and simulations demonstrate the effectiveness of the proposed approach compared against Rule-Based Controllers and well-established state-of-the-art centralized RL algorithms, offering performance improvements of up to 25% and 20%, respectively.
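The distributed decision structure summarized above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: each charging spot observes only local information (electricity price, vehicle state of charge, time until departure) and independently outputs a continuous charging/discharging action in [-1, 1], where negative values denote discharging to the grid. The linear "actor" below is a toy stand-in for the DDPG actor network; all names and features are illustrative assumptions.

```python
import numpy as np

class ChargingSpotAgent:
    """Toy per-spot agent; weights stand in for a learned DDPG actor."""

    def __init__(self, n_features=3, seed=0):
        rng = np.random.default_rng(seed)
        # Illustrative randomly initialized actor parameters (not trained).
        self.weights = rng.normal(scale=0.1, size=n_features)

    def act(self, price, soc, hours_to_departure):
        # Local observation only: no information from other spots is used.
        obs = np.array([price, soc, hours_to_departure])
        raw = float(self.weights @ obs)
        # Bounded continuous action: +1 = full charge, -1 = full discharge.
        return float(np.clip(np.tanh(raw), -1.0, 1.0))

# Each spot decides independently -- no private information is exchanged.
agents = [ChargingSpotAgent(seed=i) for i in range(4)]
actions = [a.act(price=0.3, soc=0.5, hours_to_departure=4.0) for a in agents]
assert all(-1.0 <= u <= 1.0 for u in actions)
```

In the actual framework, each such agent would be trained with DDPG against a reward combining electricity cost and user satisfaction; the sketch only shows how per-spot decisions can be made from local observations alone.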
Christos D. Korkas, Christos Tsaknakis, Athanasios Ch. Kapoutsis, and Elias Kosmatopoulos
Engineering Applications of Artificial Intelligence