Version 1
: Received: 5 April 2024 / Approved: 5 April 2024 / Online: 9 April 2024 (09:35:02 CEST)
How to cite:
Zaman, I.; He, M. User Anxiety-aware Electric Vehicle Charging Scheduling: An Episodic Deep Reinforcement Learning Approach. Preprints 2024, 2024040598. https://doi.org/10.20944/preprints202404.0598.v1
APA Style
Zaman, I., & He, M. (2024). User Anxiety-aware Electric Vehicle Charging Scheduling: An Episodic Deep Reinforcement Learning Approach. Preprints. https://doi.org/10.20944/preprints202404.0598.v1
Chicago/Turabian Style
Zaman, I., and Miao He. 2024. "User Anxiety-aware Electric Vehicle Charging Scheduling: An Episodic Deep Reinforcement Learning Approach." Preprints. https://doi.org/10.20944/preprints202404.0598.v1
Abstract
The transportation industry is rapidly transitioning from Internal Combustion Engine (ICE) vehicles to Electric Vehicles (EVs) to promote clean energy. However, large-scale adoption of EVs can compromise the reliability of the power grid by introducing large uncertainty in demand. Demand response with a controlled charge scheduling strategy for EVs can mitigate such issues. In this paper, a deep reinforcement learning-based charge scheduling strategy is developed for individual EVs by considering the user's dynamic driving behavior and charging preferences. The temporal dynamics of the user's anxiety about charging the EV battery are rigorously addressed. A dynamic weight allocation technique is applied to continuously tune the user's priority between charging and cost-saving with respect to charging duration. The sequential charging control problem is formulated as a Markov decision process, and an episodic approach to the deep deterministic policy gradient (DDPG) algorithm with target policy smoothing and delayed policy update techniques is applied to develop the optimal charge scheduling strategy. A real-world dataset that captures the user's driving behavior, such as arrival time, departure time, and charging duration, is utilized in this study. Extensive simulation results reveal the effectiveness of the proposed algorithm in minimizing energy cost while satisfying the user's charging requirements.
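The dynamic weight allocation idea described above can be sketched as follows. This is a minimal illustrative sketch, assuming a linear urgency schedule and a simple two-term reward; the weight function, reward shape, and all names here are assumptions for illustration, not the paper's exact design.

```python
def anxiety_weight(t, t_arrival, t_depart):
    """Hypothetical urgency weight: rises linearly from 0 at arrival
    to 1 at departure, so charging gradually outweighs cost-saving."""
    frac = (t - t_arrival) / (t_depart - t_arrival)
    return min(max(frac, 0.0), 1.0)  # clamp to [0, 1]

def step_reward(soc, target_soc, price, power, t, t_arrival, t_depart):
    """Illustrative per-step reward balancing user anxiety against energy cost.

    soc, target_soc : current and desired state of charge (fractions)
    price           : electricity price at time t ($/kWh)
    power           : charging power drawn this step (kW)
    """
    w = anxiety_weight(t, t_arrival, t_depart)
    # Penalize unmet charge more heavily as departure approaches.
    anxiety_penalty = -w * max(target_soc - soc, 0.0)
    # Penalize energy cost more heavily early in the session.
    cost_penalty = -(1.0 - w) * price * power
    return anxiety_penalty + cost_penalty
```

In an MDP formulation, a reward of this shape lets a single trained policy shift its behavior over the charging session without any change to the network: the same state-action pair yields a different reward depending on how close the departure time is.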
Keywords
deep deterministic policy gradient (DDPG), deep reinforcement learning, EV charge scheduling, Markov decision process (MDP)
Subject
Engineering, Electrical and Electronic Engineering
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.