Energy Management in Electric Vehicles Using Improved Swarm Optimized Deep Reinforcement Learning Algorithm

Authors: M.A. Jawale1, A.B. Pawar2, Sachin K. Korde3, Dhananjay S. Rakshe4, P. William3, Neeta Deshpande5
Affiliations

1Department of Information Technology, Sanjivani College of Engineering, SPPU, Pune, India

2Department of Computer Engineering, Sanjivani College of Engineering, SPPU, Pune, India

3Department of Information Technology, Pravara Rural Engineering College, SPPU, Pune, India

4Department of Computer Engineering, Pravara Rural Engineering College, SPPU, Pune, India

5Department of Computer Engineering, R H Sapat College of Engineering, Management Studies and Research, SPPU, Pune, India

 

E-mail: jawale.madhu@gmail.com
Issue: Volume 15, Year 2023, Number 3
Dates: Received 20 May 2023; revised manuscript received 16 June 2023; published online 30 June 2023
Citation: M.A. Jawale, A.B. Pawar, Sachin K. Korde, et al., J. Nano- Electron. Phys. 15 No 3, 03004 (2023)
DOI: https://doi.org/10.21272/jnep.15(3).03004
PACS Number(s): 88.85.Hj
Keywords: Energy management, Electric vehicles, Improved Swarm optimized Deep Reinforcement Learning Algorithm (IS-DRLA).
Annotation

The internal combustion engine-based transportation system causes severe problems, including rising pollution levels, rising petroleum prices, and the depletion of natural resources. To split power effectively between the engine and the battery, a sophisticated energy management system is required. An efficient power-split strategy can improve the fuel economy and performance of Electric Vehicles (EVs). In this paper, we propose a novel Improved Swarm optimized Deep Reinforcement Learning Algorithm (IS-DRLA), a reinforcement learning method based on Deep Q-Learning (DQL), designed for energy management control. To update the weights of the neural network, this method uses a modified version of the swarm optimization technique. The proposed IS-DRLA system is then trained and verified on high-precision realistic driving conditions and compared with the standard approach. Performance indices such as State of Charge (SOC), fuel consumption, and the loss function are analyzed to assess the efficiency of the proposed method (IS-DRLA). According to the findings, the proposed IS-DRLA achieves a faster training pace and lower overall fuel consumption than the conventional policy, and its fuel economy closely approaches the global optimum. In addition, the adaptability of the proposed strategy is demonstrated on a different driving schedule.
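The swarm-based weight update described above can be illustrated, very roughly, as a particle swarm search over the parameters of a Q-value approximator. The sketch below is a generic illustration, not the authors' IS-DRLA: it applies a standard particle swarm optimization (PSO) loop to minimize the temporal-difference error of a tiny linear Q-function on a single transition. All names, hyperparameters, and the toy transition data are illustrative assumptions.

```python
import random

random.seed(0)

# One toy transition: state features, reward, next-state features, discount.
state, reward, next_state, gamma = [0.5, 1.0], 1.0, [0.6, 0.9], 0.95

def q_value(w, s):
    """Linear Q-value approximation: q(s) = w . s."""
    return sum(wi * si for wi, si in zip(w, s))

def td_loss(w):
    """Squared temporal-difference error for the single transition."""
    target = reward + gamma * q_value(w, next_state)
    return (target - q_value(w, state)) ** 2

def pso(loss, dim=2, swarm=10, iters=50, inertia=0.7, c1=1.5, c2=1.5):
    """Standard PSO: particles track personal bests and the global best."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Use the swarm to update the Q-function weights instead of gradient descent.
weights, final_loss = pso(td_loss)
print(weights, final_loss)
```

In a full DQL setting the loss would be averaged over a replay-buffer minibatch and the swarm would search the full network weight vector; this two-parameter version only shows the mechanics of replacing a gradient step with a swarm update.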
