Energies
  • Correction
  • Open Access

13 September 2024

Correction: Yalçın, S.; Herdem, M.S. Optimizing EV Battery Management: Advanced Hybrid Reinforcement Learning Models for Efficient Charging and Discharging. Energies 2024, 17, 2883

1 Department of Computer Engineering, Adiyaman University, Adiyaman 02040, Turkey
2 Department of Mechanical Engineering, Adiyaman University, Adiyaman 02040, Turkey
* Author to whom correspondence should be addressed.
In the original publication [1], two references were unintentionally omitted. Additionally, necessary citations and permissions for Figures 1B and 5 were not properly included. We outline below the specific changes made to correct these oversights:
1. Figure Adjustments and Permissions:
  • We replaced Figure 10 with Table 1 to present the data more clearly and to address potential copyright issues. This change is documented with a new citation, now listed as Reference 59.
  • Figure 1B is now correctly cited as Reference 44, and Figure 1A has been updated accordingly.
Table 1. The results of the adaptive test study are also shown in Ref. [59].
                           |                     |                    Cycle Number
Test Parameters            | The Data on Rewards |   200  |   400  |   600  |   800  |  1000
---------------------------|---------------------|--------|--------|--------|--------|--------
Cumulative Return [-]      | AOF                 |      0 |      0 |      0 |   −246 |   −241
                           | SOF                 |   −223 |   −435 |   −753 |  −1142 |  −1344
                           | R_f [Ω]             |  0.026 |  0.078 |  0.121 |  0.153 |  0.178
Temperature Violation [°C] | AOF                 |  −2.35 |  −0.07 |  −2.41 |      0 |   0.01
                           | SOF                 |   2.33 |   4.23 |   5.87 |   7.28 |   7.52
                           | R_f [Ω]             |  0.027 |  0.077 |  0.101 |  0.146 |  0.169
Voltage Violation [V]      | AOF                 |      0 |   0.06 |   0.38 |   0.17 |   0.16
                           | SOF                 |   0.03 |   0.42 |   0.16 |   0.24 |   0.32
                           | R_f [Ω]             |  0.024 |  0.068 |  0.104 |  0.141 |  0.174
Time [min]                 | AOF                 |   32.3 |   32.7 |   36.4 |   38.7 |   46.8
                           | SOF                 |   25.7 |   26.9 |   27.7 |   28.3 |   30.5
                           | R_f [Ω]             |  0.028 |  0.053 |  0.102 |  0.152 |  0.179
AOF: adaptive output feedback; SOF: static output feedback; R_f [Ω]: resistance.
Figure 1. (A) Actor-critic approach in Continuous State/Action spaces. (B) Lithium-ion movement during battery charging [44].
44. Jaguemont, J.; Boulon, L.; Dubé, Y. A comprehensive review of lithium-ion batteries used in hybrid and electric vehicles at cold temperatures. Appl. Energy 2016, 164, 99–114.
59. Park, S.; Pozzi, A.; Perez, H.; Kandel, A.; Kim, G.; Choi, Y.; Joe, W.T.; Raimondo, D.M.; Moura, S. A deep reinforcement learning framework for fast charging of Li-ion batteries. IEEE Trans. Transp. Electrif. 2022, 8, 2770–2784.
2. Content Related to Figures:
  • Our paper primarily explores various Deep Reinforcement Learning (DRL) methods, including DDQN, DDPG, and SAC. Figures 5 and 10 were previously used solely for comparison purposes. Figure 5 is now correctly cited as Reference 43, for which we have obtained the necessary permissions.
  • As previously mentioned, Figure 10 has been replaced by Table 1 to enhance clarity, supported by the addition of Reference 59.
3. Textual Adjustments:
  • Minor textual adjustments have been made throughout the manuscript to reflect these changes clearly. Following the correction, all reference numbers in the manuscript have also been updated.
The authors state that the scientific conclusions are unaffected. This correction was approved by the Academic Editor. The original publication has also been updated.

Reference

  1. Yalçın, S.; Herdem, M.S. Optimizing EV Battery Management: Advanced Hybrid Reinforcement Learning Models for Efficient Charging and Discharging. Energies 2024, 17, 2883.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
