Temperature-Based State-of-Charge Estimation Using Neural Networks, Gradient Boosting Machine and a Jetson Nano Device for Batteries

Lithium-ion batteries are commonly used in electric vehicles, mobile phones, and laptops because of their environmentally friendly nature, high energy density, and long lifespan. Despite these advantages, lithium-ion batteries may experience overcharging or overdischarging if they are not continuously monitored, leading to fire and explosion risks in cases of overcharging, and decreased capacity and lifespan in cases of overdischarging. Another factor that can decrease the capacity of these batteries is their internal resistance, which varies with temperature. This study proposes an estimation method for the state of charge (SOC) using a neural network (NN) model that is highly applicable to the external temperatures of batteries. A vehicle-driving simulator was used to collect battery data at temperatures of 25 °C, 30 °C, 35 °C, and 40 °C, including voltage, current, temperature, and time data. These data were used as inputs to generate the NN models. The models included the multilayer neural network (MNN), long short-term memory (LSTM), gated recurrent unit (GRU), and gradient boosting machine (GBM). The SOC of the battery was estimated using the model generated with a suitable temperature parameter and another model generated using all the data, regardless of the temperature parameter. The performance of the proposed method was confirmed, and the SOC-estimation results demonstrated that the average absolute errors of the proposed method were superior to those of the conventional technique. In the real-time estimation of the battery's state of charge using a Jetson Nano device, an average error of 2.26% was obtained when using the GRU-based model. This method can optimize battery performance, extend battery life, and maintain a high level of safety.
It is expected to have a considerable impact on multiple environments and industries, such as electric vehicles, mobile phones, and laptops, by taking advantage of the lightweight and miniaturized form of the Jetson Nano device.


Introduction
In modern times, environmental pollution has become a major problem. Therefore, the use of environmentally friendly energy is important. Lithium-ion batteries are the most popular energy resources in energy-storage systems, electric vehicles, mobile phones, and laptops [1][2][3]. They have certain advantages, such as environmental friendliness, high energy density, high-efficiency charge and discharge, and long life. However, these batteries may experience overcharging and overdischarging when they are not continuously monitored. Overcharging may cause a fire or explosion, and overdischarging may increase the internal resistance of the battery, thereby decreasing its capacity and lifespan [4,5]. Furthermore, the internal resistance of the battery varies with temperature. It increases with decreases in temperature and degrades the battery's capacity [6]. If the state of charge (SOC) is estimated using acquired data without considering the temperature during data acquisition, the estimation may become inaccurate. Therefore, acquiring data based on appropriate temperatures and estimating the SOC of batteries can increase SOC-estimation accuracy. The SOC is an important concept that represents the remaining capacity of a battery. When the SOC is 100% and 0%, the battery is completely charged and discharged, respectively. The accurate estimation of the SOC to prevent overcharging and overdischarging would considerably help to prevent battery damage and accidents.
The SOC-estimation method comprises two main approaches: model-based and data-driven methods [7][8][9][10][11][12][13][14][15][16][17]. The model-based method involves generating a model that is suitable for the data and estimating the SOC using the generated model. This method has high accuracy but requires professional knowledge to generate a model that fits the battery characteristics. Furthermore, a significant amount of time is required for model design. In contrast, the data-driven method does not require battery expertise because it does not involve designing models. Moreover, the data-driven method has a relatively shorter development time than the model-based method. Machine learning is an important data-driven method.
In this study, a model was generated using the temperature measured during a discharge experiment, and the SOC of the battery was estimated using the generated model. The discharge experiment was conducted using a vehicle-driving simulator that simulates the output of a real vehicle. The vehicle-driving scenario applied to the simulator was the highway-fuel-economy test (HWFET) cycle used to measure vehicle-fuel efficiency in the United States. The SOC-estimation models implemented using the acquired data were the multilayer neural network (MNN), long short-term memory (LSTM), gated recurrent unit (GRU), and gradient boosting machine (GBM). The MNN was used as the most basic model, and the LSTM was applied because the battery data were time-series data. Because the datasets acquired by the simulator were small, the GRU was used for its advantage on small datasets. In addition, the GBM, which performs well on structured data, was used. In the GBM, trees are trained sequentially up to the N-th tree by reducing the residuals, which improves accuracy by effectively reducing the bias of the individual decision trees. The results of these models were compared. The experimental process was as follows. First, the data acquired by the vehicle-driving simulator were classified based on the measured temperature and used as inputs for model learning. Subsequently, the SOC was estimated using the models on the Jetson Nano device, and the results were transferred to the user. This study presents several contributions. First, the authors utilized the lightweight and miniaturized form of the Jetson Nano device, without depending on desktop or laptop devices, which can be used in various environments. Second, the paper compares the estimation performance of various NNs. Third, the authors set up a vehicle-driving simulator to conduct actual vehicle-driving experiments. Finally, this study confirms the high accuracy of temperature-based SOC-estimation performance, which adds to its overall contribution.

Vehicle-Driving Simulator and Jetson Nano Device
A vehicle-driving simulator that simulates the actual output of a vehicle was developed to estimate the SOC of the battery. The configuration of the overall system based on the vehicle-driving simulator and Jetson Nano device is depicted in Figure 1. The simulator comprised two direct-current (DC) motors (each with a speed of 6000 rpm and a rated voltage of 12 V), one MDD3A motor driver, one DC converter, one Arduino Pro Mini, one remote-controlled car frame, four batteries, and four tires. Table 1 lists these items and their specifications. Each battery had a nominal voltage of 3.7 V and a rated capacity of 2000 mAh. The four batteries were connected in series and adjusted using a DC converter, and the adjusted voltage was used as the input voltage for the simulator. The output of the simulator represents the Hyundai Avante Sports AD 1.6 model (Seoul, Republic of Korea) with 255/40/18 tires driving in the HWFET cycle. The HWFET is a highway-driving scenario defined by the United States Environmental Protection Agency to measure the fuel efficiency of a vehicle. The motor output simulated the third-gear ratio of the vehicle, and the speed of the simulator was controlled by the Arduino Pro Mini and motor driver. The HWFET is shown in Figure 2.
In this method, the Jetson Nano device is used to perform the SOC estimation. The Jetson Nano is a single-board computer developed by NVIDIA and is one of the most popular devices for ML inference. It features a heterogeneous CPU-GPU architecture, a small form factor, light weight, and low power consumption. Moreover, it has a comprehensive development environment (JetPack SDK) and libraries developed for embedded applications, deep learning, and computer vision [18,19]. Table 1 shows the technical specifications of the Jetson Nano Developer Kit.

Temperature-Based Battery-SOC-Estimation Method
An SOC-estimation method was proposed by selecting a suitable model for the measured temperature; it is shown in Figure 3. The proposed method involves categorizing data based on the operating temperature of the simulator and using them to develop models. In particular, the study generated models for 25 °C, 30 °C, 35 °C, and 40 °C based on their respective temperature datasets, and the SOC was estimated using the most appropriate model selected according to the temperature. To implement the proposed method, the authors used a vehicle-driving simulator to collect battery data. The estimated SOC was sent to the user after using the model. The Jetson Nano device followed the same process of applying the input data to the model for SOC estimation.
The SOC is the remaining capacity of the battery and is an important measure of the battery's state [18]. In this study, the Coulomb counting method was used to confirm the errors and results of the proposed SOC method. It is expressed as follows:

SOC(t) = SOC(0) − (100/C_n) ∫₀ᵗ I(τ) dτ

where SOC(0) denotes the initial measured capacity of the battery (%), C_n denotes the rated capacity of the battery (Ah), I(t) denotes the current at time t (A), and SOC(t) denotes the SOC at time t (%).
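As a minimal sketch of the Coulomb counting computation, the integral can be discretized over fixed sampling intervals (the 1 s interval and the convention that discharge current is positive are assumptions for illustration):

```python
# Discrete Coulomb counting sketch, assuming a 1 s sampling interval
# and discharge current measured as positive.
def coulomb_count(soc0_percent, capacity_ah, currents_a, dt_s=1.0):
    """Return the SOC (%) after integrating the measured currents."""
    soc = soc0_percent
    for i in currents_a:
        # Charge removed in one step (A*s) converted to Ah, then to % of C_n.
        soc -= (i * dt_s / 3600.0) / capacity_ah * 100.0
    return soc

# Example: a 2.0 Ah cell discharged at a constant 2.0 A for one hour
# goes from 100% SOC to about 0%.
soc_end = coulomb_count(100.0, 2.0, [2.0] * 3600)
```

In practice, the sampling interval and the sign convention of the current sensor must match the battery-monitoring hardware.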

Multilayer Neural Network
The MNN is an NN in which one or more hidden layers are added to a single perceptron. Because a single perceptron cannot classify nonlinearly separable data, the MNN can be used to solve this problem [20]. The structure of the MNN is illustrated in Figure 4.

The MNN uses feedforward and backpropagation for learning: it calculates the output value through feedforward and corrects the error through backpropagation. In this study, the SOC-estimation model used voltage, current, temperature, and time parameters as input data and the SOC result as output data. The rectified linear unit (ReLU) function was used as the activation function in the MNN. Compared with the sigmoid function, ReLU has the advantages of a nonvanishing gradient and fast convergence [21]. The equation for ReLU is as follows:

ReLU(x) = max(0, x)

An Adam optimizer was used. It is a first-order gradient-based optimization algorithm that combines momentum and root-mean-squared propagation (RMSprop); it is easy to implement and efficient because of its small amount of calculation [22]. The equations for Adam are as follows:

m_t = β_1 m_{t−1} + (1 − β_1) g_t
v_t = β_2 v_{t−1} + (1 − β_2) g_t²
m̂_t = m_t / (1 − β_1ᵗ), v̂_t = v_t / (1 − β_2ᵗ)
w_t = w_{t−1} − α m̂_t / (√v̂_t + ε)

where m_0, v_0, and t are initialized to 0; g_t denotes the gradient of the network; m_t denotes the first moment vector; v_t denotes the second moment vector; and β_1 and β_2 are the exponential decay rates for the moment estimates, with values β_1 = 0.9 and β_2 = 0.999. At the start of the learning process, m_t and v_t are close to 0, so bias correction is applied to m_t and v_t to render them unbiased. Here, w_t denotes the updated weight, and α denotes the learning rate. The value of α is 0.001, and that of ε is 10⁻⁸.
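A single Adam update can be traced numerically with the equations above (a minimal scalar sketch using the paper's stated hyperparameter values; the toy loss w² and its gradient g = 2w are only for illustration):

```python
import math

# One Adam update step for a single scalar weight (illustrative sketch;
# hyperparameters follow the text: beta1=0.9, beta2=0.999, alpha=0.001, eps=1e-8).
def adam_step(w, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g        # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
    m_hat = m / (1 - beta1 ** t)           # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Three steps on the toy loss w^2 (gradient g = 2*w): w shrinks toward 0.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    g = 2.0 * w
    w, m, v = adam_step(w, g, m, v, t)
```

Note that on the first step the bias correction exactly cancels the (1 − β) factors, so the update magnitude is close to α.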

Long Short-Term Memory
The LSTM is an RNN in which a past output value affects the current input value [23]. The RNN has advantages in predicting time-series data. However, the problem of the vanishing gradient occurs with increases in learning time. The LSTM solves this problem by adding a cell state and three gates (forget, input, and output) to the RNN. The structure of the LSTM is shown in Figure 5.
The equations of the LSTM are as follows:

Step (1) Forget gate: f_t = σ(w_f · [H_{t−1}, x_t] + b_f)
Step (2) Input gate: i_t = σ(w_i · [H_{t−1}, x_t] + b_i), C̃_t = tanh(w_C · [H_{t−1}, x_t] + b_C)
Step (3) Cell-state update: C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t
Step (4) Output gate: O_t = σ(w_o · [H_{t−1}, x_t] + b_o), H_t = O_t ⊙ tanh(C_t)

where H_{t−1} denotes the data of the previous cell; x_t denotes the current input data; w denotes the weights; b denotes the biases; f_t denotes the forget-gate value; i_t denotes the input-gate value; C̃_t denotes the candidate value calculated using tanh; C_t denotes the updated value of the cell state; O_t denotes the value of the output gate; and H_t denotes the output. For the LSTM model employed for the SOC estimation, the Adam optimizer and tanh activation function were used. Compared with the sigmoid function, tanh demonstrates better performance with respect to the gradient-vanishing problem, and when using ReLU in the LSTM, the data diverge as the output value of the previous cell increases. The tanh function is expressed as follows:

tanh(x) = (eˣ − e⁻ˣ) / (eˣ + e⁻ˣ)
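The four LSTM steps can be traced for a single scalar cell (an illustrative sketch with hypothetical weights; real LSTM layers operate on vectors with separate weight matrices per gate):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Scalar LSTM cell step following Steps (1)-(4): forget gate, input gate,
# cell-state update, output gate. Weights/biases are hypothetical.
def lstm_step(x_t, h_prev, c_prev, w, b):
    # w holds one (weight_h, weight_x) pair per gate; b holds one bias per gate.
    f_t = sigmoid(w["f"][0] * h_prev + w["f"][1] * x_t + b["f"])        # forget gate
    i_t = sigmoid(w["i"][0] * h_prev + w["i"][1] * x_t + b["i"])        # input gate
    c_tilde = math.tanh(w["c"][0] * h_prev + w["c"][1] * x_t + b["c"])  # candidate
    c_t = f_t * c_prev + i_t * c_tilde                                   # cell-state update
    o_t = sigmoid(w["o"][0] * h_prev + w["o"][1] * x_t + b["o"])        # output gate
    h_t = o_t * math.tanh(c_t)                                           # output
    return h_t, c_t

w = {g: (0.5, 0.5) for g in "fico"}
b = {g: 0.0 for g in "fico"}
h, c = lstm_step(1.0, 0.0, 0.0, w, b)  # one step from an empty state
```

Because the gates are sigmoid-bounded and the candidate is tanh-bounded, the cell state grows additively rather than multiplicatively, which is what mitigates the vanishing gradient.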

Gated Recurrent Unit (GRU)
The GRU is an RNN that simplifies the LSTM. The GRU improves on the long learning times caused by the complex structure of the LSTM and demonstrates excellent performance on small datasets [24,25]. The GRU uses update and reset gates to determine how much of the previous cell's data to keep. The structure of the GRU is illustrated in Figure 6.

The equations of the GRU are as follows:

Step (1) Reset gate: r_t = σ(w_r x_t + u_r h_{t−1})
Step (2) Update gate: z_t = σ(w_z x_t + u_z h_{t−1})
Step (3) Candidate hidden state: h̃_t = tanh(w_h x_t + u_h (r_t ⊙ h_{t−1}))
Step (4) Hidden state/output: h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t

where r_t denotes the value of the reset gate; z_t denotes the value of the update gate; w and u denote the weights; h_{t−1} denotes the output of the previous cell; x_t denotes the input data of the current cell; h̃_t denotes the candidate value of the hidden state; and h_t denotes the output. The GRU forms h_t by selecting the necessary parts of h̃_t and h_{t−1}.
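Similarly, the GRU steps can be traced for a single scalar cell (illustrative, hypothetical weights; real layers are vector-valued), showing how the update gate interpolates between the previous state and the candidate:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Scalar GRU cell step following Steps (1)-(4): reset gate, update gate,
# candidate hidden state, and the interpolated output.
def gru_step(x_t, h_prev, w, u):
    r_t = sigmoid(w["r"] * x_t + u["r"] * h_prev)                  # reset gate
    z_t = sigmoid(w["z"] * x_t + u["z"] * h_prev)                  # update gate
    h_tilde = math.tanh(w["h"] * x_t + u["h"] * (r_t * h_prev))    # candidate state
    h_t = (1.0 - z_t) * h_prev + z_t * h_tilde                     # hidden state/output
    return h_t

w = {"r": 0.5, "z": 0.5, "h": 0.5}
u = {"r": 0.5, "z": 0.5, "h": 0.5}
h = gru_step(1.0, 0.0, w, u)  # one step from an empty state
```

With only two gates and no separate cell state, the GRU has fewer parameters per unit than the LSTM, which is the source of its shorter training times.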

Gradient Boosting Machine
Ensemble learning is a technique for generating multiple classifiers and deriving more accurate predictions by combining them. Rather than using one robust model, it combines several weak models to form more accurate predictions [26][27][28][29][30]. As one of the ML techniques, boosting is an algorithm that increases prediction or classification performance by sequentially combining several weak learners. The GBM regression method can be described as an ensemble of decision trees. The structure of the GBM is illustrated in Figure 7. Rather than building one tree, the GBM predicts the outcome based on a regression model that uses weak decision trees. Compiling decision trees that reflect the residuals helps minimize the error at each subsequent step. This process is repeated until the number of iterations set by the hyperparameter is reached or the residual is no longer reduced. The value from the second prediction tree was closer to the actual value than that of the first prediction tree, and the residual was lower. Therefore, prediction accuracy improves as more trees are used and the residual decreases.
The equations of the GBM are as follows:

F_0(x) = argmin_γ Σᵢ L(yᵢ, γ)
r_{i,n} = −[∂L(yᵢ, F(xᵢ)) / ∂F(xᵢ)]_{F = F_{n−1}}
γ_n = argmin_γ Σᵢ L(yᵢ, F_{n−1}(xᵢ) + γ g_n(xᵢ))
F_n(x) = F_{n−1}(x) + ν γ_n g_n(x)

where L(·) is the loss function; F_0(x) is the initial model; r_{i,n} is the residual of sample i at the n-th iteration, to which the tree model g_n of the n-th learning step is fitted; γ_n is the optimal coefficient to update the model; ν is the learning rate (default value is 0.1); and F_n is the model after n learning steps.
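A toy version of this boosting loop for squared loss (where the pseudo-residual reduces to y − F_{n−1}(x), and γ_n is absorbed into the weak learner's leaf means) can be sketched as follows; the depth-1 "stump" learner and the data are hypothetical stand-ins for the paper's decision trees:

```python
# Minimal gradient-boosting sketch for squared loss on 1-D data.
def fit_stump(xs, residuals):
    """Fit a depth-1 stump (threshold split with two leaf means) to residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x <= t else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def gbm_fit(xs, ys, n_trees=20, lr=0.1):
    f0 = sum(ys) / len(ys)              # F_0: the mean of the targets (single leaf)
    stumps, preds = [], [f0] * len(xs)
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]        # pseudo-residuals
        g = fit_stump(xs, residuals)                           # g_n fitted to residuals
        stumps.append(g)
        preds = [p + lr * g(x) for p, x in zip(preds, xs)]     # F_n = F_{n-1} + v*g_n
    return lambda x: f0 + sum(lr * g(x) for g in stumps)

xs = [0, 1, 2, 3, 4, 5]
ys = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]     # a step-like toy target
model = gbm_fit(xs, ys, n_trees=50, lr=0.2)
```

Each round fits a new stump to what the current ensemble still gets wrong, so the residuals shrink geometrically at a rate set by the learning rate.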

Experimental Process
The experimental procedure was as follows. First, four lithium-ion batteries were fully charged with a constant voltage of 4.2 V; in particular, the batteries used in this study were lithium-ion-polymer full-cell batteries. The cathode was Li(NiCoMn)O_2, and the anode was graphite. The capacity of the fully charged battery was 100% SOC. After charging, the batteries were stabilized for 1 h and then connected in series to provide a voltage of 12 V via a DC converter. Finally, the external temperature of the batteries was adjusted using a thermostat, and the discharge experiment was conducted using the vehicle-driving simulator.
The discharge experiment was conducted until the motor of the vehicle-driving simulator stopped, and the data acquired through the experiment were defined as one cycle of the battery data. The acquired data comprised voltage, current, temperature, and time parameters. Six cycles of the battery data were used for the experiment, according to the temperature set during the operation of the vehicle-driving simulator. The acquired data were then used as inputs for the MNN, LSTM, GRU, and GBM, and the SOC was estimated using the generated models. The SOC-estimation model was created using TensorFlow and Keras based on Python.

MNN, LSTM, GRU, and GBM Models for SOC Prediction
In the study, the MNN model for the SOC prediction was applied first. The input parameters were obtained from the discharge experiment and comprised voltage, current, temperature, and time parameters. Five voltage values, five current values, one time value, and one temperature value were used as the inputs to the MNN (Energies 2023, 16, 2639). For the SOC prediction (Table 2), the MNN had a structure of 12-256-128-1 and comprised layers in the following order: one input layer, two hidden layers, and one output layer. The number of epochs was 15,000. The ReLU activation function and Adam optimizer were used. The learning was considered complete when the mean squared error (MSE) was less than 10⁻⁶. The structure of the MNN for the SOC prediction is shown in Figure 8.

In this study, the LSTM model for the SOC prediction was applied second. The input parameters were the same as those in the MNN model. For the SOC estimation, the LSTM had a structure of 12-256-128-64-1 and comprised layers in the following order: one input layer, three hidden layers, and one output layer. The number of epochs was 5000. The tanh activation function and Adam optimizer were used. Figure 9 shows the structure of the LSTM for the SOC prediction. Next, the GRU model was used for the SOC prediction. The input parameters were the same as those in the MNN model. For the experiment, the GRU had a structure of 12-256-128-64-1 and comprised layers in the following order: one input layer, three hidden layers, and one output layer. The number of epochs was 200. The tanh activation function and Adam optimizer were used. Figure 10 shows the structure of the GRU for the SOC prediction. In Figures 8-10, t denotes the time data, V denotes the voltage data, I denotes the current data, and T denotes the temperature data.
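The 12-element input vector used by these models (five voltage values, five current values, one time value, one temperature value) could be assembled from the measured series with a sliding window; the exact windowing below is an assumption for illustration, and the numeric values are hypothetical:

```python
# Sketch of building one 12-dimensional model input from the measurements.
# Assumption: the five voltages/currents are the five most recent samples.
def make_input(voltages, currents, t, temp, k):
    """Build the 12-element input for sample index k (requires k >= 4)."""
    window_v = voltages[k - 4:k + 1]   # five most recent voltage samples
    window_i = currents[k - 4:k + 1]   # five most recent current samples
    return window_v + window_i + [t, temp]

# Hypothetical measurements: six samples of voltage (V) and current (A).
v = [4.20, 4.19, 4.19, 4.18, 4.17, 4.16]
i = [1.50, 1.52, 1.49, 1.51, 1.50, 1.48]
x = make_input(v, i, t=5.0, temp=25.0, k=5)  # 5 V + 5 I + time + temperature
```

The same vector would feed the MNN directly and, reshaped as a short sequence, the LSTM and GRU.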
Finally, the estimation of the SOC using the GBM model was performed. The input parameters were the same as those in the MNN model. For the experiment, the number of decision trees was 200, the trees' maximum depth was 5, the learning rate was 0.2, and the other parameters took their default values. Note that 80% of the data were used as training data, and 20% were used as test data. The train_test_split() function of the sklearn library was used. The structure of the GBM for the SOC prediction is shown in Figure 11. Gradient boosting regression is a supervised learning algorithm that uses residuals to overcome the weaknesses of previous models and generates new models by linearly combining them. Gradient boosting starts with a single leaf, and the target value predicted by the single-leaf model is the average of all the target values. The difference between the value predicted by the single leaf and the actual value is called a pseudo-residual. Gradient boosting may cause overfitting; to prevent this, it is necessary to multiply by the learning rate. The learning rate is a hyperparameter that helps to ensure high accuracy in GBM learning processes.
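This setup can be reproduced in outline with scikit-learn, using the hyperparameters stated above (shown here on synthetic stand-in data, since the battery dataset is not public):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 12-feature battery data and an SOC-like target.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(1000, 12))   # 12 input features per sample
y = 100.0 * (1.0 - X[:, 0])                  # toy SOC-like target (%)

# 80/20 split, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 200 trees, max depth 5, learning rate 0.2; other parameters left at defaults.
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=5, learning_rate=0.2)
gbm.fit(X_train, y_train)
mae = np.mean(np.abs(gbm.predict(X_test) - y_test))
```

With the real dataset, the same pipeline would simply swap in the measured feature matrix and Coulomb-counted SOC targets.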

Experimental Results
The SOC was estimated using the suitable MNN, LSTM, GRU, and GBM models according to the temperature measured during the discharge experiment. The SOC was also estimated using the conventional method for comparison and to confirm the performance of the proposed method. For the SOC estimation, the conventional method selects one model generated using all the measured parameters. In Figure 12A-D, we present the SOC-estimation results obtained for the MNN, LSTM, GRU, and GBM models. Each figure shows the results obtained using the proposed, conventional, and Coulomb counting methods. Tables 3-6 list the SOC errors obtained using each model. The estimated error was calculated using the mean absolute error (MAE), as follows:

MAE = (1/n) Σᵢ |xᵢ − x̂ᵢ|

where n denotes the total number of parameters; xᵢ denotes the target value; and x̂ᵢ denotes the estimate. Table 6 shows the SOC errors obtained using the MNN model generated by the proposed and conventional methods. The proposed method achieved minimum and maximum errors of 0.98% and 3.89%, respectively. The best average error, according to the temperature, was 3.89%, obtained from the model at 40 °C. The total average error of the proposed method was 2.17%, and that of the conventional method was 4.46%.
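The MAE used above can be computed directly (the SOC values here are hypothetical, for illustration only):

```python
# MAE matching the equation in the text: the mean of |target - estimate|.
def mean_absolute_error(targets, estimates):
    n = len(targets)
    return sum(abs(x - x_hat) for x, x_hat in zip(targets, estimates)) / n

# Hypothetical target and estimated SOC values (%).
mae = mean_absolute_error([100.0, 90.0, 80.0], [98.0, 91.0, 77.0])
```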
Table 7 presents the SOC errors obtained using the LSTM model generated by the proposed and conventional methods. The proposed method achieved minimum and maximum errors of 0.93% and 3.31%, respectively. The best average error in terms of temperature was 1.82%, obtained from the model at 40 °C. The total average error of the proposed method was 2.19%, and that of the conventional method was 4.60%. Table 8 presents the SOC errors obtained using the GRU model generated by the proposed and conventional methods. The proposed method achieved minimum and maximum errors of 1.43% and 2.96%, respectively. The best average error in terms of temperature was 1.43%, obtained from the model at 30 °C. Table 5 summarizes the average battery errors obtained using the generated models. The average errors of the proposed and conventional methods were 2.13% and 4.40%, respectively. Table 9 presents the SOC errors obtained using the GBM model generated by the proposed and conventional methods. The proposed method achieved minimum and maximum errors of 1.27% and 2.61%, respectively. The best average error in terms of temperature was 1.67%, obtained from the model at 30 °C. Table 6 summarizes the average battery errors obtained using the generated models. The average error of the proposed method was lower than that of the conventional method by more than 2.46% for all the models. Figure 13 and Table 10 show the average MAEs, which were calculated based on the temperature and used to determine the average errors. It was confirmed that the proposed method outperformed the conventional method.

Online SOC Estimation by GRU
The SOC was estimated in real time using the data and expressed using graphs. Table 12 presents the real-time SOC-estimation error obtained using the Jetson Nano device and the vehicle-driving simulator; the real-time graphs of the SOC are shown in Figure 14.
The input parameters were obtained using the vehicle-driving simulator, and the acquired data were then transferred to the Jetson Nano device as inputs to the generated model for the SOC prediction.
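The acquire-and-estimate flow can be sketched as a simple loop; this is a minimal illustration with a stub data source and a placeholder callable standing in for the deployed GRU model (none of these names come from the paper):

```python
def read_simulator_sample(step):
    """Stand-in for acquiring one (voltage, current, temperature, time) sample;
    a real deployment would read the simulator's serial or network interface
    on the Jetson Nano."""
    return (3.7, 1.2, 30.0, float(step))

def estimate_soc(model, sample):
    """Feed one acquired sample to the generated model. `model` is any callable
    returning an SOC percentage -- a placeholder for the trained GRU."""
    return model(sample)

dummy_model = lambda sample: 80.0  # placeholder for the trained GRU model

# Acquire-and-estimate loop (three steps shown instead of a real-time stream).
estimates = [estimate_soc(dummy_model, read_simulator_sample(k)) for k in range(3)]
print(estimates)  # [80.0, 80.0, 80.0]
```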

Conclusions
This study developed a method for estimating the SOC by selecting a suitable model according to the temperatures measured during an experiment. For the SOC estimation, a discharge experiment was conducted using a custom vehicle-driving simulator. The data acquired during the experiment were classified according to temperature and used as inputs for the MNN, LSTM, GRU, and GBM models. Finally, the SOC was estimated by the Jetson Nano device using the model generated from the data.
During the experiment, four temperatures were measured, and the SOC was estimated using the MNN, LSTM, GRU, and GBM models, according to temperature. Most of the proposed methods exhibited lower errors than the conventional methods. The proposed MNN method achieved an average error of 2.17%, which was superior to the 4.46% obtained using the conventional MNN method. The proposed LSTM method achieved an average error of 2.19%, which was better than that obtained using the conventional LSTM method (4.60%). The proposed GRU method demonstrated an average error of 2.15%, which was better than that obtained using the conventional GRU method (4.40%). The GBM method achieved an average error of 1.93%, which was better than that obtained using the conventional GBM method.


Figure 1. Configuration of the overall system based on the vehicle-driving simulator and Jetson Nano device.

3. Proposed Temperature-Based SOC-Estimation Method
3.1. Temperature-Based Battery-SOC-Estimation Method
An SOC-estimation method was proposed by selecting a suitable model for the measured temperature; it is shown in Figure 3.
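The selection step can be sketched as choosing, among the per-temperature models, the one trained nearest the measured battery temperature. This is a minimal illustration; the dictionary of placeholder names is an assumption standing in for the generated models at the four experimental temperatures:

```python
# Models trained at each experimental temperature (placeholder names standing
# in for the generated MNN/LSTM/GRU/GBM models; any callable would do here).
models = {25: "model_25C", 30: "model_30C", 35: "model_35C", 40: "model_40C"}

def select_model(measured_temp_c):
    """Return the model whose training temperature is nearest the measurement."""
    nearest = min(models, key=lambda t: abs(t - measured_temp_c))
    return models[nearest]

print(select_model(33.2))  # model_35C
```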

Figure 6. Structure of the GRU.

Figure 8. Structure of the MNN for SOC estimation.

Figure 9. Structure of the LSTM for SOC estimation.
Figure 10 shows the structure of the GRU for SOC prediction.

Figure 10. Structure of the GRU for SOC estimation.
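The gating shown in Figure 10 can be written out for a single scalar GRU unit. This is an illustrative sketch with arbitrary zero weights, not the trained network; biases are omitted for brevity:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h_prev, w):
    """One step of a scalar GRU: update gate z, reset gate r, candidate state.
    `w` holds the six weights (wz, uz, wr, ur, wh, uh); biases omitted."""
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)                # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)                # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde                    # blended new state

w = dict(wz=0.0, uz=0.0, wr=0.0, ur=0.0, wh=0.0, uh=0.0)
print(gru_cell(1.0, 0.8, w))  # 0.4 -- with zero weights, z = 0.5 and h_tilde = 0
```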

Figure 11. Structure of the GBM for SOC estimation.

Figure 12. (A) The SOC-estimation results for the MNN using the proposed and conventional methods; (B) the SOC-estimation results for the LSTM using the proposed and conventional methods; (C) the SOC-estimation results for the GRU using the proposed and conventional methods; (D) the SOC-estimation results for the GBM using the proposed and conventional methods.

Figure 13. The MAE obtained using the proposed and conventional methods.

Figure 14. Real-time SOC estimation using the GRU model.

Table 1. Specifications of the items used for the simulation and the Jetson Nano Developer Kit.

Table 2. Hyperparameters of the MNN for SOC estimation.


Table 3. Hyperparameters of the LSTM for SOC estimation.

Table 4. Hyperparameters of the GRU for SOC estimation.

Table 5. Hyperparameters of the GBM for SOC estimation.

Table 6. The SOC errors obtained using the MNN model generated by the proposed and conventional methods.

Table 7. The SOC errors obtained using the LSTM model generated by the proposed and conventional methods.

Table 8. The SOC errors obtained using the GRU model generated by the proposed and conventional methods.

Table 9. The SOC errors obtained using the GBM model generated by the proposed and conventional methods.

Table 10. Average battery errors produced using the generated models.



Table 11 presents the time taken to estimate the SOC using the MNN, LSTM, GRU, and GBM models. The experimental results demonstrate that the GBM outperformed the other models in terms of training duration, with a time of 00:00:01.48. However, the LSTM model had the longest training duration, at 02:09:10.01, with the MNN and GRU models taking 00:13:31.05 and 00:01:16.30, respectively.
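The reported durations can be compared numerically by converting the HH:MM:SS.ss strings to seconds; a small helper (not part of the paper's tooling) using the durations quoted above:

```python
def to_seconds(duration):
    """Convert an HH:MM:SS.ss (or H:MM:SS.ss) string to seconds."""
    h, m, s = duration.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

# Training durations as reported in Table 11.
durations = {"GBM": "00:00:01.48", "GRU": "00:01:16.30",
             "MNN": "00:13:31.05", "LSTM": "02:09:10.01"}
ranked = sorted(durations, key=lambda k: to_seconds(durations[k]))
print(ranked)  # ['GBM', 'GRU', 'MNN', 'LSTM']
```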

Table 12. Real-time SOC-estimation errors using the GRU model.