Pearson-ShuffleDarkNet37-SE-Fully Connected-Net for Fault Classification of the Electric System of Electric Vehicles
Abstract
1. Introduction
- The DarkNet53 network has relatively few parameters and high efficiency, using 52 convolutional layers for multi-scale feature extraction; this speeds up training convergence, reduces gradient loss during backpropagation, and greatly improves the accuracy and generalization of the model. Inspired by DarkNet53, the proposed Pearson-ShuffleDarkNet37-SE-Fully Connected-Net (PSDSEF) removes 16 of DarkNet53's convolutional layers while improving its classification accuracy. The proposed PSDSEF is thus lighter than DarkNet53 and reduces memory usage while maintaining accuracy.
- The ShuffleNet network has a simple structure and few parameters, yet high accuracy. After many matching experiments, the authors found that ShuffleNet and the lightweight DarkNet53 network suit the EV electric-system fault-classification dataset very well. Although the structure of ShuffleNet is simple, its results can fully compensate for the inaccurate results of the lightweight DarkNet53 network.
- Compared with convolutional neural networks (CNNs) without a channel attention mechanism, the proposed PSDSEF takes the channels of the image as the inputs of the CNNs and attends to the locally important information carried by those channels.
- To the best knowledge of the authors, this study is the first to integrate complex CNNs, lightweight CNNs, deep fully connected networks, and channel attention mechanisms for the EV electric-system fault-classification problem.
- There is no existing research utilizing CNNs to classify the electric-system faults of EVs.
2. Proposed PSDSEF
2.1. DarkNet37-SE of Proposed PSDSEF
2.1.1. SENet of DarkNet37-SE
2.1.2. DarkNet37 of DarkNet37-SE
2.2. ShuffleNet of PSDSEF
2.3. Fully Connected Layer of PSDSEF
3. Case Study
3.1. Dataset
3.2. Data Preprocessing
- 1. Data cleaning. Many problems may occur during data collection, such as vehicle-sensor malfunctions or poor network transmission when uploading data; the collected data may therefore contain outliers and missing values. Observation of the dataset reveals the following special cases:
- All vehicles in the dataset are of the same brand and model, so the vehicle ID field and the data-collection-time field do not affect this study;
- The vehicle operating states collected in the dataset are all in the purely electric operating state, and the operating mode data fields are all “1”;
- For the selected model, the total number of individual battery cells is 95 and the total number of battery temperature probes is 34; both fields are fixed values;
- The maximum voltage battery number, the minimum voltage battery number, the maximum temperature subsystem number, and the minimum temperature subsystem number are all “1”;
- The charging and storage device fault code list field is null, with no available data;
- Several columns hold a single constant value for certain fault codes: the “speed” column for fault codes “8192”, “32,768”, “40,960”, “57,344”, and “73,728”; the “vehicle_state” column for fault code “4096”; the “max_alarm_lv” column for fault codes “8192”, “32,768”, “40,960”, “57,344”, “65,536”, and “73,728”; the “dcdc_stat” column for fault codes “32,768”, “40,960”, “57,344”, “65,536”, and “73,728”; and the “gear” column for fault codes “8192”, “32,768”, “40,960”, “65,536”, and “73,728”.
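The cleaning step above boils down to dropping identifier/time columns, the null fault-code-list column, and every column that holds only a single fixed value. A minimal pandas sketch of that logic (the function name and the exact drop list are illustrative assumptions; the paper does not give an implementation):

```python
import pandas as pd

# Columns dropped for non-numerical reasons (per the data-field table):
# vehicle ID, acquisition time, and the null fault-code-list field.
DROP_ALWAYS = ["vid", "yr_modahrmn", "bat_fault_list"]

def clean_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Drop identifier/time columns and any column holding a single fixed value."""
    df = df.drop(columns=[c for c in DROP_ALWAYS if c in df.columns])
    fixed = [c for c in df.columns if df[c].nunique(dropna=False) <= 1]
    return df.drop(columns=fixed)
```

Applied to the full dataset, this reproduces the "Deleted (Is fixed value.)" rows of the data-field table automatically instead of by manual inspection.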
- 2. Data expansion. The numbers of samples across fault categories in the training data differ widely, i.e., the training samples are unbalanced, which would bias the accuracy of the training results. Therefore, two steps were applied to balance the samples: oversampling the categories with few samples and downsampling the categories with many. Categories with more than 1250 records were randomly sampled down to 1250 records; because most operational-fault categories contained fewer than 1250 records, they were expanded by linear interpolation, whose formula is y = y0 + (y1 − y0)(x − x0)/(x1 − x0).
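The expansion and downsampling described above can be sketched as follows: minority classes are grown to a target count by linearly interpolating between consecutive rows, and majority classes are randomly subsampled. This is a minimal numpy sketch under those assumptions (the paper does not specify how interpolation positions were chosen; evenly spaced fractional row indices are assumed here):

```python
import numpy as np

def expand_linear(x: np.ndarray, target: int) -> np.ndarray:
    """Expand an (n, d) class to `target` rows by linear interpolation
    along the row index: y = y0 + (y1 - y0) * t between adjacent rows."""
    n = x.shape[0]
    pos = np.linspace(0, n - 1, target)        # fractional row positions
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    t = (pos - lo)[:, None]
    return x[lo] * (1 - t) + x[hi] * t

def downsample(x: np.ndarray, target: int, seed: int = 0) -> np.ndarray:
    """Randomly sample a majority class down to `target` rows."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(x.shape[0], size=target, replace=False)
    return x[np.sort(idx)]
```

For example, a class such as fault code "32,768" (209 records) would be passed through `expand_linear(x, 1170)`, while fault code "0" (over 5 million records) would be downsampled.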
- 3. Data normalization. To remove the magnitude gap between dimensional data while retaining the relationships present in the original data, the data were normalized. Max–min normalization maps the data values to the range [0, 1] as x' = (x − x_min)/(x_max − x_min).
- 4. Correlation analysis. After the above processing steps, the dataset was reduced to 21 columns: 20 columns of data used for feature extraction and a last column of labels. Too many variables can cause data redundancy, which degrades the effectiveness of the model. Correlation analysis is applied to derive the correlations between variables, extract the principal characteristics of the data, and reduce dimensionality and redundancy. The Pearson correlation coefficient is r = Σ(x_i − x̄)(y_i − ȳ) / sqrt(Σ(x_i − x̄)² · Σ(y_i − ȳ)²).
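A small sketch of the Pearson-based redundancy filter described above: compute r for each pair of feature columns and keep only the first of any pair whose absolute correlation exceeds a threshold. The threshold value and greedy keep-first strategy are illustrative assumptions, not taken from the paper (which reports dropping `total_volt` and `mean_cell_volt` for very high correlation):

```python
import numpy as np

def pearson_r(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation: r = sum((a-ā)(b-b̄)) / sqrt(sum(a-ā)² · sum(b-b̄)²)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def drop_highly_correlated(x: np.ndarray, names, threshold: float = 0.95):
    """Greedily keep a column only if |r| with every kept column is <= threshold."""
    keep = []
    for j in range(x.shape[1]):
        if all(abs(pearson_r(x[:, j], x[:, k])) <= threshold for k in keep):
            keep.append(j)
    return x[:, keep], [names[j] for j in keep]
```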
- 5. Data sample generation. A sliding-window operation is performed on the data: every 40 consecutive rows with the same fault code form one sample, so rows 1 to 40 constitute the first sample, rows 2 to 41 the second, and so on; a total of 10,608 samples are obtained, as shown in Figure 12c. To visualize the sample data and form the final training dataset, the samples are converted into images of size 40 × 40 × 1 that serve as network inputs; some of the image samples are shown in Figure 16.
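The stride-1 sliding window described above can be sketched as follows (a minimal numpy version applied per fault-code group; note that with 20 feature columns each window is 40 × 20, so the final conversion to 40 × 40 × 1 images presumably involves an additional resize or padding step not detailed in the text):

```python
import numpy as np

def sliding_windows(x: np.ndarray, window: int = 40) -> np.ndarray:
    """Stack every run of `window` consecutive rows (stride 1) into one
    sample: rows 0..39 form sample 0, rows 1..40 sample 1, and so on.
    Input (n, d) -> output (n - window + 1, window, d)."""
    n = x.shape[0]
    return np.stack([x[i:i + window] for i in range(n - window + 1)])
```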
- 6. Dataset division. After the above preprocessing steps, a total of 10,608 image samples are obtained, and the ratio of the training dataset to the test dataset is 2:1.
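A 2:1 split as described above can be sketched with a shuffled index split (whether the paper shuffles before splitting is not stated; a seeded shuffle is assumed here for reproducibility):

```python
import numpy as np

def split_2_to_1(samples: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Shuffle and split samples 2:1 into training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = (2 * len(samples)) // 3          # two thirds for training
    tr, te = idx[:cut], idx[cut:]
    return samples[tr], labels[tr], samples[te], labels[te]
```

With 10,608 samples this yields roughly 7072 training and 3536 test images, consistent with the per-label counts in the dataset-division table.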
3.3. Evaluation Indicators
3.4. Experimental Results Analysis
3.5. Discussions
- Each input sample in this study consists of 40 rows sharing the same fault code; the time span between consecutive rows may be large, so the data within a sample can differ substantially, introducing a certain degree of error.
- The number of samples corresponding to some of the fault codes in the dataset is too small for the model to extract their features well; for example, fault codes “2112” and “2128” each have only one fault sample.
4. Conclusions
- The accuracy, Macro-Precision, Macro-Recall, and Macro-F1 score of the PSDSEF for classifying electric-system failures in EVs are higher than those of all comparison networks.
- In this study, the selected dataset is first analyzed with the Pearson correlation coefficient method, which reduces the dimensionality of the data; then, max–min normalization is employed to eliminate the magnitude differences between dimensional data and to simplify the data, allowing features to be extracted more efficiently.
- PSDSEF integrates DarkNet37-SE and the lightweight ShuffleNet network to automatically extract features via convolution and pooling operations; the classification results of the two networks are then aggregated by a fully connected neural network, which draws on the strengths of both networks to achieve higher classification accuracy.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- 2022 Digital Vehicle Competition of the National Big Data Alliance of New Energy Vehicles. Available online: https://www.ncbdc.top/ (accessed on 14 November 2023).
Models | Number of Layers | Number of Connections | Number of Parameters (Millions) |
---|---|---|---|
DarkNet53 | 184 | 206 | 41.6 |
ResNet50 | 177 | 192 | 25.6 |
ResNet101 | 347 | 379 | 44.6 |
DenseNet201 | 708 | 805 | 20.0 |
NasNetLarge | 1243 | 1462 | 88.9 |
Xception | 170 | 181 | 22.9 |
EfficientNetb0 | 290 | 363 | 5.3 |
Place365GoogLeNet | 144 | 170 | 7.0 |
ShuffleNet | 172 | 187 | 1.4 |
InceptionV3 | 315 | 349 | 23.9 |
MobileNetV2 | 154 | 163 | 3.5 |
PSDSEF | 318 | 350 | 32.1 |
No. | Data Field Name | Field Definitions | Operation | Input/Output |
---|---|---|---|---|
1 | vid | Vehicle identification number | Deleted (Has no numerical sense.) | N/A |
2 | yr_modahrmn | Data acquisition time | Deleted (Has no numerical sense.) | N/A |
3 | vehicle_state | Vehicle status | Deleted (Is fixed value.) | N/A |
4 | charging_status | State of charge | Selected and used | Input |
5 | mode | Operating mode | Deleted (Is fixed value.) | N/A |
6 | speed | Vehicle speed | Deleted (Is fixed value.) | N/A |
7 | gear | Gear level | Deleted (Is fixed value.) | N/A |
8 | total_volt | Total voltage | Deleted (Has a very high correlation.) | N/A |
9 | total_current | Total current | Selected and used | Input |
10 | mileage | Cumulative mileage | Selected and used | Input |
11 | standard_soc | State of charge national standard | Selected and used | Input |
12 | mode_cell_volt | Battery voltage plurality | Selected and used | Input |
13 | mean_cell_volt | Average battery voltage | Deleted (Has a very high correlation.) | N/A |
14 | max_volt_num | Maximum voltage battery number | Deleted (Is fixed value.) | N/A |
15 | max_cell_volt | Maximum battery voltage | Selected and used | Input |
16 | max_volt_cell_id | Maximum voltage battery | Selected and used | Input |
17 | min_volt_num | Minimum voltage battery number | Deleted (Is fixed value.) | N/A |
18 | min_cell_volt | Minimum battery voltage | Selected and used | Input |
19 | min_cell_volt_id | Minimum voltage battery | Selected and used | Input |
20 | max_temp_num | Maximum temperature subsystem number | Deleted (Is fixed value.) | N/A |
21 | max_temp | Maximum temperature value | Selected and used | Input |
22 | max_temp_probe_id | Maximum temperature probe | Selected and used | Input |
23 | min_temp_num | Minimum temperature subsystem number | Deleted (Is fixed value.) | N/A |
24 | min_temp | Minimum temperature value | Selected and used | Input |
25 | min_temp_probe_id | Minimum temperature probe | Selected and used | Input |
26 | sing_temp_num | Total number of single-cell temperature probes | Deleted (Is fixed value.) | N/A |
27 | mode_cell_temp | Individual cell temperature plurality | Selected and used | Input |
28 | mean_cell_temp | Average individual cell temperature | Selected and used | Input |
29 | max_cell_temp | Maximum individual cell temperature | Selected and used | Input |
30 | min_cell_temp | Minimum individual cell temperature | Selected and used | Input |
31 | max_alarm_lv | Maximum alarm level | Deleted (Is fixed value.) | N/A |
32 | bat_fault_list | List of fault codes for chargeable energy storage units | Deleted (Appears non-numeric.) | N/A |
33 | insulate_r | Insulation resistance value | Selected and used | Input |
34 | dcdc_stat | DC−DC status | Deleted (Is fixed value.) | N/A |
35 | sing_volt_num | Total number of single cells | Deleted (Is fixed value.) | N/A |
36 | alarm_info | Universal alarm symbol | Selected and used | Output |
Labels | Operation | Number of the Training Dataset | Number of the Test Dataset |
---|---|---|---|
“0” | Select 1161 from 5,447,841 | 787 | 374 |
“16” | Select 1167 from 1386 | 791 | 376 |
“64” | Deleted for insufficient number. | - | - |
“66” | Deleted for insufficient number. | - | - |
“68” | Deleted for insufficient number. | - | - |
“2048” | Deleted for insufficient number. | - | - |
“2064” | Deleted for insufficient number. | - | - |
“2066” | Deleted for insufficient number. | - | - |
“2112” | Deleted for insufficient number. | - | - |
“2114” | Deleted for insufficient number. | - | - |
“2115” | Deleted for insufficient number. | - | - |
“2128” | Deleted for insufficient number. | - | - |
“2130” | Deleted for insufficient number. | - | - |
“2131” | Deleted for insufficient number. | - | - |
“2163” | Deleted for insufficient number. | - | - |
“4096” | Select 1020 from 3298 | 693 | 327 |
“8192” | Select 1158 from 1237 | 785 | 373 |
“16,387” | Deleted for insufficient number. | - | - |
“24,596” | Deleted for insufficient number. | - | - |
“32,768” | Expand 209 to 1170 | 793 | 377 |
“40,960” | Expand 666 to 1251 | 847 | 404 |
“49,152” | Deleted for insufficient number. | - | - |
“57,344” | Expand 672 to 1263 | 855 | 408 |
“65,536” | Select 1170 from 1835 | 793 | 377 |
“73,728” | Expand 222 to 1248 | 845 | 403 |
“98,304” | Deleted for insufficient number. | - | - |
“106,496” | Deleted for insufficient number. | - | - |
“114,688” | Deleted for insufficient number. | - | - |
“122,880” | Deleted for insufficient number. | - | - |
Parameters | Value |
---|---|
Execution environment | Central processing unit |
Optimization algorithm | Stochastic gradient descent with momentum |
Max epoch | 120 |
Mini-batch size | 128 |
Initial learning rate | 0.01 |
Shuffle | Every epoch |
L2 regularization | 100 |
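The optimizer listed in the table, stochastic gradient descent with momentum, performs the update v ← μv − η(∇L + λθ), θ ← θ + v, where η is the learning rate (0.01 in the table), λ the L2-regularization coefficient, and μ the momentum (not given in the table; 0.9 is an assumed default below). A minimal numpy sketch of one update step:

```python
import numpy as np

def sgdm_step(theta, grad, velocity, lr=0.01, momentum=0.9, l2=0.0):
    """One SGD-with-momentum update. `l2` adds the L2-regularization
    term to the gradient; the momentum value is an assumed default,
    not taken from the paper."""
    g = grad + l2 * theta                 # regularized gradient
    velocity = momentum * velocity - lr * g
    return theta + velocity, velocity
```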
Models | Accuracy (%) | Macro-P (%) | Macro-R (%) | Macro-F1 (%) | Training Time (s) | Testing Time (s) |
---|---|---|---|---|---|---|
PSDSEF | 97.22 | 97.59 | 97.38 | 97.38 | 10,680 | 1.62 |
DarkNet53 | 91.43 | 93.26 | 91.87 | 91.61 | 14,234 | 2.45 |
ResNet50 | 91.69 | 93.25 | 92.07 | 91.76 | 12,316 | 1.61 |
ResNet101 | 92.19 | 93.96 | 92.54 | 92.15 | 21,048 | 2.39 |
DenseNet201 | 89.21 | 91.85 | 89.72 | 88.41 | 18,951 | 6.44 |
NasNetLarge | 88.80 | 90.40 | 89.32 | 89.11 | 58,613 | 14.17 |
Xception | 70.96 | 77.71 | 72.40 | 70.59 | 14,984 | 2.70 |
EfficientNetB0 | 80.55 | 84.51 | 81.47 | 80.34 | 5978 | 3.63 |
Places365GoogLeNet | 90.17 | 93.09 | 90.53 | 90.36 | 3568 | 1.04 |
ShuffleNet | 86.49 | 90.19 | 87.09 | 86.15 | 4010 | 1.61 |
InceptionV3 | 65.75 | 74.44 | 67.55 | 65.77 | 13,216 | 3.33 |
MobileNetV2 | 84.97 | 88.12 | 85.47 | 84.03 | 5922 | 1.94 |
Models | Accuracy (%) | Training Time (s) | Testing Time (s) |
---|---|---|---|
DarkNet53 | 91.43 | 14,234 | 2.45 |
ShuffleNet | 86.49 | 4010 | 1.61 |
DarkNet37 (Improvement 1) | 88.36 | 10,479 | 1.58 |
ShuffleDarkNet37 | 95.73 | 10,499 | 1.59 |
DarkNet37-SE (Improvement 2) | 90.55 | 10,666 | 1.73 |
PSDSEF (Improvement 3) | 97.22 | 10,680 | 1.62 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lu, Q.; Chen, S.; Yin, L.; Ding, L. Pearson-ShuffleDarkNet37-SE-Fully Connected-Net for Fault Classification of the Electric System of Electric Vehicles. Appl. Sci. 2023, 13, 13141. https://doi.org/10.3390/app132413141