Impact of the Convolutional Neural Network Structure and Training Parameters on the Effectiveness of the Diagnostic Systems of Modern AC Motor Drives
Abstract
1. Introduction
2. Deep Convolutional Neural Networks
2.1. Structure of the Convolutional Neural Network
2.2. Training of the Convolutional Network
3. Methodology of Diagnostics of AC Motor Drives Using CNNs
3.1. Description of the Main Goal of the Investigation—Research Scenarios
3.2. Presentation of the Experimental Setups
4. Analysis of the Influence of CNN Structure on the Effectiveness of IM and PMSM Diagnostic Systems
- analysis of the impact of changes in the number of CONV and FC layers on the effectiveness of the CNN;
- analysis of the impact of the number of CONV filters on the precision of the CNN;
- assessment of the impact of the number of neurons in the FC layer and of the declared neuron rejection (dropout) probability on the precision of the CNN;
- analysis of the influence of the activation function used: ReLU, clipped ReLU, leaky ReLU, and hyperbolic tangent;
- assessment of the influence of the dropout (rejection) and pooling layers on the precision of the CNN.
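The activation-function and dropout variants compared in this section can be stated compactly. The following pure-Python sketch is an illustration of the standard definitions, not the authors' implementation; the `ceiling` and `slope` default values are assumed examples, not values taken from the paper:

```python
import math
import random

def relu(x):
    """Standard rectified linear unit: max(0, x)."""
    return max(0.0, x)

def clipped_relu(x, ceiling=6.0):
    """ReLU with an upper clipping threshold (ceiling value is an assumed example)."""
    return min(max(0.0, x), ceiling)

def leaky_relu(x, slope=0.01):
    """Leaky ReLU passes a small, nonzero slope for negative inputs."""
    return x if x >= 0.0 else slope * x

def tanh(x):
    """Hyperbolic tangent squashes inputs into (-1, 1)."""
    return math.tanh(x)

def dropout(values, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each value with probability p and rescale
    survivors by 1/(1-p) so the expected activation is unchanged.
    At inference time the layer acts as an identity."""
    if not training:
        return list(values)
    rng = rng or random.Random(0)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

The dropout probability p = 0.5 used as the default here matches the value declared for the networks tested in the paper.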
5. Analysis of the Impact of the Training Process Parameters of the CNN on the Effectiveness of IM and PMSM Diagnostic Systems
- the impact of changes in the number of training epochs on the precision of the diagnostic system of the stator windings of an IM and PMSM;
- analysis of the impact of the initial learning rate and the drop period on the course of the convolutional network training process;
- study of the influence of the momentum factor on the training process and the precision of the CNN;
- the impact of the data mini-batch size on the performance of a CNN-based diagnostic system.
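Two of the mechanisms listed above, the periodic learning-rate drop and the SGDM update, can be sketched in a few lines. The function and variable names below are illustrative assumptions; the drop factor of 0.9 is one example from the 0.80–0.95 range that the study reports as effective:

```python
def learning_rate(epoch, initial_lr=0.01, drop_period=10, drop_factor=0.9):
    """Piecewise-constant schedule: every `drop_period` epochs the rate is
    multiplied by `drop_factor`, producing the stepwise decay discussed
    in the paper."""
    return initial_lr * drop_factor ** (epoch // drop_period)

def sgdm_step(weights, grads, velocity, lr, momentum=0.9):
    """One SGD-with-momentum update: the velocity term accumulates a
    decaying sum of past gradients, smoothing the descent direction and
    reducing the risk of the training process stalling."""
    new_velocity = [momentum * v - lr * g for v, g in zip(velocity, grads)]
    new_weights = [w + v for w, v in zip(weights, new_velocity)]
    return new_weights, new_velocity
```

The momentum coefficient range of 0.80–0.95 examined in the study maps directly onto the `momentum` argument of `sgdm_step`.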
6. Conclusions
- Despite many similarities in the construction of the stator windings of IM and PMSM motors, CNN-based diagnostic systems achieve different levels of precision for the two machines. This is observable in the learning curves of the CNN training process and results from the influence of the permanent magnets of the PMSM rotor, especially during operation at low load torque. The permanent magnets significantly limit the possibility of processing the phase current signals directly with a CNN. Therefore, when designing CNN-based detection systems, IM and PMSM applications should be treated separately, as systems with different properties.
- A gradual increase in the number of convolutional layers raises the network precision index, which results from the increasing share of higher-order features in the final evaluation. However, this trend holds only over the range of layer counts for which the higher-order features still carry useful diagnostic information.
- Excessive expansion of the network structure significantly extends the training process without improving its effectiveness. As the number of filters increases, the precision of the convolutional network clearly improves, which results from the increased number of characteristic features extracted for the individual categories. However, as with the number of convolutional layers, increasing the number of filters improves the precision only up to a certain level.
- Increasing the number of fully connected layers, as well as the number of neurons in the individual layers, does not significantly affect the precision of the convolutional structure. As the conducted research has shown, using only two layers ensures a high level of precision in assigning the input matrix to one of the considered classes.
- The use of a dropout (rejection) layer preserves the generalization properties of the neural network, which increases the precision of the system for unknown samples. Moreover, the dropout layer should be located primarily at the points of the structure with the highest number of neural connections (at the transition between the convolutional part and the classifier). However, the declared rejection probability must still ensure the convergence of the training process; in most of the CNN implementations described, this value is 0.5.
- The activation functions of the CNN do not play a key role in the final precision of the system. In the overwhelming majority of cases, deep networks use activation functions of the ReLU type or its variants, although the use of sigmoidal functions also makes it possible to achieve high precision in diagnostic systems. It should be noted, however, that piecewise-linear functions are used instead of sigmoidal ones mainly to simplify the computations of a significantly extended neural structure.
- The number of training epochs should be selected so that the value of the loss function, determined for both the training and testing data, stabilizes. Note that, over the training epochs, the loss curves calculated for the training and testing data should keep a similar shape. When gradually larger differences between the curves appear (bifurcation of the curves), or the loss value stops changing altogether, the process should be stopped and its parameters adjusted.
- The use of periodic weakening of the learning rate eliminates, to some extent, the problem of precisely tuning the initial learning rate. The cyclical reduction makes it possible to adjust the rate during the training process, which is visible as flat fragments of the loss function. Proper selection of the number of epochs after which the rate is reduced (most often, the new value is 80–95% of the value before the update) enables a significant reduction of the time needed to tune the learning rate.
- The research carried out has shown that applying the SGDM algorithm with a momentum coefficient in the range of 0.80–0.95 increases the precision of the convolutional network. Moreover, increasing the share of previous gradients in the search for the minimum of the objective function reduces the risk of the training process stalling with no decrease in the value of the loss function.
- The selection of the mini-batch size should take into account the number of all cases included in the training packet. The research showed that, despite the different sizes of the training packets for the IM and the PMSM, the highest precision was achieved for a mini-batch size of about 2% of the training packet.
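The last two conclusions translate into simple rules of thumb. The sketch below uses hypothetical helper names and an assumed gap threshold; it selects a mini-batch size near 2% of the training set and flags the bifurcation of training/testing loss curves described above:

```python
def minibatch_size(num_training_samples, fraction=0.02):
    """Heuristic from the study: the highest precision was obtained with a
    mini-batch of roughly 2% of the training packet size."""
    return max(1, round(fraction * num_training_samples))

def should_stop(train_losses, test_losses, gap_threshold=0.1, window=3):
    """Stop when the testing loss pulls away from the training loss
    (bifurcation of the loss curves) for `window` consecutive epochs;
    the threshold value here is an assumed example."""
    if len(train_losses) < window:
        return False
    recent = zip(train_losses[-window:], test_losses[-window:])
    return all(v - t > gap_threshold for t, v in recent)
```

For example, with 5000 training cases the heuristic suggests a mini-batch of 100 samples; `should_stop` returns True only once the train/test gap has persisted over the whole window, so a single noisy epoch does not trigger a stop.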
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Table A1. Rated parameters of the tested induction motor (IM).

Name of the Parameter | Symbol | Value | Units
---|---|---|---
Power | PN | 3000 | [W]
Torque | TN | 19.83 | [Nm]
Speed | nN | 1445 | [r/min]
Stator phase voltage | UsN | 230 | [V]
Stator current | IsN | 6.8 | [A]
Frequency | fsN | 50 | [Hz]
Pole pairs number | pp | 2 | [-]
Table A2. Rated parameters of the tested PMSM.

Name of the Parameter | Symbol | Value | Units
---|---|---|---
Power | PN | 2500 | [W]
Torque | TN | 16 | [Nm]
Speed | nN | 1500 | [r/min]
Stator phase voltage | UsN | 325 | [V]
Stator current | IsN | 6.6 | [A]
Frequency | fsN | 100 | [Hz]
Pole pairs number | pp | 4 | [-]
Number of stator winding turns | Ns | 2 × 125 | [-]
References
Applied Layers of CNN Structure

Motor | Convolutional | Normalization | Activation | Pooling | Dropout | Fully Connected | Precision
---|---|---|---|---|---|---|---
IM | 1 × 30 filters | 1 layer | 1 × ReLU | 1 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-16} neurons | 87.68%
IM | 2 × 30 filters | 2 layers | 2 × ReLU | 2 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-16} neurons | 95.13%
IM | 3 × 30 filters | 3 layers | 3 × ReLU | 3 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-16} neurons | 97.21%
IM | 4 × 30 filters | 4 layers | 4 × ReLU | 4 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-16} neurons | 99.11%
IM | 5 × 30 filters | 5 layers | 5 × ReLU | 5 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-16} neurons | 99.11%
PMSM | 1 × 50 filters | 1 layer | 1 × ReLU | 1 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-4} neurons | 81.68%
PMSM | 2 × 50 filters | 2 layers | 2 × ReLU | 2 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-4} neurons | 85.24%
PMSM | 3 × 50 filters | 3 layers | 3 × ReLU | 3 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-4} neurons | 87.76%
PMSM | 4 × 50 filters | 4 layers | 4 × ReLU | 4 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-4} neurons | 88.99%
PMSM | 5 × 50 filters | 5 layers | 5 × ReLU | 5 × Maximum | 1 layer: p = 0.5 | 2 layers: {64-4} neurons | 88.84%
Number of Filters in Successive Convolutional Layers

Structure (IM) | Precision (IM) | Structure (PMSM) | Precision (PMSM)
---|---|---|---
5-10-15 | 87.85% | 10-20-20 | 85.50%
10-20-30 | 92.95% | 20-30-40 | 87.76%
15-30-45 | 97.86% | 30-40-60 | 86.63%
20-40-60 | 97.07% | 40-50-80 | 88.10%
25-50-75 | 98.88% | 50-60-100 | 88.11%
30-60-90 | 98.62% | 60-70-120 | 87.88%
35-70-105 | 98.77% | 70-80-140 | 87.64%
40-80-120 | 99.10% | 80-90-160 | 88.44%
45-90-135 | 99.69% | 90-100-180 | 88.10%
50-100-150 | 99.57% | 100-110-200 | 88.24%
Structure (Induction Motor) | Precision (IM) | Structure (PMSM) | Precision (PMSM)
---|---|---|---
1 layer—{16} neurons | 98.39% | 1 layer—{4} neurons | 88.88%
2 layers—{32-16} neurons | 99.44% | 2 layers—{16-4} neurons | 88.26%
3 layers—{64-32-16} neurons | 98.21% | 3 layers—{32-16-4} neurons | 87.94%
4 layers—{128-64-32-16} neurons | 97.57% | 4 layers—{64-32-16-4} neurons | 88.56%
5 layers—{256-128-64-32-16} neurons | 99.27% | 5 layers—{128-64-32-16-4} neurons | 87.85%
Effectiveness of the Assessment of the Technical Condition of Stator Windings

Motor | Basic Network | Without Dropout Layer | With Dropout Layer | Clipped ReLU Activation | Leaky ReLU Activation | Tanh Activation
---|---|---|---|---|---|---
PMSM | ≈88.3% | ≈86.6% | ≈83.2% | ≈87.9% | ≈88.5% | ≈87.8%
IM | ≈99.2% | ≈99.6% | ≈91.8% | ≈98.5% | ≈99.2% | ≈99.6%
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Skowron, M.; Kowalski, C.T.; Orlowska-Kowalska, T. Impact of the Convolutional Neural Network Structure and Training Parameters on the Effectiveness of the Diagnostic Systems of Modern AC Motor Drives. Energies 2022, 15, 7008. https://doi.org/10.3390/en15197008