Enhancement of Electromagnetic Scattering Computation Acceleration Using LSTM Neural Networks
Abstract
1. Introduction
2. Literature Review
3. Research Ideas and Processes
3.1. Research Ideas
3.2. Mathematical Model of LSTM Neural Networks
3.3. Establishment of the Data-Driven EM-LSTMNN
3.4. Research Process
3.5. Comparison of the Computing Power of Neural Networks with Different Structures
3.6. Optimization of the EM-LSTMNN Structure
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Parameter | Function |
---|---|
$\sigma$ | activation function |
$W_f$ | weight matrix of the forget gate |
$W_i$ | weight matrix of the input gate |
$W_o$ | weight matrix of the output gate |
$b_f$ | bias matrix of the forget gate |
$b_i$ | bias matrix of the input gate |
$b_o$ | bias matrix of the output gate |
$x_t$ | input-layer state at moment $t$ |
$h_t$ | hidden-layer state at moment $t$ |
$c_t$ | memory-cell state at moment $t$ |
 | mesh data of the electrically large target input at moment $t$ |
 | RCS value of the electrically large target |
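For reference, these quantities enter the standard LSTM gate updates shown below, written with the symbols from the table. This is the conventional formulation only; the candidate-state weights $W_c$ and bias $b_c$ are assumed here because the table lists just the forget, input, and output parameters, and the exact variant used in the EM-LSTMNN may differ.

```latex
\begin{align*}
f_t &= \sigma\left(W_f [h_{t-1}, x_t] + b_f\right) && \text{forget gate} \\
i_t &= \sigma\left(W_i [h_{t-1}, x_t] + b_i\right) && \text{input gate} \\
o_t &= \sigma\left(W_o [h_{t-1}, x_t] + b_o\right) && \text{output gate} \\
\tilde{c}_t &= \tanh\left(W_c [h_{t-1}, x_t] + b_c\right) && \text{candidate cell state (assumed)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{memory-cell update} \\
h_t &= o_t \odot \tanh(c_t) && \text{hidden-state output}
\end{align*}
```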
Details of Dataset | Value |
---|---|
Number of grid-data samples in dataset | 400 |
Length of each RCS vector in dataset | 360 |
Memory occupied by dataset | 16,230 KB |
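To make these dimensions concrete, the short sketch below lays out arrays with the shapes implied by the table. The mesh-input feature dimension `N_FEATURES` is a placeholder (it is not given in the table), and reading the 360-element RCS vector as one value per degree of aspect angle is an assumption.

```python
import numpy as np

N_SAMPLES = 400    # number of grid-data samples in the dataset (table above)
RCS_LENGTH = 360   # length of each RCS vector, presumably one value per degree of aspect angle
N_FEATURES = 64    # placeholder: the per-sample mesh feature dimension is not given in the table

# Placeholder arrays with the shapes implied by the dataset table.
mesh_inputs = np.zeros((N_SAMPLES, N_FEATURES), dtype=np.float32)
rcs_labels = np.zeros((N_SAMPLES, RCS_LENGTH), dtype=np.float32)

# The RCS labels alone occupy roughly 0.55 MB in float32, so the quoted ~16 MB
# dataset size is presumably dominated by the mesh-data inputs.
print(rcs_labels.nbytes / 1024, "KB of RCS labels")
```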
Neural Network Type | Loss | Time Cost (s) |
---|---|---|
GRU | 10.01895332 | 146.0487256 |
LSTM | 8.144788742 | 105.9673052 |
FCNN | 9.949327469 | 105.3547854 |
EM-FCNN | 12.1204 | 519.0594008 |
Activation Function | Number of Hidden Layers | Neurons per Hidden Layer | Learning Rate | Loss | Time Cost (s) |
---|---|---|---|---|---|
selu | 3 | 512 | 0.0005 | 8.144788742 | 105.9673052 |
selu | 4 | 512 | 0.0003 | 8.255619049 | 274.136621 |
selu | 5 | 512 | 0.0003 | 8.497451782 | 275.3871346 |
selu | 3 | 512 | 0.0003 | 8.666195869 | 250.6216426 |
selu | 5 | 512 | 0.0005 | 8.668212891 | 135.9637468 |
selu | 4 | 512 | 0.0005 | 8.752684593 | 142.3942518 |
selu | 4 | 256 | 0.0005 | 8.815683365 | 149.5341535 |
selu | 6 | 512 | 0.0005 | 9.013358116 | 163.7959225 |
selu | 7 | 512 | 0.0005 | 9.042029381 | 187.1927898 |
selu | 3 | 512 | 0.0005 | 9.07631588 | 129.7901208 |
Parameters of EM-LSTMNN | Value |
---|---|
Number of hidden layers | 3 |
Number of neurons per hidden layer | 512 |
Activation function | selu |
Learning rate | 0.0005 |
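To make this final configuration concrete, the sketch below instantiates a model with these settings in PyTorch. It is a minimal sketch under assumptions the table does not state: the input feature dimension `INPUT_DIM`, the use of stacked LSTM layers for the three 512-neuron hidden layers, and the placement of the SELU activation in the output head are illustrative choices, and the MSE loss and Adam optimizer are assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters from the table above; INPUT_DIM is a placeholder and
# RCS_LENGTH matches the dataset's RCS vector length reported earlier.
INPUT_DIM = 64        # placeholder: per-step mesh feature dimension is not specified here
HIDDEN_SIZE = 512     # neurons per hidden layer
NUM_LAYERS = 3        # number of hidden layers
LEARNING_RATE = 5e-4  # learning rate
RCS_LENGTH = 360      # length of the predicted RCS vector

class EMLSTMSketch(nn.Module):
    """Minimal EM-LSTMNN-style regressor: stacked LSTM layers plus a SELU output head."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(INPUT_DIM, HIDDEN_SIZE, num_layers=NUM_LAYERS, batch_first=True)
        self.head = nn.Sequential(nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE), nn.SELU(),
                                  nn.Linear(HIDDEN_SIZE, RCS_LENGTH))

    def forward(self, x):
        # x: (batch, seq_len, INPUT_DIM) mesh-data sequence; regress from the last hidden state.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

model = EMLSTMSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = nn.MSELoss()

# Smoke test on random data shaped like one training batch.
x = torch.randn(8, 10, INPUT_DIM)   # 8 sequences of 10 mesh-data steps
y = torch.randn(8, RCS_LENGTH)
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```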
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).