Automatic Classification of Spectra with IEF-SCNN
Abstract
1. Introduction
2. Data and Preprocessing
3. Method
3.1. EfficientNetV2
3.2. Optimization of the Attention Mechanism
3.3. 1DCNN
3.4. Overall Model Structure and Training Process
3.5. Optimization of Progressive Learning
4. Results
4.1. Dataset and Experimental Environment
4.2. Model Optimization
The following hyperparameter grids were searched for the comparison models (a grid-search sketch follows the lists):

SVM:
1. C: 0.1, 1, 10, 100, 1000, 10,000
2. gamma: 1, 0.1, 0.01, 0.001, 0.0001

Random forest:
1. max_depth: 20, 30, 40
2. min_samples_leaf: 1, 2, 5
3. min_samples_split: 2, 5, 10
4. n_estimators: 300, 400, 500

ANN:
1. activation function: relu, tanh, logistic
2. alpha: 0.0001, 0.05
3. hidden_layer_sizes: (100,), (200,), (300,)
4. learning_rate: constant, adaptive
5. solver: adam, sgd

1D SSCNN:
1. learning_rate: 0.01, 0.005, 0.001, 0.0005, 0.0001
2. dropout_rate: 0, 0.1, 0.3, 0.5
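The parameter names above match scikit-learn's SVC, RandomForestClassifier and MLPClassifier; the two unnamed entries in the SVM and ANN grids are, judging from their value ranges and the appendix tables, the RBF kernel coefficient gamma and the MLP L2 penalty alpha. A minimal grid-search harness along these lines could reproduce the sweeps, assuming scikit-learn and placeholder data arrays; this is an illustrative sketch, not the authors' tuning code.

```python
# Minimal sketch of the grid search implied by the lists above, assuming
# scikit-learn estimators; X_train / y_train are placeholder arrays holding
# the preprocessed spectra and class labels.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

search_spaces = {
    "SVM": (SVC(kernel="rbf"), {
        "C": [0.1, 1, 10, 100, 1000, 10000],
        "gamma": [1, 0.1, 0.01, 0.001, 0.0001],
    }),
    "Random forest": (RandomForestClassifier(), {
        "max_depth": [20, 30, 40],
        "min_samples_leaf": [1, 2, 5],
        "min_samples_split": [2, 5, 10],
        "n_estimators": [300, 400, 500],
    }),
    "ANN": (MLPClassifier(max_iter=500), {
        "activation": ["relu", "tanh", "logistic"],
        "alpha": [0.0001, 0.05],
        "hidden_layer_sizes": [(100,), (200,), (300,)],
        "learning_rate": ["constant", "adaptive"],
        "solver": ["adam", "sgd"],
    }),
}

def tune(X_train, y_train):
    # Exhaustively evaluate each grid with cross-validation and report
    # the best parameter combination per model.
    for name, (estimator, grid) in search_spaces.items():
        search = GridSearchCV(estimator, grid, cv=5, n_jobs=-1)
        search.fit(X_train, y_train)
        print(name, search.best_params_, f"{search.best_score_:.2%}")
```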
4.3. Experimental Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Model Parameter Tuning Process
Appendix A.1. Optimal Model Parameter Selection of SVM for Dataset-A
1. C: 0.1, 1, 10, 100, 1000, 10,000
2. gamma: 1, 0.1, 0.01, 0.001, 0.0001
C | Gamma | Kernel | Accuracy (%) | C | Gamma | Kernel | Accuracy (%) |
---|---|---|---|---|---|---|---|
0.1 | 1 | rbf | 35.22 | 100 | 1 | rbf | 63.89 |
0.1 | 0.1 | rbf | 83.48 | 100 | 0.1 | rbf | 90.76 |
0.1 | 0.01 | rbf | 82.67 | 100 | 0.01 | rbf | 90.93 |
0.1 | 0.001 | rbf | 78.69 | 100 | 0.001 | rbf | 90.73 |
0.1 | 0.0001 | rbf | 73.40 | 100 | 0.0001 | rbf | 88.45 |
1 | 1 | rbf | 62.01 | 1000 | 1 | rbf | 63.89 |
1 | 0.1 | rbf | 89.15 | 1000 | 0.1 | rbf | 90.76 |
1 | 0.01 | rbf | 87.55 | 1000 | 0.01 | rbf | 90.72 |
1 | 0.001 | rbf | 84.71 | 1000 | 0.001 | rbf | 89.62 |
1 | 0.0001 | rbf | 79.43 | 1000 | 0.0001 | rbf | 90.05 |
10 | 1 | rbf | 63.89 | 10,000 | 1 | rbf | 63.89 |
10 | 0.1 | rbf | 90.71 | 10,000 | 0.1 | rbf | 90.76 |
10 | 0.01 | rbf | 91.08 | 10,000 | 0.01 | rbf | 90.72 |
10 | 0.001 | rbf | 88.23 | 10,000 | 0.001 | rbf | 88.63 |
10 | 0.0001 | rbf | 85.65 | 10,000 | 0.0001 | rbf | 88.26 |
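For reference, the best dataset-A cell in the table above (C = 10, gamma = 0.01, 91.08%) corresponds to an SVC configured roughly as follows; the data variables are placeholders, not the paper's code.

```python
# Sketch: refitting the SVM with the best dataset-A parameters from the
# table above (C = 10, gamma = 0.01, rbf kernel); X_* / y_* are placeholders.
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

svm = SVC(C=10, gamma=0.01, kernel="rbf")
svm.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, svm.predict(X_test)):.2%}")
```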
Appendix A.2. Optimal Model Parameter Selection of SVM for Dataset-B
1. C: 0.1, 1, 10, 100, 1000, 10,000
2. gamma: 1, 0.1, 0.01, 0.001, 0.0001
C | Gamma | Kernel | Accuracy (%) | C | Gamma | Kernel | Accuracy (%) |
---|---|---|---|---|---|---|---|
0.1 | 1 | rbf | 24.72 | 100 | 1 | rbf | 42.96 |
0.1 | 0.1 | rbf | 50.51 | 100 | 0.1 | rbf | 74.88 |
0.1 | 0.01 | rbf | 51.43 | 100 | 0.01 | rbf | 80.78 |
0.1 | 0.001 | rbf | 39.38 | 100 | 0.001 | rbf | 83.02 |
0.1 | 0.0001 | rbf | 27.37 | 100 | 0.0001 | rbf | 76.48 |
1 | 1 | rbf | 40.87 | 1000 | 1 | rbf | 42.96 |
1 | 0.1 | rbf | 71.58 | 1000 | 0.1 | rbf | 74.86 |
1 | 0.01 | rbf | 72.04 | 1000 | 0.01 | rbf | 80.33 |
1 | 0.001 | rbf | 59.93 | 1000 | 0.001 | rbf | 82.23 |
1 | 0.0001 | rbf | 38.93 | 1000 | 0.0001 | rbf | 83.44 |
10 | 1 | rbf | 42.96 | 10,000 | 1 | rbf | 42.96 |
10 | 0.1 | rbf | 74.85 | 10,000 | 0.1 | rbf | 74.86 |
10 | 0.01 | rbf | 80.26 | 10,000 | 0.01 | rbf | 80.34 |
10 | 0.001 | rbf | 75.65 | 10,000 | 0.001 | rbf | 81.26 |
10 | 0.0001 | rbf | 60.98 | 10,000 | 0.0001 | rbf | 81.74 |
Appendix A.3. Optimal Model Parameter Selection of Random Forest for Dataset-A
1. max_depth: 20, 30, 40
2. min_samples_leaf: 1, 2, 5
3. min_samples_split: 2, 5, 10
4. n_estimators: 300, 400, 500
Max_Depth | Min_Samples_Leaf | Min_Samples_Split | n_Estimators | Accuracy (%) |
---|---|---|---|---|
20 | 1 | 2 | 300 | 88.71 |
20 | 1 | 2 | 400 | 88.62 |
… | … | … | … | … |
30 | 1 | 10 | 400 | 88.78 |
30 | 1 | 10 | 500 | 88.67 |
30 | 2 | 2 | 300 | 88.86 |
30 | 2 | 2 | 400 | 88.64 |
30 | 2 | 2 | 500 | 88.59 |
30 | 2 | 5 | 300 | 88.61 |
30 | 2 | 5 | 400 | 88.59 |
30 | 2 | 5 | 500 | 88.83 |
30 | 2 | 10 | 300 | 88.59 |
30 | 2 | 10 | 400 | 88.51 |
… | … | … | … | … |
40 | 1 | 2 | 300 | 88.70 |
40 | 1 | 2 | 400 | 88.79 |
40 | 1 | 2 | 500 | 88.79 |
40 | 5 | 10 | 300 | 88.47 |
40 | 5 | 10 | 400 | 88.59 |
40 | 5 | 10 | 500 | 88.51 |
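The rows shown above span only about 88.5–88.9% accuracy, with the best visible combination at max_depth = 30, min_samples_leaf = 2, min_samples_split = 2, n_estimators = 300. A minimal scikit-learn sketch of that configuration, with placeholder data variables, follows.

```python
# Sketch: the best random-forest combination visible in the table above
# (dataset-A); X_* / y_* are placeholders.
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(max_depth=30, min_samples_leaf=2,
                            min_samples_split=2, n_estimators=300, n_jobs=-1)
rf.fit(X_train, y_train)
print(f"test accuracy: {rf.score(X_test, y_test):.2%}")
```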
Appendix A.4. Optimal Model Parameter Selection of Random Forest for Dataset-B
1. max_depth: 5, 10, 15, 20
2. min_samples_leaf: 1, 2, 5, 10
3. min_samples_split: 2, 10, 15, 100
4. n_estimators: 100, 200, 300, 400, 500
Max_Depth | Min_Samples_Leaf | Min_Samples_Split | n_Estimators | Accuracy (%) |
---|---|---|---|---|
5 | 1 | 2 | 100 | 47.84 |
5 | 1 | 2 | 200 | 48.13 |
… | … | … | … | … |
20 | 1 | 2 | 100 | 62.98 |
20 | 1 | 2 | 200 | 63.83 |
20 | 1 | 2 | 300 | 64.86 |
20 | 1 | 2 | 400 | 65.06 |
… | … | … | … | … |
20 | 10 | 15 | 500 | 63.74 |
20 | 10 | 100 | 100 | 59.16 |
20 | 10 | 100 | 200 | 59.64 |
20 | 10 | 100 | 300 | 59.52 |
20 | 10 | 100 | 400 | 59.63 |
20 | 10 | 100 | 500 | 59.73 |
Appendix A.5. Optimal Model Parameter Selection of ANN for Dataset-A
1. activation function: relu, tanh, logistic
2. alpha: 0.0001, 0.05
3. hidden_layer_sizes: (100,), (200,), (300,)
4. learning_rate: constant, adaptive
5. solver: adam, sgd
Activation Function | Alpha | Hidden_Layer_Sizes | Learning_Rate | Solver | Accuracy (%) |
---|---|---|---|---|---|
relu | 0.0001 | (100,) | constant | adam | 91.39 |
relu | 0.0001 | (100,) | constant | sgd | 91.73 |
relu | 0.0001 | (100,) | adaptive | adam | 92.76 |
… | … | … | … | … | … |
relu | 0.05 | (300,) | adaptive | adam | 90.80 |
relu | 0.05 | (300,) | adaptive | sgd | 92.11 |
tanh | 0.0001 | (100,) | constant | adam | 92.05 |
tanh | 0.0001 | (100,) | constant | sgd | 91.03 |
tanh | 0.0001 | (100,) | adaptive | adam | 91.83 |
tanh | 0.0001 | (100,) | adaptive | sgd | 90.76 |
tanh | 0.0001 | (200,) | constant | adam | 92.25 |
tanh | 0.0001 | (200,) | constant | sgd | 90.49 |
… | … | … | … | … | … |
logistic | 0.05 | (200,) | adaptive | sgd | 84.46 |
logistic | 0.05 | (300,) | constant | adam | 88.73 |
logistic | 0.05 | (300,) | constant | sgd | 84.72 |
logistic | 0.05 | (300,) | adaptive | adam | 89.56 |
logistic | 0.05 | (300,) | adaptive | sgd | 84.82 |
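The best dataset-A row visible above (relu, alpha = 0.0001, (100,), adaptive, adam; 92.76%) corresponds to the scikit-learn MLPClassifier sketched below. Note that in scikit-learn the learning_rate schedule is only consulted by the sgd solver, so with adam it has no effect; the data variables are placeholders.

```python
# Sketch: the best ANN configuration visible in the table above,
# expressed as a scikit-learn MLPClassifier (placeholder data variables).
from sklearn.neural_network import MLPClassifier

ann = MLPClassifier(
    activation="relu",
    alpha=0.0001,               # L2 penalty term
    hidden_layer_sizes=(100,),
    learning_rate="adaptive",   # only used when solver="sgd"
    solver="adam",
    max_iter=500,
)
ann.fit(X_train, y_train)
print(f"validation accuracy: {ann.score(X_val, y_val):.2%}")
```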
Appendix A.6. Optimal Model Parameter Selection of ANN for Dataset-B
1. activation function: relu, tanh, logistic
2. alpha: 0.0001, 0.05
3. hidden_layer_sizes: (100,), (200,), (300,)
4. learning_rate: constant, adaptive
5. solver: adam, sgd
Activation Function | Alpha | Hidden_Layer_Sizes | Learning_Rate | Solver | Accuracy (%) |
---|---|---|---|---|---|
relu | 0.0001 | (100,) | constant | adam | 81.48 |
relu | 0.0001 | (100,) | constant | sgd | 80.95 |
relu | 0.0001 | (100,) | adaptive | adam | 82.90 |
relu | 0.05 | (200,) | adaptive | sgd | 80.95 |
relu | 0.05 | (300,) | constant | adam | 79.24 |
relu | 0.05 | (300,) | constant | sgd | 80.78 |
relu | 0.05 | (300,) | adaptive | adam | 82.07 |
relu | 0.05 | (300,) | adaptive | sgd | 81.43 |
… | … | … | … | … | … |
tanh | 0.0001 | (100,) | constant | sgd | 80.29 |
tanh | 0.0001 | (100,) | adaptive | adam | 80.75 |
tanh | 0.05 | (300,) | constant | sgd | 81.08 |
tanh | 0.05 | (300,) | adaptive | adam | 81.15 |
… | … | … | … | … | … |
logistic | 0.05 | (300,) | constant | adam | 78.41 |
logistic | 0.05 | (300,) | constant | sgd | 73.76 |
logistic | 0.05 | (300,) | adaptive | adam | 75.55 |
logistic | 0.05 | (300,) | adaptive | sgd | 73.60 |
Appendix A.7. Optimal Model Parameter Selection of 1D SSCNN for Dataset-A
1. learning_rate: 0.01, 0.005, 0.001, 0.0005, 0.0001
2. dropout_rate: 0, 0.1, 0.3, 0.5
Learning_Rate | Dropout_Rate | Accuracy (%) | Learning_Rate | Dropout_Rate | Accuracy (%) |
---|---|---|---|---|---|
0.01 | 0 | 20.00 | 0.01 | 0.3 | 20.00 |
0.005 | 0 | 42.38 | 0.005 | 0.3 | 89.42 |
0.001 | 0 | 85.53 | 0.001 | 0.3 | 92.52 |
0.0005 | 0 | 84.97 | 0.0005 | 0.3 | 93.04 |
0.0001 | 0 | 85.05 | 0.0001 | 0.3 | 92.26 |
0.01 | 0.1 | 20.00 | 0.01 | 0.5 | 20.00 |
0.005 | 0.1 | 89.38 | 0.005 | 0.5 | 88.30 |
0.001 | 0.1 | 92.46 | 0.001 | 0.5 | 92.38 |
0.0005 | 0.1 | 92.88 | 0.0005 | 0.5 | 92.54 |
0.0001 | 0.1 | 91.96 | 0.0001 | 0.5 | 91.76 |
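In a 1D convolutional classifier the two tuned quantities enter as the optimizer step size and the dropout probability. The sketch below is a generic PyTorch example showing where they plug in; it is not the paper's 1D SSCNN architecture, and the layer sizes, input length, and class count are assumptions.

```python
# Generic 1D CNN sketch (PyTorch) showing where the tuned learning_rate and
# dropout_rate enter; NOT the paper's 1D SSCNN architecture.
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    def __init__(self, n_points: int, n_classes: int = 5, dropout_rate: float = 0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Dropout(dropout_rate),                  # tuned dropout_rate
        )
        self.classifier = nn.Linear(32 * (n_points // 4), n_classes)

    def forward(self, x):                              # x: (batch, 1, n_points)
        return self.classifier(self.features(x).flatten(1))

model = Simple1DCNN(n_points=1024)                     # placeholder spectrum length
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)  # tuned learning_rate
```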
Appendix A.8. Optimal Model Parameter Selection of 1D SSCNN for Dataset-B
1. learning_rate: 0.01, 0.005, 0.001, 0.0005, 0.0001
2. dropout_rate: 0, 0.1, 0.3, 0.5
Learning_Rate | Dropout_Rate | Accuracy (%) | Learning_Rate | Dropout_Rate | Accuracy (%) |
---|---|---|---|---|---|
0.01 | 0 | 20.00 | 0.01 | 0.3 | 20.00 |
0.005 | 0 | 42.38 | 0.005 | 0.3 | 57.88 |
0.001 | 0 | 85.53 | 0.001 | 0.3 | 84.45 |
0.0005 | 0 | 84.97 | 0.0005 | 0.3 | 85.60 |
0.0001 | 0 | 85.05 | 0.0001 | 0.3 | 85.70 |
0.01 | 0.1 | 20.00 | 0.01 | 0.5 | 20.00 |
0.005 | 0.1 | 29.38 | 0.005 | 0.5 | 53.08 |
0.001 | 0.1 | 85.28 | 0.001 | 0.5 | 84.68 |
0.0005 | 0.1 | 86.13 | 0.0005 | 0.5 | 85.63 |
0.0001 | 0.1 | 85.20 | 0.0001 | 0.5 | 85.90 |
1. https://skyserver.sdss.org/CasJobs/, accessed on 7 November 2023.
2. https://matplotlib.org/stable/, accessed on 7 November 2023.
Composition of dataset-A and dataset-B and the train/validation/test split:

Dataset | Type | No. of Instances | Training (%) | Validation (%) | Test (%) |
---|---|---|---|---|---|
dataset-A | A | 5000 | 60 | 20 | 20
dataset-A | F | 5000 | 60 | 20 | 20
dataset-A | G | 5000 | 60 | 20 | 20
dataset-A | K | 5000 | 60 | 20 | 20
dataset-A | M | 5000 | 60 | 20 | 20
dataset-B | M0 | 4000 | 60 | 20 | 20
dataset-B | M1 | 4000 | 60 | 20 | 20
dataset-B | M2 | 4000 | 60 | 20 | 20
dataset-B | M3 | 4000 | 60 | 20 | 20
dataset-B | M4 | 4000 | 60 | 20 | 20
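Each class is split 60/20/20 into training, validation, and test subsets; a stratified split such as the following (scikit-learn; X and y are placeholder arrays of spectra and labels) reproduces those proportions.

```python
# Sketch: stratified 60/20/20 train/validation/test split (placeholder X, y).
from sklearn.model_selection import train_test_split

X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)
```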
Classification results on dataset-A:

Algorithm | Training Time | Accuracy (%) | F1-Score A (%) | F1-Score F (%) | F1-Score G (%) | F1-Score K (%) | F1-Score M (%) |
---|---|---|---|---|---|---|---|
SVM | 19 s | 91.78 | 95.761 | 78.781 | 86.810 | 97.451 | 99.299 |
Random forest | 1 min 12 s | 90.10 | 93.147 | 77.472 | 86.838 | 94.998 | 97.295 |
ANN | 1 min 14 s | 93.62 | 95.470 | 86.004 | 91.576 | 96.604 | 98.323 |
1D SSCNN | 2 h 23 min 36 s | 93.04 | 96.445 | 83.070 | 88.388 | 97.520 | 99.450 |
Our model (only image) | 10 h 18 min 20 s | 93.86 | 95.090 | 86.446 | 91.263 | 97.558 | 98.901 |
Our model (only spectra) | 2 h 11 min 2 s | 94.26 | 96.263 | 87.107 | 91.361 | 97.600 | 98.855 |
Our model | 10 h 34 min 52 s | 94.62 | 96.293 | 87.691 | 91.913 | 97.908 | 99.201 |
Our model (without preprocessing) | 10 h 31 min 43 s | 94.20 | 96.348 | 87.192 | 90.927 | 97.410 | 99.053 |
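The accuracy and per-class F1-scores reported in the tables above and below can be computed as follows (scikit-learn; y_test and y_pred are placeholders, and the label list changes to M0–M4 for dataset-B).

```python
# Sketch: overall accuracy and per-class F1-scores as reported in the
# results tables (placeholder y_test, y_pred).
from sklearn.metrics import accuracy_score, f1_score

classes = ["A", "F", "G", "K", "M"]          # use M0–M4 for dataset-B
f1_per_class = f1_score(y_test, y_pred, labels=classes, average=None)
for cls, score in zip(classes, f1_per_class):
    print(f"{cls}: F1 = {score:.2%}")
print(f"accuracy = {accuracy_score(y_test, y_pred):.2%}")
```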
Classification results on dataset-B:

Algorithm | Training Time | Accuracy (%) | F1-Score M0 (%) | F1-Score M1 (%) | F1-Score M2 (%) | F1-Score M3 (%) | F1-Score M4 (%) |
---|---|---|---|---|---|---|---|
SVM | 18 s | 83.15 | 85.941 | 79.156 | 79.332 | 82.658 | 88.761 |
Random forest | 2 min 54 s | 65.40 | 71.647 | 55.659 | 60.414 | 61.974 | 76.097 |
ANN | 2 min 8 s | 84.40 | 88.732 | 82.157 | 81.707 | 82.440 | 87.344 |
1D SSCNN | 1 h 44 min 22 s | 86.13 | 87.555 | 82.447 | 84.197 | 84.072 | 89.449 |
Our model (only image) | 8 h 33 min 24 s | 84.98 | 87.010 | 80.419 | 82.473 | 84.867 | 90.120 |
Our model (only spectra) | 1 h 49 min 22 s | 86.08 | 89.097 | 83.696 | 85.327 | 84.328 | 88.138 |
Our model | 8 h 37 min 21 s | 86.75 | 89.169 | 83.744 | 85.587 | 85.178 | 90.087 |
Our model (progressive learning) | 7 h 22 min 32 s | 87.38 | 89.497 | 84.045 | 85.732 | 86.582 | 90.932 |
Our model (without preprocessing) | 8 h 55 min 2 s | 85.30 | 88.239 | 82.491 | 84.000 | 83.963 | 87.796 |
Comparison of attention modules in the proposed model on dataset-B:

Algorithm | Training Time | Accuracy (%) | F1-Score M0 (%) | F1-Score M1 (%) | F1-Score M2 (%) | F1-Score M3 (%) | F1-Score M4 (%) |
---|---|---|---|---|---|---|---|
Our model (SE) | 5 h 47 min 21 s | 86.65 | 89.969 | 85.133 | 85.965 | 83.833 | 88.177 |
Our model (ECA) | 5 h 2 min 43 s | 85.18 | 88.384 | 82.571 | 83.505 | 83.544 | 87.635 |
Our model (CBAM) | 7 h 22 min 32 s | 87.38 | 89.497 | 84.045 | 85.732 | 86.582 | 90.932 |