A Novel Convolutional Neural Net Architecture Based on Incorporating Meteorological Variable Inputs into Ultra-Short-Term Photovoltaic Power Forecasting
Abstract
1. Introduction
1.1. Motivation
1.2. Related Works
1.3. The Research Work in This Paper
- The DI_ACNN model is proposed: a deep learning network designed to improve forecasting accuracy by innovatively considering three aspects simultaneously, namely the data source, the algorithm’s operation mechanism, and the information processing capability.
- The PV input data are innovatively split into two sets (a historical power dataset and a mixed dataset of historical power and meteorological variables), which are deeply fused by the subsequent two sets of dual-scale convolutions to provide finer input features for the model (see the data-split sketch following this list).
- A dual-scale convolutional cascade double-head AM structure is innovatively proposed. It fully utilizes the CNN’s spatio-temporal cross-feature extraction capability and focuses on the important features while extracting both long- and short-term dependencies in the sequences, thus enhancing the CNN’s feature learning capability.
- The proposed method is validated using a case study based on three real PV generation datasets with different meteorological conditions in China.
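To make the dual-input pattern concrete, the following sketch builds the two input sets from one raw PV record. The window length (16 steps), the number of meteorological variables (4), and the array names are illustrative assumptions, not values taken from the paper; the actual preprocessing is described in Section 2.1.

```python
import numpy as np

WINDOW = 16    # assumed length of the historical window fed to the model
N_METEO = 4    # assumed number of meteorological variables (irradiance, temperature, ...)

def build_dual_inputs(power, meteo, window=WINDOW):
    """Split one PV record into the two input sets of the dual-input pattern:
    X1 -- historical power only, shape (samples, window, 1)
    X2 -- historical power mixed with meteorological variables, shape (samples, window, 1 + n_meteo)
    y  -- power at the next step t+1, shape (samples,)
    """
    X1, X2, y = [], [], []
    for t in range(window, len(power)):
        X1.append(power[t - window:t, None])              # power-only window
        X2.append(np.hstack([power[t - window:t, None],   # power window ...
                             meteo[t - window:t]]))       # ... plus meteorological variables
        y.append(power[t])                                 # target: power at the next step
    return np.asarray(X1), np.asarray(X2), np.asarray(y)

# Toy usage with random data in place of a real plant record
rng = np.random.default_rng(0)
power = rng.random(1000)
meteo = rng.random((1000, N_METEO))
X1, X2, y = build_dual_inputs(power, meteo)
print(X1.shape, X2.shape, y.shape)   # (984, 16, 1) (984, 16, 5) (984,)
```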
2. Materials and Methods
2.1. Data Collection and Preprocessing
2.2. The One-Dimensional CNN Structure and Feature Extraction Mechanism
2.3. The Attention Mechanism
2.4. The Framework and Functions of the Proposed Model
2.5. Evaluation Indexes
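Assuming the usual formulations of the three indexes reported in Section 3, where $y_i$ is the measured power, $\hat{y}_i$ the forecast, $\bar{y}$ the mean of the measurements, and $N$ the number of samples:

```latex
\mathrm{MAE}  = \frac{1}{N}\sum_{i=1}^{N}\left| \hat{y}_i - y_i \right|, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left( \hat{y}_i - y_i \right)^{2}}, \qquad
R^{2} = 1 - \frac{\sum_{i=1}^{N}\left( y_i - \hat{y}_i \right)^{2}}{\sum_{i=1}^{N}\left( y_i - \bar{y} \right)^{2}}
```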
3. Experiments and Results
3.1. Experimental Settings
3.2. Results and Discussion
4. Conclusions
- To demonstrate the effectiveness of the dual-input pattern proposed in this paper, input branch ablation experiments were conducted on data from three plants. The results show that the proposed model achieves higher forecasting accuracy than the traditional single-input mode, with a maximum MAE improvement of 35.74% for plant 3 and a maximum RMSE improvement of 14.3% for plant 2.
- To verify the feature extraction capability of the proposed dual-scale convolutional structure, a dual-scale convolutional structure ablation experiment was conducted. The results show that the proposed model forecasts more accurately than the single-scale convolutional variants on the data from all three plants; the maximum MAE and RMSE improvements, 31.69% and 16.67%, respectively, are obtained for plant 2.
- To verify the enhancement of the network’s learning ability by the double-head AM proposed in this paper, an AM ablation experiment was conducted. The results show that the proposed model outperforms the model without the attention structure on the data of all three plants, with a maximum MAE improvement of 8.49% on plant 1 and a maximum RMSE improvement of 3.8% on plant 2.
- To demonstrate the superiority of the proposed model’s performance, its forecasting results were compared with those of baseline models (CNN and CNN_LSTM) using traditional input patterns. The results show that the proposed model is more accurate than each baseline model; the maximum MAE and RMSE improvements, 40.33% and 23.02%, respectively, are both obtained on plant 2.
- The effect of PV plant data under different meteorological conditions on the performance of the proposed model was investigated. The results show that the model’s performance differs only slightly across the data of the three plants, indicating good applicability.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Houran, M.A.; Bukhari, S.M.S.; Zafar, M.H.; Mansoor, M.; Chen, W. COA-CNN-LSTM: Coati optimization algorithm-based hybrid deep learning model for PV/wind power forecasting in smart grid applications. Appl. Energy 2023, 349, 121638. [Google Scholar] [CrossRef]
- International Renewable Energy Agency. Renewable Energy Capacity Statistics. 2022. Available online: https://www.irena.org/publications/2022/Apr/Renewable-Capacity-Statistics-2022 (accessed on 5 February 2024).
- Li, G.; Ding, C.; Zhao, N.; Wei, J.; Guo, Y.; Meng, C.; Huang, K.; Zhu, R. Research on a novel photovoltaic power forecasting model based on parallel long and short-term time series network. Energy 2024, 293, 130621. [Google Scholar] [CrossRef]
- Raza, M.Q.; Nadarajah, M.; Ekanayake, C. On recent advances in PV output power forecast. Sol. Energy 2016, 136, 125–144. [Google Scholar] [CrossRef]
- Jiang, J.; Lv, Q.; Gao, X. The Ultra-Short-Term Forecasting of Global Horizontal Irradiance Based on Total Sky Images. Remote Sens. 2020, 12, 3671. [Google Scholar] [CrossRef]
- Alonso-Montesinos, J.; Batlles, F. Solar radiation forecasting in the short- and medium-term under all sky conditions. Energy 2015, 83, 387–393. [Google Scholar] [CrossRef]
- David, M.; Ramahatana, F.; Trombe, P.; Lauret, P. Probabilistic forecasting of the solar irradiance with recursive ARMA and GARCH models. Sol. Energy 2016, 133, 55–72. [Google Scholar] [CrossRef]
- Dolara, A.; Leva, S.; Manzolini, G. Comparison of different physical models for PV power output prediction. Sol. Energy 2015, 119, 83–99. [Google Scholar] [CrossRef]
- Wang, H.; Liu, Y.; Zhou, B.; Li, C.; Cao, G.; Voropai, N.; Barakhtenko, E. Taxonomy research of artificial intelligence for deterministic solar power forecasting. Energy Convers. Manag. 2020, 214, 112909. [Google Scholar] [CrossRef]
- Jebli, I.; Belouadha, F.-Z.; Kabbaj, M.I.; Tilioua, A. Prediction of solar energy guided by pearson correlation using machine learning. Energy 2021, 224, 120109. [Google Scholar] [CrossRef]
- Wang, J.; Li, P.; Ran, R.; Che, Y.; Zhou, Y. A Short-Term Photovoltaic Power Prediction Model Based on the Gradient Boost Decision Tree. Appl. Sci. 2018, 8, 689. [Google Scholar] [CrossRef]
- Zhou, Y.; Zhou, N.; Gong, L.; Jiang, M. Prediction of photovoltaic power output based on similar day analysis, genetic algorithm and extreme learning machine. Energy 2020, 204, 117894. [Google Scholar] [CrossRef]
- Ma, H.; Zhang, C.; Peng, T.; Nazir, M.S.; Li, Y. An integrated framework of gated recurrent unit based on improved sine cosine algorithm for photovoltaic power forecasting. Energy 2022, 256, 124650. [Google Scholar] [CrossRef]
- Peng, X.; Deng, D.; Cheng, S.; Zhan, J.; Huang, J.; Niu, L. Study of the key technologies of electric power big data and its application prospects in smart grid. Chin. Soc. Electr. Eng. 2015, 35, 503–511. [Google Scholar] [CrossRef]
- Huang, X.; Li, Q.; Tai, Y.; Chen, Z.; Liu, J.; Shi, J.; Liu, W. Time series forecasting for hourly photovoltaic power using conditional generative adversarial network and Bi-LSTM. Energy 2022, 246, 123403. [Google Scholar] [CrossRef]
- Vu, B.H.; Chung, I.-Y. Optimal generation scheduling and operating reserve management for PV generation using RNN-based forecasting models for stand-alone microgrids. Renew. Energy 2022, 195, 1137–1154. [Google Scholar] [CrossRef]
- Ren, X.; Zhang, F.; Zhu, H.; Liu, Y. Quad-kernel deep convolutional neural network for intra-hour photo-voltaic power forecasting. Appl. Energy 2022, 323, 119682. [Google Scholar] [CrossRef]
- Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2014, arXiv:1312.6114. [Google Scholar] [CrossRef]
- Zhang, J.; Ling, C.; Li, S. EMG Signals based Human Action Recognition via Deep Belief Networks. IFAC Pap. Online 2019, 52, 271–276. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, Las Condes, Chile, 11–18 December 2015; pp. 1026–1034. [Google Scholar]
- Obiora, C.N.; Hasan, A.N.; Ali, A. Predicting Solar Irradiance at Several Time Horizons Using Machine Learning Algorithms. Sustainability 2023, 15, 8927. [Google Scholar] [CrossRef]
- Li, P.; Zhou, K.; Lu, X.; Yang, S. A hybrid deep learning model for short-term PV power forecasting. Appl. Energy 2020, 259, 114216. [Google Scholar] [CrossRef]
- Gao, M.; Li, J.; Hong, F.; Long, D. Day-ahead power forecasting in a large-scale photovoltaic plant based on weather classification using LSTM. Energy 2019, 187, 115838. [Google Scholar] [CrossRef]
- Zhang, C.; Peng, T.; Nazir, M.S. A novel integrated photovoltaic power forecasting model based on variational mode decomposition and CNN-BiGRU considering meteorological variables. Electr. Power Syst. Res. 2022, 213, 108796. [Google Scholar] [CrossRef]
- Wang, R.; Li, C.; Fu, W.; Tang, G. Deep Learning Method Based on Gated Recurrent Unit and Variational Mode Decomposition for Short-Term Wind Power Interval Prediction. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3814–3827. [Google Scholar] [CrossRef] [PubMed]
- Díaz-Vico, D.; Torres-Barrán, A.; Omari, A.; Dorronsoro, J.R. Deep Neural Networks for Wind and Solar Energy Prediction. Neural Process. Lett. 2017, 46, 829–844. [Google Scholar] [CrossRef]
- Zang, H.; Cheng, L.; Ding, T.; Cheung, K.W.; Wei, Z.; Sun, G. Day-ahead photovoltaic power forecasting approach based on deep convolutional neural networks and meta learning. Int. J. Electr. Power Energy Syst. 2020, 118, 105790. [Google Scholar] [CrossRef]
- Wang, K.; Qi, X.; Liu, H. A comparison of day-ahead photovoltaic power forecasting models based on deep learning neural network. Appl. Energy 2019, 251, 113315. [Google Scholar] [CrossRef]
- Gu, B.; Li, X.; Xu, F.; Yang, X.; Wang, F.; Wang, P. Forecasting and Uncertainty Analysis of Day-Ahead Photovoltaic Power Based on WT-CNN-BiLSTM-AM-GMM. Sustainability 2023, 15, 6538. [Google Scholar] [CrossRef]
- Agga, A.; Abbou, A.; Labbadi, M.; El Houm, Y. Short-term self consumption PV plant power production forecasts based on hybrid CNN-LSTM, ConvLSTM models. Renew. Energy 2021, 177, 101–112. [Google Scholar] [CrossRef]
- Kim, T.-Y.; Cho, S.-B. Web traffic anomaly detection using C-LSTM neural networks. Expert Syst. Appl. 2018, 106, 66–76. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar] [CrossRef]
- Bai, M.; Chen, Y.; Zhao, X.; Liu, J.; Yu, D. Deep attention ConvLSTM-based adaptive fusion of clear-sky physical prior knowledge and multivariable historical information for probabilistic prediction of photovoltaic power. Expert Syst. Appl. 2022, 202, 117335. [Google Scholar] [CrossRef]
- Qu, J.; Qian, Z.; Pei, Y. Day-ahead hourly photovoltaic power forecasting using attention-based CNN-LSTM neural network embedded with multiple relevant and target variables prediction pattern. Energy 2021, 232, 120996. [Google Scholar] [CrossRef]
- Oh, S.L.; Ng, E.Y.; San Tan, R.; Acharya, U.R. Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats. Comput. Biol. Med. 2018, 102, 278–287. [Google Scholar] [CrossRef] [PubMed]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
The Layers for DI_ACNNs | Parameters of Each Layer |
---|---|
Conv1D 11 | Filters: 64, kernel size: 2, stride: 1, activation: ‘relu’, padding: ‘same’ |
GlobalMaxPooling1D 11 | - |
Dropout 11 | Rate = 0.1 |
Conv1D 12 | Filters: 64, kernel size: 16, stride: 1, activation: ‘relu’, padding: ‘same’ |
GlobalMaxPooling1D 12 | - |
Dropout 12 | Rate = 0.1 |
Attention module 1 (Head1) | Neurons: 64, activation: ‘relu’, kernel_regularizer: l2 (0.0001) |
Attention module 1 (Head2) | Neurons: 64, activation: ‘relu’, kernel_regularizer: l2 (0.0001) |
Multiply 1 | - |
RepeatVector 1 | 8 |
Conv1D 21 | Filters: 64, kernel size: 2, stride: 1, activation: ‘relu’, padding: ‘same’ |
GlobalMaxPooling1D 21 | - |
Conv1D 22 | Filters: 64, kernel size: 16, stride: 1, activation: ‘relu’, padding: ‘same’ |
GlobalMaxPooling1D 22 | - |
Attention module 2 (Head1) | Neurons: 64, activation: ‘relu’, kernel_regularizer: l2 (0.0001) |
Attention module 2 (Head2) | Neurons: 64, activation: ‘relu’, kernel_regularizer: l2 (0.0001) |
Multiply 2 | - |
RepeatVector 2 | 8 |
Concatenate 3 | - |
Conv1D 3 | Filters: 64, kernel size: 4, stride: 1, activation: ‘relu’, padding: ‘same’ |
MaxPooling1D 3 | Pool kernel size: 2, stride: 2 |
flatten | - |
Fully connected | Neurons: 64, activation: ‘relu’, kernel_regularizer: l2 (0.0001) |
Output | Power (t + 1) |
Others | Epochs: 200; EarlyStopping monitor: ‘mse’, patience: 5, min_delta: 0.0001; batch size: 32 (plant 1), 32 (plant 2), 48 (plant 3) |
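The layer list above can be assembled into a runnable model. The sketch below follows the table’s hyperparameters (64 filters, kernel sizes 2 and 16, dropout 0.1, l2-regularized 64-unit attention heads, RepeatVector of 8, a fusion Conv1D with kernel size 4), but the exact wiring between the dual-scale convolutions, the two attention heads, and the Multiply/RepeatVector steps is not fully specified by the table, so the connections, the input window length (16), and the number of meteorological features (4) are assumptions rather than the authors’ exact implementation; dropout is also applied in both branches for symmetry, although the table lists it only for the first.

```python
from tensorflow.keras import layers, regularizers, Model

WINDOW, N_METEO = 16, 4   # assumed input window length and meteorological feature count

def dual_scale_attention_branch(x, name):
    """One input branch: dual-scale Conv1D (kernels 2 and 16) -> global max pooling
    -> double-head attention (two l2-regularized Dense heads multiplied together)
    -> RepeatVector(8), so the fused features can be convolved again downstream."""
    # Dual-scale convolution over the same input sequence
    s1 = layers.Conv1D(64, 2, strides=1, padding="same", activation="relu",
                       name=f"conv_{name}_k2")(x)
    s1 = layers.GlobalMaxPooling1D(name=f"gmp_{name}_k2")(s1)
    s1 = layers.Dropout(0.1)(s1)
    s2 = layers.Conv1D(64, 16, strides=1, padding="same", activation="relu",
                       name=f"conv_{name}_k16")(x)
    s2 = layers.GlobalMaxPooling1D(name=f"gmp_{name}_k16")(s2)
    s2 = layers.Dropout(0.1)(s2)
    feat = layers.Concatenate(name=f"scales_{name}")([s1, s2])

    # Double-head attention: two Dense heads re-weight the fused features
    head1 = layers.Dense(64, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-4),
                         name=f"att_{name}_h1")(feat)
    head2 = layers.Dense(64, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-4),
                         name=f"att_{name}_h2")(feat)
    attended = layers.Multiply(name=f"multiply_{name}")([head1, head2])
    return layers.RepeatVector(8, name=f"repeat_{name}")(attended)    # shape (8, 64)

# Input 1: historical power only; Input 2: power mixed with meteorological variables
in1 = layers.Input(shape=(WINDOW, 1), name="historical_power")
in2 = layers.Input(shape=(WINDOW, 1 + N_METEO), name="power_plus_meteo")
b1 = dual_scale_attention_branch(in1, "b1")
b2 = dual_scale_attention_branch(in2, "b2")

# Fusion of the two branches followed by the third convolution stage
z = layers.Concatenate(name="concatenate_3")([b1, b2])                # (8, 128)
z = layers.Conv1D(64, 4, strides=1, padding="same", activation="relu",
                  name="conv1d_3")(z)
z = layers.MaxPooling1D(pool_size=2, strides=2, name="maxpool_3")(z)  # (4, 64)
z = layers.Flatten()(z)
z = layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4))(z)
out = layers.Dense(1, name="power_t_plus_1")(z)                       # power(t+1)

model = Model([in1, in2], out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Training setup from the table: 200 epochs, batch size 32/32/48 per plant,
# EarlyStopping(patience=5, min_delta=1e-4) on the monitored error.
```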
Model | Plant 1 (50 MW) MAE | Plant 1 RMSE | Plant 2 (20 MW) MAE | Plant 2 RMSE | Plant 3 (35 MW) MAE | Plant 3 RMSE
---|---|---|---|---|---|---
DI_ACNNs | 0.668 | 1.782 | 0.250 | 0.699 | 0.392 | 1.063
Input 1_CNN | 0.834 | 1.989 | 0.319 | 0.814 | 0.486 | 1.079
Input 2_CNN | 0.878 | 1.970 | 0.365 | 0.797 | 0.610 | 1.230
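The improvement percentages quoted in the Conclusions can be reproduced from tables such as this one as the relative error reduction of DI_ACNNs over a baseline. A minimal check, assuming improvement = (baseline − proposed) / baseline:

```python
def improvement(baseline, proposed):
    """Relative error reduction of the proposed model over a baseline, in percent."""
    return 100.0 * (baseline - proposed) / baseline

# Plant 3 MAE, Input 2_CNN (0.610) vs. DI_ACNNs (0.392): ~35.74 %, as quoted in the Conclusions
print(f"{improvement(0.610, 0.392):.2f} %")
# Plant 1 MAE, Input 1_CNN (0.834) vs. DI_ACNNs (0.668): ~19.90 %
print(f"{improvement(0.834, 0.668):.2f} %")
```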
Model | Plant 1 (50 MW) MAE | Plant 1 RMSE | Plant 2 (20 MW) MAE | Plant 2 RMSE | Plant 3 (35 MW) MAE | Plant 3 RMSE
---|---|---|---|---|---|---
DI_ACNNs | 0.668 | 1.782 | 0.250 | 0.699 | 0.392 | 1.063
DI_ACNN (kernel 1) | 0.758 | 1.868 | 0.366 | 0.806 | 0.454 | 1.099
DI_ACNN (kernel 2) | 0.717 | 1.823 | 0.300 | 0.740 | 0.398 | 1.080
Model | Plant 1 (50 MW) MAE | Plant 1 RMSE | Plant 2 (20 MW) MAE | Plant 2 RMSE | Plant 3 (35 MW) MAE | Plant 3 RMSE
---|---|---|---|---|---|---
DI_ACNNs | 0.668 | 1.782 | 0.250 | 0.699 | 0.392 | 1.063
DI_CNNs | 0.730 | 1.780 | 0.261 | 0.706 | 0.407 | 1.075
Model | Plant 1 (50 MW) MAE | Plant 1 RMSE | Plant 1 R2 | Plant 2 (20 MW) MAE | Plant 2 RMSE | Plant 2 R2 | Plant 3 (35 MW) MAE | Plant 3 RMSE | Plant 3 R2
---|---|---|---|---|---|---|---|---|---
DI_ACNNs | 0.668 | 1.782 | 0.98 | 0.250 | 0.699 | 0.98 | 0.392 | 1.063 | 0.99
Multivar_CLSTM [30] | 0.853 | 1.844 | 0.98 | 0.274 | 0.738 | 0.98 | 0.542 | 1.160 | 0.99
Univar_CLSTM | 0.734 | 1.854 | 0.98 | 0.419 | 0.908 | 0.98 | 0.487 | 1.121 | 0.99
Multivar_CNN [30] | 0.858 | 1.818 | 0.98 | 0.284 | 0.742 | 0.98 | 0.481 | 1.122 | 0.99
Univar_CNN | 0.776 | 1.881 | 0.98 | 0.397 | 0.895 | 0.98 | 0.500 | 1.124 | 0.99