# Performance Analysis of Classification and Detection for PV Panel Motion Blur Images Based on Deblurring and Deep Learning Techniques


## Abstract


## 1. Introduction

- The study examines the climatic challenge posed by snow-covered solar panels, how it can be overcome, and the need to apply artificial intelligence methods, such as classification and detection, to address it.
- We provide comprehensive experiments and analysis on the role of upsampling in mitigating the insufficient availability of data samples and in significantly enhancing the performance of the deep learning models. In this study, the upsampling technique is applied to solar panel images.
- We propose a BIDA-CNN model for the detection and classification of images of snow-covered solar panel surfaces.
- The BIDA-CNN-based photovoltaic module classification approach is extensively assessed and validated through a series of experiments, using existing state-of-the-art deep-learning-based solutions (VGG-16, VGG-19, RESNET-18, RESNET-50, and RESNET-101) as comparative benchmarks. Moreover, the performance of the different models is examined on several metrics through a comparative study.
- The paper validates the methods not only on real-world data but also under extreme climate conditions simulated by generating motion blur.
- Finally, an RCNN model with a series-network backbone is applied to detect the solar panel modules and recognize whether they are snow-covered.
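The two preprocessing ideas listed above, upsampling the image data and simulating motion blur, can be sketched as follows. This is a minimal illustration under assumed parameters (a tiny 2 × 2 grayscale image, nearest-neighbor upsampling, and a length-2 horizontal averaging kernel), not the authors' MATLAB pipeline.

```python
# Minimal sketch (assumed parameters, not the authors' exact pipeline) of
# the two preprocessing steps: nearest-neighbor upsampling and simulated
# horizontal motion blur via a length-k averaging kernel.

def upsample(img, factor):
    """Nearest-neighbor upsampling of a 2-D grayscale image (list of lists)."""
    return [[img[i // factor][j // factor]
             for j in range(len(img[0]) * factor)]
            for i in range(len(img) * factor)]

def motion_blur(img, k):
    """Horizontal motion blur: average each pixel with its k-1 right neighbors."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[i][min(j + d, w - 1)] for d in range(k)]
            out[i][j] = sum(window) / k
    return out

tiny = [[0, 255], [255, 0]]      # hypothetical 2x2 test image
up = upsample(tiny, 2)           # enlarged to 4x4
blurred = motion_blur(up, 2)     # smeared along the horizontal axis
```

Longer kernels produce stronger smearing; in the paper the blur strength is a simulation parameter for extreme-climate conditions.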

## 2. Materials and Methods

#### 2.1. Case Study

#### 2.2. Irradiation Per Square Meter

#### 2.3. Dataset

#### 2.4. The Models and Features Extraction

where $n$ and $N$ denote the depth radius and the number of feature maps, respectively. The following equation is used to calculate the max-pooling layer:
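The max-pooling equation itself did not survive extraction; a standard textbook form, under the assumption of a square pooling window of size $k$ and stride $s$, is:

$$
y_{i,j}^{(c)} = \max_{0 \le p < k,\; 0 \le q < k} \; x_{s i + p,\, s j + q}^{(c)}
$$

where $x^{(c)}$ and $y^{(c)}$ are the input and output feature maps of channel $c$: each output activation is the maximum over its $k \times k$ input window.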

#### 2.5. Overall Proposed Model and Experimental Settings

All experiments were conducted on an Intel® Core™ i7-9750H CPU @ 2.60 GHz and an NVIDIA GeForce RTX 2060 GPU using the MATLAB package. The RCNN is based on a series network. The ImageNet package provides a variety of advanced models pretrained on the ImageNet dataset; two of these, VGG-16 and VGG-19, were used in this experiment, as pretraining can accelerate the learning of the network. These two models are also compared with the proposed model. The maximum number of epochs is fixed at 80, with the mini-batch size set to 2 due to the GPU memory limitation. The number of iterations per epoch equals the number of data samples divided by the mini-batch size, which ensures that the solar panel training images are looped through completely in each epoch. The learning rate is set to 0.0001 and the number of classes to 3.
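The iteration bookkeeping above can be sketched as follows. The helper is hypothetical; the sample count of 46 is an inference from the first-case results, where epoch 26 corresponds to 598 cumulative iterations, i.e. 23 iterations per epoch at a mini-batch size of 2.

```python
# Hypothetical sketch of the iteration bookkeeping described in the text.
# The sample count (46) is inferred, not stated in the paper.

def iterations_per_epoch(num_samples: int, mini_batch_size: int) -> int:
    """Iterations per epoch = number of data samples / mini-batch size."""
    return num_samples // mini_batch_size

def cumulative_iterations(epoch: int, num_samples: int, mini_batch_size: int) -> int:
    """Total iterations completed after `epoch` full passes over the data."""
    return epoch * iterations_per_epoch(num_samples, mini_batch_size)

# 46 inferred samples at mini-batch size 2 -> 23 iterations per epoch,
# consistent with 598 cumulative iterations at epoch 26 in the first case.
print(cumulative_iterations(26, 46, 2))
```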

#### 2.6. Evaluation Metrics

## 3. Results and Discussions

#### 3.1. Classification Performance Results

The lowest validation loss, 1.4901 × $10^{-8}$, was obtained by VGG-19. The learning rate (LR) for all the architectures is 0.0003. The lowest mean validation loss over all iterations (${\mathrm{v}}^{\mathrm{loss}}$ mean), 0.01865, was achieved by the BIDA-CNN model.

The lowest training loss is 3.5763 × $10^{-8}$, obtained by VGG-19, whereas the validation loss (${\mathrm{v}}^{\mathrm{loss}}$) obtained from VGG-19 is 2.0662 × $10^{-6}$. The learning rate (LR) for all the architectures is 0.0003. Based on ${\mathrm{v}}^{\mathrm{loss}}$ mean, the best model in this experiment is the BIDA-CNN model, achieving 0.03797. In addition, all models in this experiment recorded 100% overall precision, sensitivity, F1-score, and accuracy in testing. Achieving high accuracy does not mean that the model's predicted probability is 100%; the probabilities range from 50% to 99%. We therefore aim to propose models that increase the probability of a correct prediction through a mathematical model that improves the extraction of informative features fed into the decision layer.
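The per-class metrics reported here (sensitivity, precision, F1-score, and accuracy) can be sketched from a confusion matrix as follows. This is an illustrative reconstruction of the standard definitions, not the authors' code, and the matrix values are assumed.

```python
# Sketch of the standard per-class evaluation metrics: sensitivity (recall),
# precision, F1-score, and overall accuracy from a 3-class confusion matrix.
# The confusion-matrix values below are illustrative assumptions.

def per_class_metrics(cm, cls):
    """cm[i][j] = number of samples of true class i predicted as class j."""
    n = len(cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls][j] for j in range(n)) - tp   # missed samples of this class
    fp = sum(cm[i][cls] for i in range(n)) - tp   # other classes predicted as it
    se = tp / (tp + fn)                           # sensitivity / recall
    pr = tp / (tp + fp)                           # precision
    f1 = 2 * se * pr / (se + pr)                  # F1-score
    return se, pr, f1

def accuracy(cm):
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

# A perfectly diagonal matrix, as reported for the classifiers in testing,
# yields Se = Pr = F1 = 1 for every class and 100% accuracy.
perfect = [[30, 0, 0], [0, 30, 0], [0, 0, 30]]
```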

#### 3.2. Detection Performance Results

## 4. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References


**Figure 3.** (**a**) The sum of watt-hours per square meter for one day. (**b**) Average monthly kilowatt-hours per square meter.

**Figure 4.** Representative samples of solar panel datasets: (**a**) all snow, (**b**) no snow, and (**c**) partial.

**Figure 5.** A visual architecture representation of (**A**) visual geometry group-16 (VGG-16) and (**B**) VGG-19.

**Figure 6.** The visual architectural representation of skip connection for (**A**) residual neural network-18 (RESNET-18), (**B**) RESNET-50, and (**C**) RESNET-101.

**Figure 7.** A visual architectural representation of (**A**) residual neural network-18 (RESNET-18) with 71 layers, (**B**) RESNET-50 with 177 layers, and (**C**) RESNET-101 with 347 layers.

**Figure 8.**A visual architectural representation of our proposed blind image deblurring algorithm (BIDA) and CNN (BIDA-CNN).

**Figure 9.** The overall design of our proposed deep-learning-based model for solar panel image classification and detection: (**a**) without preprocessing, (**b**) with simulated motion blur and upsampling preprocessing, and (**c**) RCNN for solar panel image detection.

**Figure 10.** First case comparison results of accuracy and losses for VGG-16, VGG-19, RESNET-18, RESNET-50, RESNET-101, and the proposed BIDA-CNN, based on (**a**) training accuracy, (**b**) validation accuracy, (**c**) training losses, and (**d**) validation losses.

**Figure 11.** (**a**) Confusion matrix results of VGG-16, VGG-19, RESNET-18, RESNET-50, and RESNET-101 and our proposed BIDA-CNN; (**b**) predicted distribution over classes of the compared models, namely VGG-16, VGG-19, RESNET-18, RESNET-50, and RESNET-101, as well as BIDA-CNN.

**Figure 12.** First case image probabilities with a softmax function for (**a**) VGG-16, (**b**) VGG-19, (**c**) RESNET-18, (**d**) RESNET-50, and (**e**) RESNET-101, as well as (**f**) our proposed BIDA-CNN.

**Figure 13.** Second case comparison results of accuracy and losses for VGG-16, VGG-19, RESNET-18, RESNET-50, RESNET-101, and the proposed BIDA-CNN, based on (**a**) training accuracy, (**b**) validation accuracy, (**c**) training losses, and (**d**) validation losses.

**Figure 14.** (**a**) Confusion matrix results of VGG-16, VGG-19, RESNET-18, RESNET-50, RESNET-101, and our proposed BIDA-CNN; (**b**) predicted distribution over classes of the compared models VGG-16, VGG-19, RESNET-18, RESNET-50, RESNET-101, and our proposed BIDA-CNN.

**Figure 15.** Second case image probabilities with a softmax function for (**a**) VGG-16, (**b**) VGG-19, (**c**) RESNET-18, (**d**) RESNET-50, and (**e**) RESNET-101, as well as the (**f**) proposed BIDA-CNN.

| Months | Rate | kWh/m² | kWh/kWp |
|---|---|---|---|
| Jan. | 4.1% | 59.2 | 56.4 |
| Feb. | 5.1% | 73.5 | 70 |
| Mar. | 7% | 98 | 93 |
| Nov. | 5.9% | 86 | 81.9 |
| Dec. | 4% | 57.8 | 55 |

| Approaches | Case | Split | Original Dataset | Upsampling Dataset | Motion Blur Dataset |
|---|---|---|---|---|---|
| Classification | First case | Training | ✓ | | |
| | | Validation | ✓ | | |
| | | Testing | ✓ | | |
| | Second case | Training | | ✓ | |
| | | Validation | | ✓ | |
| | | Testing | | | ✓ |
| Detection | Third case | Training | | ✓ | |
| | | Validation | | ✓ | |
| | | Testing | | | ✓ |

| Models | E | I | $M^{acc}$ | $V^{acc}$ | $m^{loss}$ | $v^{loss}$ | $v^{loss}$ Mean | LR |
|---|---|---|---|---|---|---|---|---|
| VGG-16 | 26 | 598 | 100.00% | 100.00% | 5.9631 × $10^{-5}$ | 0.0001 | 0.03312 | 0.0003 |
| | 27 | 621 | 100.00% | 100.00% | 5.1497 × $10^{-6}$ | 0.0001 | | 0.0003 |
| | 28 | 644 | 100.00% | 100.00% | 1.6808 × $10^{-6}$ | 8.8603 × $10^{-5}$ | | 0.0003 |
| | 29 | 667 | 100.00% | 100.00% | 1.1682 × $10^{-6}$ | 8.7807 × $10^{-5}$ | | 0.0003 |
| | 30 | 690 | 100.00% | 100.00% | 1.4209 × $10^{-5}$ | 8.8387 × $10^{-5}$ | | 0.0003 |
| VGG-19 | 26 | 598 | 100.00% | 100.00% | 0 | 1.4901 × $10^{-8}$ | 0.02139 | 0.0003 |
| | 27 | 621 | 100.00% | 100.00% | 7.6294 × $10^{-7}$ | 1.4901 × $10^{-8}$ | | 0.0003 |
| | 28 | 644 | 100.00% | 100.00% | 7.1526 × $10^{-8}$ | 1.4901 × $10^{-8}$ | | 0.0003 |
| | 29 | 667 | 100.00% | 100.00% | 5.9605 × $10^{-8}$ | 1.4901 × $10^{-8}$ | | 0.0003 |
| | 30 | 690 | 100.00% | 100.00% | 5.4836 × $10^{-7}$ | 1.4901 × $10^{-8}$ | | 0.0003 |
| RESNET-18 | 26 | 598 | 100.00% | 100.00% | 0.0006 | 0.0141 | 0.05048 | 0.0003 |
| | 27 | 621 | 100.00% | 100.00% | 0.0008 | 0.0073 | | 0.0003 |
| | 28 | 644 | 100.00% | 100.00% | 0.0009 | 0.0101 | | 0.0003 |
| | 29 | 667 | 100.00% | 100.00% | 0.0007 | 0.0153 | | 0.0003 |
| | 30 | 690 | 100.00% | 100.00% | 0.0014 | 0.0099 | | 0.0003 |
| RESNET-50 | 26 | 598 | 100.00% | 100.00% | 0.0007 | 0.0038 | 0.02047 | 0.0003 |
| | 27 | 621 | 100.00% | 100.00% | 0.0006 | 0.0067 | | 0.0003 |
| | 28 | 644 | 100.00% | 100.00% | 0.0003 | 0.0100 | | 0.0003 |
| | 29 | 667 | 100.00% | 100.00% | 0.0013 | 0.0049 | | 0.0003 |
| | 30 | 690 | 100.00% | 100.00% | 0.0002 | 0.0040 | | 0.0003 |
| RESNET-101 | 26 | 598 | 100.00% | 100.00% | 0.0001 | 0.0017 | 0.01996 | 0.0003 |
| | 27 | 621 | 100.00% | 100.00% | 0.0003 | 0.0046 | | 0.0003 |
| | 28 | 644 | 100.00% | 100.00% | 0.0011 | 0.0061 | | 0.0003 |
| | 29 | 667 | 100.00% | 100.00% | 0.0010 | 0.0044 | | 0.0003 |
| | 30 | 690 | 100.00% | 98.75% | 0.0002 | 0.0117 | | 0.0003 |
| Proposed BIDA-CNN | 26 | 598 | 100.00% | 100.00% | 0.0003 | 0.0006 | 0.01865 | 0.0003 |
| | 27 | 621 | 100.00% | 100.00% | 0.0011 | 0.0005 | | 0.0003 |
| | 28 | 644 | 100.00% | 100.00% | 0.0047 | 0.0005 | | 0.0003 |
| | 29 | 667 | 100.00% | 100.00% | 0.0016 | 0.0005 | | 0.0003 |
| | 30 | 690 | 100.00% | 100.00% | 9.7633 × $10^{-5}$ | 0.0004 | | 0.0003 |

| Models | Classes | $S_e$ | $P_r$ | $F1$ | ACC |
|---|---|---|---|---|---|
| All Models | all snow | 1 | 1 | 1 | 100% |
| | no snow | 1 | 1 | 1 | |
| | partial | 1 | 1 | 1 | |

| Models | E | I | $M^{acc}$ | $V^{acc}$ | $m^{loss}$ | $v^{loss}$ | $v^{loss}$ Mean | LR |
|---|---|---|---|---|---|---|---|---|
| VGG-16 | 26 | 702 | 100.00% | 98.89% | 2.8537 × $10^{-5}$ | 0.0420 | 0.07938 | 0.0003 |
| | 27 | 729 | 100.00% | 96.67% | 0.0006 | 0.2663 | | 0.0003 |
| | 28 | 756 | 100.00% | 100.00% | 1.0395 × $10^{-5}$ | 1.5562 × $10^{-5}$ | | 0.0003 |
| | 29 | 783 | 100.00% | 100.00% | 0.0002 | 0.0017 | | 0.0003 |
| | 30 | 810 | 100.00% | 100.00% | 3.7628 × $10^{-5}$ | 0.0028 | | 0.0003 |
| VGG-19 | 26 | 702 | 100.00% | 100.00% | 8.4398 × $10^{-6}$ | 2.1099 × $10^{-6}$ | 0.1194 | 0.0003 |
| | 27 | 729 | 100.00% | 100.00% | 3.5763 × $10^{-8}$ | 2.3045 × $10^{-6}$ | | 0.0003 |
| | 28 | 756 | 100.00% | 100.00% | 1.7643 × $10^{-6}$ | 2.3482 × $10^{-6}$ | | 0.0003 |
| | 29 | 783 | 100.00% | 100.00% | 6.1989 × $10^{-7}$ | 2.1430 × $10^{-6}$ | | 0.0003 |
| | 30 | 810 | 100.00% | 100.00% | 0.0006 | 2.0662 × $10^{-6}$ | | 0.0003 |
| RESNET-18 | 26 | 702 | 100.00% | 100.00% | 0.0017 | 0.0360 | 0.1123 | 0.0003 |
| | 27 | 729 | 100.00% | 94.44% | 0.0001 | 0.0912 | | 0.0003 |
| | 28 | 756 | 100.00% | 100.00% | 0.0321 | 0.0219 | | 0.0003 |
| | 29 | 783 | 100.00% | 97.78% | 6.4626 × $10^{-5}$ | 0.0367 | | 0.0003 |
| | 30 | 810 | 100.00% | 96.67% | 0.0003 | 0.0509 | | 0.0003 |
| RESNET-50 | 26 | 702 | 100.00% | 96.67% | 0.0006 | 0.1680 | 0.2628 | 0.0003 |
| | 27 | 729 | 100.00% | 92.22% | 0.0057 | 0.2466 | | 0.0003 |
| | 28 | 756 | 100.00% | 92.22% | 0.0086 | 0.1849 | | 0.0003 |
| | 29 | 783 | 100.00% | 96.67% | 0.0001 | 0.1257 | | 0.0003 |
| | 30 | 810 | 100.00% | 94.44% | 4.6536 × $10^{-5}$ | 0.1453 | | 0.0003 |
| RESNET-101 | 26 | 702 | 100.00% | 92.22% | 7.6751 × $10^{-5}$ | 0.2197 | 0.2633 | 0.0003 |
| | 27 | 729 | 100.00% | 93.33% | 0.0005 | 0.1845 | | 0.0003 |
| | 28 | 756 | 100.00% | 95.56% | 0.0008 | 0.1203 | | 0.0003 |
| | 29 | 783 | 100.00% | 93.33% | 0.0019 | 0.1500 | | 0.0003 |
| | 30 | 810 | 100.00% | 93.33% | 0.0006 | 0.1504 | | 0.0003 |
| Proposed BIDA-CNN | 26 | 702 | 100.00% | 100.00% | 0.0019 | 0.0002 | 0.03797 | 0.0003 |
| | 27 | 729 | 100.00% | 100.00% | 0.0031 | 0.0002 | | 0.0003 |
| | 28 | 756 | 100.00% | 100.00% | 0.0006 | 0.0002 | | 0.0003 |
| | 29 | 783 | 100.00% | 100.00% | 0.0086 | 0.0002 | | 0.0003 |
| | 30 | 810 | 100.00% | 100.00% | 0.0017 | 0.0003 | | 0.0003 |

| Models | Classes | $S_e$ | $P_r$ | $F1$ | ACC |
|---|---|---|---|---|---|
| All Models | all snow | 1 | 1 | 1 | 100% |
| | no snow | 1 | 1 | 1 | |
| | partial | 1 | 1 | 1 | |

| Models | Classes | E | I | AP |
|---|---|---|---|---|
| VGG-16 | all snow | 30 | 19,530 | 0.00 |
| | no snow | | | 0.21 |
| | partial | | | 0.01 |
| VGG-19 | all snow | 30 | 19,530 | 0.00 |
| | no snow | | | 0.14 |
| | partial | | | 0.00 |
| Proposed BIDA-CNN | all snow | 30 | 19,530 | 0.71 |
| | no snow | | | 0.72 |
| | partial | | | 0.87 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Al-Dulaimi, A.A.; Guneser, M.T.; Hameed, A.A.; Márquez, F.P.G.; Fitriyani, N.L.; Syafrudin, M.
Performance Analysis of Classification and Detection for PV Panel Motion Blur Images Based on Deblurring and Deep Learning Techniques. *Sustainability* **2023**, *15*, 1150.
https://doi.org/10.3390/su15021150
