Review

An Overview of CNN-Based Image Analysis in Solar Cells, Photovoltaic Modules, and Power Plants

by Dávid Matusz-Kalász *, István Bodnár and Marcell Jobbágy
Institute of Physics and Electrical Engineering, University of Miskolc, 3515 Miskolc-Egyetemváros, Hungary
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(10), 5511; https://doi.org/10.3390/app15105511
Submission received: 31 March 2025 / Revised: 10 May 2025 / Accepted: 13 May 2025 / Published: 14 May 2025
(This article belongs to the Special Issue Technical Diagnostics and Predictive Maintenance)

Abstract

In this paper, we present the latest research results on the analysis of images taken during the condition assessment of solar cells and solar power plants. We aimed to summarize the most recent articles, those from 2024 and 2025. The annual volume of solar panels produced is expected to increase in the future. As imaging-based condition assessment technologies develop, convolutional neural network (CNN) models must follow this trend. In the field of real-time detection, CNN models will play an extremely important role, because the faster any potential faults are identified, the quicker the response time during manufacturing and PV plant inspections. As part of CNN implementation in large PV power plants, IR and RGB imaging modes are very useful for detecting failure sources. While IR imaging is useful for detecting heating from faults within PV panels or from nearby wiring, RGB imaging can detect mechanical defects such as broken glass panes, discoloration, and delamination. Their combined implementation thus provides a higher chance of detecting solar panel damage and PV farms’ performance degradation or possible failure, resulting in a reduction in power generation interruptions. This will also allow faster and more efficient intervention and decision-making by operators in case of problems.

1. Introduction

Conventional fossil fuel supplies could run out within a few decades [1,2,3], and their prices could easily rise in times of military and economic competition, as we have seen in recent years [4]. Humanity must do its utmost to develop and deploy renewable energy sources as widely as possible. In the last decade, solar and wind energy capacity has expanded at the fastest rate in the world [5]. In Hungary, the rapid expansion of solar energy continued in 2024. The capacity of industrial solar power plants has increased to 3478 megawatts, and nearly 260,000 residential solar systems are already in operation with a total capacity of 2362 megawatts. With an additional 260 megawatts of captive solar capacity, the total solar capacity has reached 6100 megawatts. The National Energy and Climate Plan originally set a target of 6000 megawatts by 2030 [6].
The focus of this article is to examine the possibility of supporting the quality assurance of PV modules and systems from production through installation to maintenance with the use of artificial intelligence. Nowadays, the production of photovoltaic (PV) modules is a highly automated, mechanized process that requires a certain degree of precision. However, the pressure to meet high production targets works against this precision and the quality that goes with it [7].
According to Ref. [8], in China alone, the installed solar capacity has increased by several gigawatts per year over the last decade, while the global demand has required the production and installation of hundreds of millions of solar panels per year. With such a high production volume, it is very difficult to minimize the number of faults affecting electrical performance. The quality assurance process for solar cell production includes checking the condition of the panels while they are still in the factory. Furthermore, condition assessments are necessary even after the solar system or power plant has been built [7,8,9,10,11,12].
Condition assessments may be prompted by inadequate or otherwise altered electricity generation rates, may take place during installation to collect reference data for warranty validation, or may be part of a predefined maintenance process [13,14]. Usually, a team of experts will visit the plant on the ground during a visual inspection, but the number of defects observable by the naked eye is small. To increase efficiency, the use of a thermal camera is essential [15,16]; it also provides the possibility of creating predictive models, as in the study by Parenti et al. [17]. The process can be accelerated by using a UAV (unmanned aerial vehicle) drone equipped with a thermal camera [18,19,20,21,22]. It should be noted that some power plants prohibit the use of drones for safety reasons. Thermal imaging is an excellent method for detecting hot-spot failures, bypass diode failures, and patchwork failures, and the method can be automated [23]. Once a faulty module has been identified, it is possible to measure electrical parameters and record I-V curves, and even to perform daylight outdoor electroluminescence studies, for example with special InGaAs cameras [24,25,26,27,28]. Condition surveys typically cover the entire solar plant. An all-encompassing health check allows the condition of solar cables, connectors, inverters, and transformers to be checked, in addition to the solar modules. Random inspections should be carried out every six months to once a year, and full condition surveys every three to five years. The frequency of maintenance is influenced by the climate or even by air pollution levels. Regular cleaning can reduce the effect of the latter [29].
Imperfections within PV modules can be vastly different, ranging from physical to chemical and structural anomalies. One of the most common problems is the presence of microcracks [30], which are often invisible to the naked eye. These defects usually result in a small energy loss, but if left unchecked, they can spread and lead to failures. In addition, the materials within will oxidize and corrode with time, and the EVA adhesive can delaminate as well, acting as factors that accelerate deterioration. The environment can also cause defects, such as hot spots, shading from vegetation or buildings, and surface contamination such as dust accumulation. In addition to adversely affecting the energy production of PV systems, hot spots can reduce the lifetime of PV modules [31,32]. To maximize PV power production potential, it is crucial to ensure the panels’ optimal performance and longevity.
It is also worth noting that experience has shown that damage can occur during the transport and installation of solar panels [32]. This damage can take the form of cracks and fractures that were not observed during manufacture, or the amplification of defects that were already present at the time of manufacture. Electroluminescence testing is one of the more visual ways of detecting these defects and damage. They can also be detected by measuring electrical parameters, but EL testing is one of the best ways to determine the exact location and type of defects [31]. Human interpretation and analysis of the resulting images is feasible for small panels and low sample numbers, but the mass analysis of large sample numbers and large, complex images is a lengthy and tedious human operation. Hence, the categorization and analysis of solar cell electroluminescence images using artificial intelligence has become a popular research topic in the scientific community in recent years.
Artificial intelligence (AI) is a technology that allows machines to demonstrate human-like reasoning and capabilities, such as autonomous decision-making. Through the assimilation of vast amounts of training data, AI learns to recognize speech, patterns, and trends, proactively solve problems, and predict future circumstances and events. AI is not a single technology, but a set of technologies that can be combined to perform different types of tasks. These tasks can be very specific, such as solving language challenges, providing tourist assistance, or even analyzing images [33,34].
There are several types and levels of artificial intelligence. Three main groups can be distinguished in terms of AI capabilities: narrow, general, and super-intelligent AI. In terms of levels, AI can be divided into several interrelated categories: machine learning, neural networks, deep learning, and generative AI. AI is at the forefront of robotics, computer vision, and language processing and has been applied in many sectors such as banking, healthcare, commerce, and manufacturing [33,34]. In recent years, with the evolution of AI technologies, CNNs have emerged as some of the most effective tools for crack detection. They also play a key role in improving the accuracy and efficiency of PV module inspections [31].
Convolutional neural networks (CNNs) are deep learning algorithms that are specifically designed to process grid-based data. They have revolutionized image analysis tasks, largely due to advances in the computational power of electronic devices (CPUs and GPUs) and the availability of big data on the Internet [35]. They are excellent for object recognition and detection. Furthermore, CNNs are able to detect repeating patterns and structures in images, making them suitable for identifying crack patterns. Some material properties, such as brittleness or hardness, can affect the shape and propagation of cracks. CNNs have achieved significant success not only in defect detection in solar cells and other materials science applications, but also in face and object recognition (e.g., security systems), medical image diagnostics (automatic analysis of CT and MRI images), and vehicle cameras (object identification, traffic sign recognition) [31,35,36,37].
In preparing this study, we have tried to focus on reviewing the most recent articles possible, predominantly from 2024 and 2025, supplemented for comparison by studies from 2020 to 2023 related to solar cells, solar panels, and solar power plants, as well as by well-constructed review articles. The selection criteria were designed to cover the widest possible range of CNN models and to include an ample number of classification- and detection-type models. The highlighted articles had to be well documented in terms of models and results [38,39,40,41]. The aim was to examine the typical trends in terms of the imaging types examined (EL, IR, etc.), the purpose of use of the models (detection, classification, etc.), and the subject of the study (solar cell, PV module, power plant).

2. Imaging Methods and CNN Processing

2.1. Inspection Methods of Solar Cells and Modules

Since CNNs are optimized for image analysis, the models need to be provided with some kind of record of the state of the solar panels. Furthermore, a large number of samples is needed for learning. There are basically three imaging modalities to choose from for the on-site and laboratory condition assessment of solar cells or panels:
  • Electroluminescence (EL) [42];
  • Photoluminescence (PL) [43];
  • Thermal imaging (IR) [44].
Electroluminescence should not be confused with photoluminescence, although there are similarities between the two techniques. Both are based on the phenomenon of luminescence, but they differ in the mode of excitation and the information content. In photoluminescence, the solar cell is illuminated by a laser or LED light, whose energy excites electrons to higher energy levels. When the electrons return to their ground state, they emit photons that can be observed with a suitable camera or spectrometer. The measurement provides information on the material quality and crystal-structure defects of the PV cell. Electroluminescence, on the other hand, is a phenomenon where a material emits light in response to an electric current or electric field. When a voltage is applied to the PV cell, photons are emitted. Figure 1 shows a schematic view of the EL test. Al-Otum [45] presented the difference between EL images of polycrystalline and monocrystalline solar cells in his publication. Wherever the emission is low or no photons can be detected at all, a fault in the operation of the solar cell is suspected [46,47,48,49,50]. The wavelength of the emitted photons is in the near-infrared (NIR) range between 1000 and 1300 nm, with a peak around 1150 nm in the case of crystalline Si cells. Therefore, it cannot be detected by the human eye or by digital cameras made for everyday use [51].
EL tests can provide a comprehensive and highly visual picture of the condition of an entire solar module. Nowadays, the technique can be used not only for silicon cells but also for perovskite and organic cells [52]. EL testing is beneficial when looking for structural or electrical problems, such as cracks, contact defects, or recombination anomalies. The end result of the test is a digital photograph. The EL test can be performed with the solar panel under either forward or reverse bias, and significant differences can be detected between the two methods. Forward-bias testing is more common and is therefore the standard practice for in-process quality assurance and condition assessment. Furthermore, efforts have been made to describe the relationship between EL images and the electrical parameters that can be measured [53,54,55,56,57].
Figure 2 shows the typical defects that can be identified by EL testing [45,46,47,48,49]:
(a) Crack;
(b) Fracture;
(c) Soldering failure;
(d) Hotspot;
(e) Finger interruption;
(f) Tabbing disconnection;
(g) Material defect;
(h) Edge defect (contamination during the silicon ingot growth);
(i) Corrosion;
(j) PID phenomenon;
(k) Black core;
(l) Backsheet scratch.

2.2. CNNs

The application of CNNs is already widespread within vision-based industrial inspection systems. CNN inspection tasks can be divided into defect classification and defect localization [35,38]. The analysis of EL images of solar cells and modules is an advanced image processing and machine learning task. Depending on the needs of the task, one of the following architectures has to be chosen [58,59,60,61,62]:
  • Classification: A simple process that is quick to train. The goal is to classify solar cells by quality (e.g., “defective” vs. “flawless”). Typical solutions include custom CNNs, VGG16, VGG19, ResNet50, ResNet101, MobileNetV2, and EfficientNet [63,64,65,66,67,68]; a minimal transfer-learning sketch is given after this list.
  • Fault localization (segmentation): a more complex process, which means the precise (pixel-wise) determination of the location of faults within a module. Examples include, but are not limited to, U-Net, SegNet, DeepLabV3/V3+, PSPNet, and HRNet [69].
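As an illustration of the classification branch, the following minimal Python sketch fine-tunes a pretrained MobileNetV2 backbone with Keras to label EL cell images as defective or flawless. The image size, class count, and dataset folder (el_cells/train) are illustrative assumptions, not details taken from the reviewed studies.

# Minimal transfer-learning sketch for binary EL cell classification.
# Assumptions: 224x224 RGB-converted EL crops in class subfolders under el_cells/train.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 2  # e.g., defective / flawless

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained feature extractor frozen at first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical dataset directory with one subfolder per class; a full pipeline
# would also apply the backbone's preprocessing and data augmentation.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "el_cells/train", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=5)

In practice, the frozen backbone can later be unfrozen and fine-tuned with a lower learning rate once the new classification head has converged.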
Defect localization requires greater accuracy than identification. To construct a defect localization model, and thus to find the location of a defect, an encoder and a decoder have to be used together and low-level features have to be fused together in the decoding process to generate a high-resolution feature map. The design improvements of feature extractors and image fusion applications are hot research topics in defect location. One such widely used encoder–decoder architecture is U-Net [35].
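To make the encoder–decoder idea concrete, the following simplified sketch, loosely in the U-Net spirit, shows how skip connections fuse low-level encoder features into the decoder to produce a pixel-wise defect mask. The depth, filter counts, and the 256 × 256 single-channel input are illustrative assumptions.

# Simplified U-Net-style encoder-decoder for pixel-wise defect segmentation.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input((256, 256, 1))        # single-channel EL image (assumed size)
c1 = conv_block(inputs, 16)
p1 = layers.MaxPooling2D()(c1)              # encoder downsampling
c2 = conv_block(p1, 32)
p2 = layers.MaxPooling2D()(c2)
c3 = conv_block(p2, 64)                     # bottleneck

u2 = layers.UpSampling2D()(c3)              # decoder upsampling
u2 = layers.Concatenate()([u2, c2])         # skip connection fuses low-level features
c4 = conv_block(u2, 32)
u1 = layers.UpSampling2D()(c4)
u1 = layers.Concatenate()([u1, c1])
c5 = conv_block(u1, 16)

mask = layers.Conv2D(1, 1, activation="sigmoid")(c5)  # per-pixel defect probability
unet = Model(inputs, mask)
unet.compile(optimizer="adam", loss="binary_crossentropy")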
Object detection means the automatic detection and categorization of different types of defects, e.g., with YOLOv5 or YOLOv8. The YOLO (You Only Look Once) model is also used to analyze EL images of solar cells; it represents a third direction, complementing the two main categories. It is not a classification method, because it does not just tell whether there is a defect or not. It is also not a segmentation method, because it is not pixel-accurate, but it comes close to segmentation because it also performs localization. It uses bounding boxes to determine the location of a fault, e.g., it draws a box around cracks. A big advantage is real-time processing, as it is very fast in an ideal industrial environment [70,71,72,73,74,75,76,77]. Another special category is anomaly detection, where the model identifies unknown defects that are not part of the training data. It differs significantly from traditional classification or segmentation; examples include the Variational Autoencoder (VAE), AnoGAN, and One-Class SVM [78,79].
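A hedged sketch of this detection workflow using the ultralytics YOLOv8 interface is given below; the weight file, the dataset description (pv_defects.yaml), and the image path are hypothetical placeholders rather than artifacts of the cited studies.

# Sketch of bounding-box defect detection with a YOLOv8 model (ultralytics package).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # small pretrained YOLOv8 model

# Fine-tune on a PV defect dataset described by a hypothetical data file; the
# class names (e.g., "crack", "finger_interruption") would be defined there.
model.train(data="pv_defects.yaml", epochs=50, imgsz=640)

# Inference: each detected defect is returned as a bounding box with a class
# label and a confidence score, which supports near real-time inspection.
results = model("el_module.jpg")
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)           # class id, confidence, box corners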
In defect classification, CNNs have several components that are pivotal to their function. One such component is the convolution layer, which contains and applies learnable filters (kernels) to the input data. Every filter is applied to the input image to identify particular features. The first layers are tasked with finding edges, while the later layers recognize more intricate patterns [31,36,37]. This is followed by pooling layers, which reduce the size of the data, thus reducing the computational complexity while still retaining most of the relevant information. The prevailing pooling operations are max and average pooling. Nonlinear activation functions, such as Rectified Linear Units (ReLUs), are placed on the layers’ outputs; these bring nonlinearity into the network, allowing complex relationships to be learned. In the later stages of the network, there are usually fully connected layers that connect each neuron to all neurons in the next layer, ensuring high-level decision-making [31].
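The components listed above can be assembled into a minimal custom CNN; the sketch below (with an assumed 128 × 128 single-channel input and illustrative layer sizes) stacks convolution layers with ReLU activations, max pooling, and fully connected layers for a two-class decision.

# Minimal custom CNN illustrating the building blocks described above.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input((128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),   # early layers respond to edges
    layers.MaxPooling2D(),                     # pooling shrinks the feature map
    layers.Conv2D(32, 3, activation="relu"),   # deeper layers capture richer patterns
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),       # fully connected decision stage
    layers.Dense(2, activation="softmax"),     # e.g., defective vs. flawless
])
model.summary()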
The following metrics—accuracy, precision, recall, and F1-score—play a key role in evaluating the performance of classification models, especially when the dataset is not perfectly balanced (e.g., rare defective samples vs. frequent intact ones) [44,73,77,78,80,81,82,83,84,85,86].
Accuracy gives the proportion of correct predictions (true positives and true negatives) relative to all predictions. It is a good overall measure, but it is not sufficient on its own if the data are unbalanced. The calculation of accuracy is shown in Equation (1) [78]:
$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$ (1)
Recall (or sensitivity) quantifies the proportion of actual positive cases that were correctly identified. This is important because it helps avoid overlooked faults. Equation (2) shows how to obtain the recall value [78]:
$\text{Recall} = \frac{TP}{TP + FN}$ (2)
Precision indicates the proportion of predicted positive cases that are actually positive. It is useful when false alarms (false positives) are costly (e.g., scrapping intact cells incorrectly marked as defective). Precision can be calculated from Formula (3) [78]:
$\text{Precision} = \frac{TP}{TP + FP}$ (3)
The F1-score is the harmonic mean of precision and recall, therefore balancing its two constituents. It is useful when it has to be decided which is more important: false alarms or missed faults. The F1-score can be determined following Formula (4) [78]:
$F1\text{-}score = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$ (4),
where
TP—True Positive means cases correctly identified as positive.
FP—False Positive means cases incorrectly identified as positive.
TN—True Negative represents cases correctly identified as negative.
FN—False Negative represents cases incorrectly identified as negative.
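For reference, a small Python helper that evaluates Equations (1)–(4) directly from the four confusion-matrix counts might look as follows; the example counts at the end are purely illustrative.

# Classification metrics computed from confusion-matrix counts (Equations (1)-(4)).
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1_score": f1}

# Illustrative counts only: 90 correctly flagged defects, 5 false alarms,
# 100 correctly passed cells, 10 missed defects.
print(classification_metrics(tp=90, fp=5, tn=100, fn=10))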
Intersection over Union (IoU) and Mean Average Precision (mAP) are key metrics in the evaluation of object detection and segmentation. IoU measures the overlap between predicted and ground-truth bounding boxes or masks, which is critical for localization accuracy. The mAP is the mean of the average precision values calculated for the different classes, taking into account the quality of detection at multiple IoU thresholds. These metrics are essential to quantitatively compare the performance of models such as YOLO, Faster R-CNN, Mask R-CNN, or U-Net.
The calculation of the IoU is as follows [77]:
$IoU = \frac{A_{pred} \cap A_{gt}}{A_{pred} \cup A_{gt}}$ (5),
where
$A_{pred}$: the predicted (estimated) area;
$A_{gt}$: the ground truth (real) area.
The calculation of the mAP is as follows [73]:
$mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i$ (6),
where
$N$: the number of classes;
$AP_i$: the average precision of the i-th class.
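Equations (5) and (6) can be sketched in a few lines of Python for axis-aligned bounding boxes given as (x1, y1, x2, y2); the per-class AP values passed to the averaging function are assumed to have been computed beforehand from precision–recall curves at a chosen IoU threshold.

# IoU for axis-aligned boxes (Equation (5)) and mAP as a mean of per-class APs (Equation (6)).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def mean_average_precision(ap_per_class):
    return sum(ap_per_class) / len(ap_per_class)

# Example: a predicted crack box vs. its ground truth overlaps with IoU of about 0.47.
print(iou((10, 10, 60, 60), (20, 20, 70, 70)))
print(mean_average_precision([0.85, 0.72, 0.91]))  # hypothetical per-class AP values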
CNNs require significant computational power to train, especially on large image datasets. Although small models can be run on a CPU and integrated GPU alone, the use of an external, more powerful GPU is highly recommended to drastically reduce training time. NVIDIA RTX 3060/3070 cards are recommended for development and medium-sized projects, and an RTX 3090 or A100 for advanced use; these GPUs are mentioned because the latest NVIDIA developments, especially in the 50 series, are focused on AI. On the CPU side, at least an Intel i7 or AMD Ryzen 7 with 16–32 GB of RAM is recommended. There are also cloud-based alternatives (e.g., Google Colab Pro, AWS, Azure), which are ideal for scalable resources [87].
There are several powerful environments for the development of CNNs. Among the most popular are TensorFlow v2.18 and Keras v3.0, which offer a high-level API for rapid prototyping. PyTorch v2.4 is popular in research with its more dynamic, flexible code structure. Python v3.11, the language on which they are based, is easy to learn at a beginner level, while offering the potential for intermediate to advanced deep learning applications. Google Colab v3.8 provides a cloud-based option for GPU-supported learning, while Jupyter Notebook v7.2 is excellent for interactive coding. In addition, ONNX v1.16, FastAI v2.7.14, or even the MATLAB R2024a Deep Learning Toolbox can be important tools for specific tasks [87].
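Before committing to long training runs, it is worth verifying that the chosen framework actually sees the GPU; a quick check, assuming both TensorFlow and PyTorch are installed, could look like this:

# Quick environment check for GPU visibility (assumes TensorFlow and PyTorch are installed).
import tensorflow as tf
import torch

print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))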

3. Exhibition of Recent Literature

In this section, the reader can observe the results and trends in recent CNN-based research. Overall, 27 studies were highlighted and examined in more detail, 12 from 2025 and 15 from 2024. These studies were chosen from open access journals, mainly MDPI, Elsevier, and Springer. The researchers predominantly used CNN models to solve classification and detection problems. Several studies have presented not only the analysis of cell or panel images, but also the results of condition assessments of entire power plants [68]. In such cases, IR as well as RGB images were used as samples.
RGB samples can be used to detect and catalog different faults, such as soiling (dust, bird droppings, etc.), partial or full shading, burnt spots, delamination, discoloration, glass ruptures, and cracks. Going further, the simultaneous use of RGB and IR cameras during aerial inspections has proven to be an improved plant diagnostics solution. It offers faster and easier classification of faults in comparison to using only one image type. Even though an additional RGB camera increases the cost of the UAV platform, demonstrations have shown that this cost is offset by the reduction in inspection times and labor costs [62].

3.1. Literature Examining EL Images from 2025

From 2025, 12 studies were examined in more detail, of which 6 examined EL images, 3 examined IR images, 2 examined RGB images, and 1 examined point clouds. The main data from the publications are summarized in Table 1. Karakan’s [65] study uses deep learning to detect solar cell defects. Monocrystalline and polycrystalline cells are classified into three categories (intact, cracked, broken) using seven different CNN architectures. The best accuracy was achieved by SqueezeNet: 97.82% (monocrystalline) and 96.29% (polycrystalline). The paper by Wang et al. [73] presents the MRA-YOLOv8 model, which integrates new elements (MBCANet, ResBlock, AMPDIoU) to significantly improve PV defect detection on complex backgrounds. The mAP50 of the method is 91.7% (PVEL-AD) and 69.3% (SPDI), outperforming both YOLOv8 and DETR. The study by Laot et al. [88] accelerates EL-imaging-based solar cell characterization with machine learning. They use an MLP and a modified U-Net (mU-Net) model to regress the local parameters Rs and J0. The accuracy of the MLP almost matches that of the classical NLLS algorithm but is significantly faster. The proposed approach provides an efficient solution for the fast processing of large data cubes. The study by Chen et al. [89] presents a lightweight, efficient LMFF (Lightweight Multiscale Feature Fusion) network for segmenting solar cell defects based on EL images. The network uses multilevel feature fusion and an attention mechanism to accurately detect small targets. It was validated on three databases, including mono- and polycrystalline silicon cells. The results show that the LMFF network is more efficient than classical segmentation models. A paper published by Demir [90] presents a CNN-based EL image analysis system for the automatic detection of solar cell defects. Using the RSWS classifier, accuracies of 98.17% for the two-class and 97.02% for the four-class problem were achieved on the ELPV dataset, which are outstanding results. The research work of Li et al. [69] proposes a new approach for microcrack segmentation based on EL polarization images. They collect data through three polarization channels (I, DOP, Q) and use an improved U-Net model to improve microcrack detection.

3.2. Literature Examining Other Imaging Technologies from 2025

The main data from publications from 2025 that involved IR and RGB images are summarized in Table 2. Firstly, da Silveira Junior et al. [79] compared three data augmentation methods (geometric transformations, GAN, DDPM) combined with CNNs for fault detection on IR images. The combination of DDPM + CNN achieved 89.83% accuracy for 11 fault categories, outperforming GAN and geometric transformations. The study by Qureshi et al. [66] presents fault detection based on thermal maps, especially in the context of climate change-induced overheating. The best performance was achieved with a custom CNN ensemble that diagnosed the six fault types with 100% accuracy and F1-score. The study by Thakfan and Salamah [91] presents a hybrid AI-based approach that combines thermal images and I-V curves for fault detection. The transfer learning model achieves over 98% accuracy and recall, demonstrating the benefits of using both types of data together. The study by Tang et al. [92], while not dealing with solar cells, deals with a very exciting topic: the 3D detection of lamination defects in AFP manufacturing. The PointNet++ architecture is used to detect defects from a 3D point cloud with an IoU accuracy of more than 72% and real-time (<1 s) performance. Noura et al. [68] present a two-step classification scheme that uses a combination of deep learning models (CNN, ViT, EfficientNet, etc.) to detect solar cell defects and contamination in RGB images (1131 pictures). The hierarchical model is more accurate than previous one-step approaches and performs well in real-world environments. The system separates physical and electrical damage, as well as pollution caused by dust, snow, and bird droppings. The YOLOv8-based PSA-det model by Gao et al. [77] enables fast and accurate fault detection on PV modules. The ShapeIoU metric, PSA attention mechanism, and MCFF module together resulted in mAP50 values of 87.2% (Panel-2) and 72.0% (Solar) with low latency (2.6 ms).

3.3. Literature Examining EL Images from 2024

From 2024, 15 studies were examined in detail, of which 9 examined EL images, 3 examined IR images, and a further 3 examined RGB images. The main data of the publications are summarized in Table 3. Zaman et al. [63] present a custom CNN architecture, PV-faultNet, for defect detection in EL images. The model achieved a validation accuracy of 91.67%, a precision of 91%, and a recall of 90%. It focused on the automatic detection of manufacturing defects due to thermal stress. Samrouth et al. [67] proposed a dual CNN architecture for detecting micro defects in EL images. It uses two CNN branches in a pairwise fashion—one shallow and the other deep—to extract low- and high-level features. The method is compared with several deep learning architectures (AlexNet, VGG19), and the dual network approach shows outstanding results. A paper by Ding et al. [72] presents an improved YOLOv5-based detection model using the Focal-EIoU loss function and a cascade structure to handle defects in EL images. The model achieves excellent mAP values for different YOLOv5 versions (e.g., YOLOv5x: 0.867) and improves the detection of small and rare defects. Chen et al. [75] use a YOLOv8-based attention network on EL images, targeting the detection of small defects. The model can detect different defect categories (e.g., corner, scratch, printing error), and has a mAP50 of 0.779 and an F1-score of 0.697—providing outstanding performance on industrial data. Lang and Lv’s paper [76] presents a new CCT model based on YOLOv8 that integrates a transformer module and a PSA attention mechanism to detect PV defects in EL images. The model achieved 77.9% mAP50 accuracy on the PVEL-AD data with real-time performance. Using PSA and the transformer resulted in a 17.2% improvement over the baseline model. The study by Demirci et al. [78] combines GAN-based data augmentation with fine-tuned CNN models for defect detection on EL images (a combined database of over 98,000 images). On the dataset augmented with WGAN-GP, 94.11% accuracy was achieved (ELPV test), while 93.08% accuracy was achieved on the self-collected images. The system offers a resource-efficient solution for small, unbalanced datasets. Al-Otum [82] presents a multi-model approach for the automatic classification of defects based on EL images (2624 EL images). Three CNN-based networks are considered: LwNet (a lightweight custom model) and transfer learning versions of SqueezeNet and GoogleNet. Four-class and eight-class classifications were performed on the ELPV database. The best result was achieved by the custom LwNet, with 96.2% accuracy. Yousif and Al-Milaji [93] present a hybrid deep learning model for defect detection based on EL images that combines HoG and CNN features. The model achieved better accuracy compared to six other methods. The goal is to combine manual and automatic features for more accurate PV fault detection. The research of İmak [94] proposes a CNN-PCA-SVM-based hybrid model for the automatic classification of fault classes in EL images. Using MobileNetV2, DenseNet201, and InceptionV3 networks, they achieved 92.19% accuracy, 92% precision, 90% recall, and a 91% F1-score. The method aims to optimize features with PCA and to achieve efficient fault identification.

3.4. Literature Examining Other Imaging Technologies from 2024

The main data from the publications analyzing IR and RGB images from 2024 are summarized in Table 4. The paper by Sinap and Kumtepe [18] presents a fault-detecting CNN model based on a large-scale IR image database (20,000 IR images) trained with real solar power plant images. It aims to identify 12 types of defects, such as hotspots, shadowing, and cracking. The authors investigated various image processing and hyperparameter optimization techniques (e.g., Optuna, Bayesian optimization) to improve the model performance. Gopalakrishnan et al. [95] propose the NASNet-LSTM model for PV anomaly detection based on IR images. The method uses the largest open IR dataset with 12 classes (11 fault types + 1 normal) and achieves a classification accuracy of 84.75%, demonstrating the effectiveness of the hybrid approach in detecting electrical and non-electrical faults. In the research work of Zaghdoudi and Fargani [44], a hybrid model combining CNN-based feature extraction and SVM-based classification is presented to detect 12 types of solar faults in IR images. The system learns from a large dataset of 20,000 images and achieves an outstanding 92% classification accuracy. The focus is on automated PV module health monitoring. A study by Sridharan et al. [96] investigates fault detection based on RGB images collected from UAVs using AlexNet features and a voting-based ensemble method. The two-class ensemble (SVM + KNN) achieved 98.3% accuracy, outperforming the results of the individual classifiers. The goal is to detect common visual faults (e.g., refraction, discoloration). Rodriguez-Vazquez et al. [20] implemented CenterNet-based key point detection to detect solar panels in UAV images. The goal was to use field-level automated analysis to quickly and accurately identify the position of modules in visual images, in preparation for a two-step fault analysis. The research of Ledmaoui et al. [97] applies a VGG16-based CNN model to detect faults such as dust, bird droppings, or shading in visual (RGB) images (599 images, augmented). An interface was developed with PyQt5 to support user decision-making. The system’s F1-score was 91.67%.

3.5. Model Performance Metrics

Looking more closely at the metrics describing model effectiveness, it can be seen that all the studies reported good results; the results of the 2025 publications are summarized in Table 5. Results above 98% were reported by Qureshi et al. [66], Laot et al. [88], Demir [90] (98.17% for the two-class problem), and Thakfan and Salamah [91]. Karakan [65] reported results above 95% for both poly- and monocrystalline cells. Furthermore, the highest mAP value, 91.7%, was reported by Wang et al. [73], while Noura et al. [68] reported the highest IoU value of 95%.
The results of the 2024 publications are summarized in Table 6. Results above 98% were reported by Sridharan et al. [96] and Ledmaoui et al. [97]. Al-Otum [82] reported results above 95% with LwNet (96.2% accuracy). The highest mAP values were reported by Ding et al. [72]: 85.7% (YOLOv5m), 86.5% (YOLOv5s), and 86.7% (YOLOv5x).
If one were to take a look at research before 2024, one would see that both cell- and panel-level EL images were investigated (Lin et al. [64], Hassan et al. [98]). YOLOv3-v4-v5 models were already very popular at that time, with good results reported by Starzyński et al. [19], Gao et al. [43], Binomairah et al. [70], and Shihavuddin et al. [71]. At the same time, there were examples of custom CNN development by Fu et al. [35], with 98.39% accuracy; Lin et al. [64] with 99.36% precision and 98.77% recall; and Chen et al. [81] with 95.28% accuracy. After studying the available literature and reviewing the published results, the classification and localization of cracks and fractures using CNN models can be found to be highly effective. Researchers have usually used large-scale samples to train the models; in Parikh et al. [80], in 2020, around 46,000 EL images of solar cells were used, most of which were from open access databases. There have also been studies that used images from their own collection, such as Shihavuddin et al. [71] in 2021.

4. Discussion

In most cases, the subject of application for CNN models was provided by images of solar cells. In studies by Karakan [65], Wang et al. [73], Laot et al. [88], Demir [90], Zaman et al. [63], Samrouth et al. [67], Chen et al. [75], Lang and Lv [76], Demirci et al. [78], Al-Otum [82], Yousif and Al-Milaji [93], İmak [94], and Zaghdoudi and Fargani [44], models were used for both detection and classification tasks, most often for crack defects. Photovoltaic modules were examined by Li et al. [69], Chen et al. [89], da Silveira Junior et al. [79], Qureshi et al. [66], Ding et al. [72], Sinap and Kumtepe [18], and Sridharan et al. [96]. Furthermore, the power plant-level application of CNNs was presented by Thakfan and Salamah [91], Noura et al. [68], Gao et al. [77], Gopalakrishnan et al. [95], Rodriguez-Vazquez et al. [20], and Ledmaoui et al. [97].
As part of CNN implementation in large PV power plants, IR and RGB imaging modes are very useful for detecting failure sources, and thus offer a higher chance of detecting solar panel damage and the performance degradation or possible failure of PV power plants that would otherwise result in power generation interruptions. This will also allow faster and more efficient intervention and decision-making by operators in case of problems. It is all the more important that the detection is quick and accurate. The use of UAVs has made it possible to carry out rapid condition assessments, and this could be improved further. It should also be taken into account that imaging technologies may also improve, providing researchers with better-quality databases. Data augmentation is a key tactic in fault detection, especially when it comes to deep learning used for solar cell analytics.
The photovoltaic industry is expected to grow even faster than what has been seen so far. Therefore, it is essential that the development of CNN technologies follow this trend. Real-time detection is believed to play an extremely important role in this because the faster any potential faults are identified, the quicker the response time during manufacturing and PV plant inspections. Studies by Wang et al. [73] and Gao et al. [77] numerically demonstrate that the detection of scratches and cracks remains below 85% mAP, while in the case of black core, finger interruption, and broken grid, 85% to 90% mAP can be achieved. Noura et al. [68] found that YOLO methods, unlike hierarchical classification, are well suited for detection and classification even for UAV-based images.

5. Conclusions

Over the past decade, the research community has rapidly increased its output on automated quality control systems built on deep CNN models. In our literature review, we found that research has tended to focus on supervised CNN-based models, and only a handful of studies on unsupervised CNNs were observed. Out of the 27 reviewed articles, 25 were supervised, accounting for 92.6% of the total. However, the training of any CNN model requires a large amount of data to guarantee the expected high accuracy and, ultimately, the expected energy yield. As a critical aspect, locating and subsequently replacing or repairing faulty solar panels is important because panel failures can affect the efficiency of increasingly crucial PV power plants. These facilities are built at a significant financial cost, and in many countries around the world they have a large share of electricity production, which is expected to increase in the future.
Generally, RGB images are used to identify the physical location of PV strings in the field, while IR cameras are used to find faults within the units. The performance-enhancing effect of UAV drones during fault localization must be highlighted again. The faults can then be divided into visible defects and those invisible to the naked eye. Invisible defects can be further investigated with PL or EL tests, or even by examining electrical parameters. Every step of the process can be supported with deep learning, including CNN models. Furthermore, all reviewed studies reported performance metrics (accuracy, recall, etc.) above 90% and IoU and mAP values above 70%, depending on the models used.
The reviewed articles show that YOLO models are well suited for detection tasks, but tend to underperform dedicated classifiers in pure classification. At the same time, YOLO models provide real-time data processing and are well suited for PV power plant condition assessments. It is our view that the two main topics for future work on artificial intelligence algorithms and solar cell fault detection will remain the application of new versions of YOLO algorithms and the design of custom CNN models. Furthermore, the continuation of intensive research on these two types of models could lead the scientific community to better-performing and less power-hungry methods for solar panel failure detection. As these crucial challenges are addressed, the need for automated optical examination is expected to grow in the future, both in academia and in industry.

Author Contributions

Conceptualization, I.B. and D.M.-K.; methodology, D.M.-K.; resources, D.M.-K.; data curation, D.M.-K.; writing—original draft preparation, D.M.-K. and M.J.; writing—review and editing, I.B. and M.J.; visualization, D.M.-K.; supervision, I.B.; project administration, D.M.-K.; funding acquisition, I.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

Supported by the University Research Scholarship Program of the Ministry for Culture and Innovation from the National Research, Development and Innovation Fund.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: artificial intelligence
CNN: convolutional neural network
CPU: Central Processing Unit
CT: computerized tomography
DDPM: Denoising Diffusion Probabilistic Model
EL: electroluminescence
GAN: Generative Adversarial Networks
GPU: Graphics Processing Unit
IoU: Intersection over Union
IR: infrared
LED: light-emitting diode
LMFF: Lightweight Multiscale Feature Fusion Network
mAP: Mean Average Precision
MRI: magnetic resonance imaging
NIR: near-infrared
PID: Potential Induced Degradation
PL: photoluminescence
PV: photovoltaic
ReLU: Rectified Linear Units
RGB: Red Green Blue
SVM: support vector machine
UAV: unmanned aerial vehicle
VAE: variational autoencoder
VGG: Visual Geometry Group
YOLO: You Only Look Once

References

  1. Soeder, D.J. Fossil Fuels and Climate Change. In Fracking and the Environment; Springer: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
  2. Holechek, J.L.; Geli, H.M.E.; Sawalhah, M.N.; Valdez, R. A Global Assessment: Can Renewable Energy Replace Fossil Fuels by 2050? Sustainability 2022, 14, 4792. [Google Scholar] [CrossRef]
  3. Wang, J.; Azam, W. Natural resource scarcity, fossil fuel energy consumption, and total greenhouse gas emissions in top emitting countries. Geosci. Front. 2024, 14, 101757. [Google Scholar] [CrossRef]
  4. Borowski, P.F. Mitigating Climate Change and the Development of Green Energy versus a Return to Fossil Fuels Due to the Energy Crisis in 2022. Energies 2022, 15, 9289. [Google Scholar] [CrossRef]
  5. Yolcan, O.O. World energy outlook and state of renewable energy: 10-Year evaluation. Innov. Green Dev. 2023, 2, 100070. [Google Scholar] [CrossRef]
  6. Government of Hungary. National Energy and Climate Plan of Hungary. Available online: https://cdn.kormany.hu/uploads/document/5/54/54b/54b7fc0579a1a285f81d183931bfaa7e4588b80e.pdf (accessed on 31 March 2025).
  7. Di Sabatino, M.; Hendawi, R.; Garcia, A.S. Silicon Solar Cells: Trends, Manufacturing Challenges, and AI Perspectives. Crystals 2024, 14, 167. [Google Scholar] [CrossRef]
  8. Chen, Y.; Zhou, J.; Ge, Y.; Dong, J. Uncovering the rapid expansion of photovoltaic power plants in China from 2010 to 2022 using satellite data and deep learning. Remote Sens. Environ. 2024, 305, 114100. [Google Scholar] [CrossRef]
  9. Zhang, X.; Chen, Y.; Wang, J.; Huang, Y.; Zhao, X. China’s renewable energy expansion and global implications. Renew. Sustain. Energy Rev. 2022, 161, 112386. [Google Scholar] [CrossRef]
  10. Liu, W.; Chen, W.; Zhao, X.; Xu, J.; Wang, C. Solar energy development in China—A review. Energy Rep. 2021, 7, 3816–3826. [Google Scholar] [CrossRef]
  11. Li, J.; Zhang, H.; Wang, Y.; Zhou, M. Forecasting solar power capacity in China using hybrid machine learning models. J. Clean. Prod. 2023, 398, 136394. [Google Scholar] [CrossRef]
  12. Tang, W.; Yang, Q.; Hu, X.; Yan, W. Edge Intelligence for Smart EL Images Defects Detection of PV Plants in the IoT-Based Inspection System. IEEE Internet Things J. 2023, 10, 3047–3056. [Google Scholar] [CrossRef]
  13. Iftikhar, H.; Sarquis, E.; Branco, P.J.C. Why Can Simple Operation and Maintenance (O&M) Practices in Large-Scale Grid-Connected PV Power Plants Play a Key Role in Improving Its Energy Output? Energies 2021, 14, 3798. [Google Scholar] [CrossRef]
  14. Høiaas, I.; Grujic, K.; Imenes, A.G.; Burud, I.; Olsen, E.; Belbachir, N. Inspection and condition monitoring of large-scale photovoltaic power plants: A review of imaging technologies. Renew. Sustain. Energy Rev. 2022, 161, 112353. [Google Scholar] [CrossRef]
  15. Buerhop, C.; Bommes, L.; Schlipf, J.; Pickel, T.; Fladung, A.; Peters, I.M. Infrared imaging of photovoltaic modules: A review of the state of the art and future challenges facing gigawatt photovoltaic power stations. Prog. Energy 2022, 4, 042010. [Google Scholar] [CrossRef]
  16. Garzón, A.M.; Laiton, N.; Sicachá, V.; Celeita, D.F.; Le, T.D. Smart equipment failure detection with machine learning applied to thermography inspection data in modern power systems. In Proceedings of the 2023 11th International Conference on Smart Grid (icSmartGrid), Paris, France, 4–7 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
  17. Parenti, M.; Fossa, M.; Delucchi, L. A model for energy predictions and diagnostics of large-scale photovoltaic systems based on electric data and thermal imaging of the PV fields. Renew. Sustain. Energy Rev. 2024, 206, 114858. [Google Scholar] [CrossRef]
  18. Sinap, V.; Kumtepe, A. CNN-based automatic detection of photovoltaic solar module anomalies in infrared images: A comparative study. Neural Comput. Appl. 2024, 36, 17715–17736. [Google Scholar] [CrossRef]
  19. Starzyński, J.; Zawadzki, P.; Harańczyk, D. Machine Learning in Solar Plants Inspection Automation. Energies 2022, 15, 5966. [Google Scholar] [CrossRef]
  20. Rodriguez-Vazquez, J.; Prieto-Centeno, I.; Fernandez-Cortizas, M.; Perez-Saura, D.; Molina, M.; Campoy, P. Real-Time Object Detection for Autonomous Solar Farm Inspection via UAVs. Sensors 2024, 24, 777. [Google Scholar] [CrossRef]
  21. Meribout, M.; Tiwari, V.K.; Herrera, J.P.P.; Baobaid, A.N.M.A. Solar panel inspection techniques and prospects. Measurement 2023, 209, 112466. [Google Scholar] [CrossRef]
  22. Al-Zaabi, F.H.K.; Al Washahi, A.M.S.; Al-Maaini, R.K.M.; Boddu, M.K. Advancing Solar PV Component Inspection: Early Defect Detection with UAV Based Thermal Imaging and Machine Learning. In Proceedings of the 2023 Middle East and North Africa Solar Conference (MENA-SC), Dubai, United Arab Emirates, 15–18 November 2023; pp. 1–3. [Google Scholar] [CrossRef]
  23. Hwang, H.P.-C.; Ku, C.C.-Y.; Chan, J.C.-C. Detection of Malfunctioning Photovoltaic Modules Based on Machine Learning Algorithms. IEEE Access 2021, 9, 37210–37219. [Google Scholar] [CrossRef]
  24. Dhimish, M.; Tyrrell, A.M. Optical Filter Design for Daylight Outdoor Electroluminescence Imaging of PV Modules. Photonics 2024, 11, 63. [Google Scholar] [CrossRef]
  25. Ishikawa, Y. Outdoor evaluation of photovoltaic modules using electroluminescence method. JSAP Rev. 2022, 2022, 220412. [Google Scholar] [CrossRef]
  26. Redondo-Plaza, A.; Zorita-Lamadrid, Á.L.; Alonso-Gómez, V.; Hernández-Callejo, L. Inspection techniques in photovoltaic power plants: A review of electroluminescence and photoluminescence imaging. Renew. Energies 2024, 2, 27533735241282603. [Google Scholar] [CrossRef]
  27. Guada, M.; Moretón, Á.; Rodríguez-Conde, S.; Sánchez, L.A.; Martínez, M.; Miguel González, Á.; Jiménez, J.; Pérez, L.; Parra, V.; Martínez, O. Daylight luminescence system for silicon solar panels based on a bias switching method. Energy Sci. Eng. 2020, 8, 3839–3853. [Google Scholar] [CrossRef]
  28. Santamaría, R.D.P.; Dhimish, M.; Benatto, G.A.D.R.; Kari, T.; Poulsen, P.B.; Spataru, S.V. From Indoor Electroluminescence to Outdoor Daylight Electroluminescence Imaging: A Comprehensive Review of Techniques, Advances, and AI-Driven Perspectives. Preprints 2025, 16, 2025031168. [Google Scholar] [CrossRef]
  29. Khadka, N.; Bista, A.; Adhikari, B.; Shrestha, A.; Bista, D.; Adhikary, B. Current Practices of Solar Photovoltaic Panel Cleaning System and Future Prospects of Machine Learning Implementation. IEEE Access 2020, 8, 135948–135962. [Google Scholar] [CrossRef]
  30. Rahman, M.R.; Tabassum, S.; Haque, E.; Nishat, M.M.; Faisal, F.; Hossain, E. CNN-based Deep Learning Approach for Micro-crack Detection of Solar Panels. In Proceedings of the 2021 3rd International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh, 18–19 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  31. Hassan, S.; Dhimish, M. A Survey of CNN-Based Approaches for Crack Detection in Solar PV Modules: Current Trends and Future Directions. Solar 2023, 3, 663–683. [Google Scholar] [CrossRef]
  32. Waqar Akram, M.; Li, G.; Jin, Y.; Chen, X. Failures of Photovoltaic modules and their Detection: A Review. Appl. Energy 2022, 313, 118822. [Google Scholar] [CrossRef]
  33. Lins, S.; Pandl, K.D.; Teigeler, H.; Thiebes, S.; Bayer, C.; Sunyaev, A. Artificial Intelligence as a Service. Bus. Inf. Syst. Eng. 2021, 63, 441–456. [Google Scholar] [CrossRef]
  34. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. [Google Scholar] [CrossRef]
  35. Fu, G.; Le, W.; Zhang, Z.; Li, J.; Zhu, Q.; Niu, F.; Chen, H.; Sun, F.; Shen, Y. A Surface Defect Inspection Model via Rich Feature Extraction and Residual-Based Progressive Integration CNN. Machines 2023, 11, 124. [Google Scholar] [CrossRef]
  36. Krichen, M. Convolutional Neural Networks: A Survey. Computers 2023, 12, 151. [Google Scholar] [CrossRef]
  37. Taye, M.M. Theoretical Understanding of Convolutional Neural Network: Concepts, Architectures, Applications, Future Directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
  38. Liu, Y.; Zhang, C.; Dong, X. A survey of real-time surface defect inspection methods based on deep learning. Artif. Intell. Rev. 2023, 56, 12131–12170. [Google Scholar] [CrossRef]
  39. Jha, S.B.; Babiceanu, R.F. Deep CNN-based visual defect detection: Survey of current literature. Comput. Ind. 2023, 148, 103911. [Google Scholar] [CrossRef]
  40. Tang, W.; Yang, Q.; Dai, Z.; Yan, W. Module defect detection and diagnosis for intelligent maintenance of solar photovoltaic plants: Techniques, systems and perspectives. Energy 2024, 297, 131222. [Google Scholar] [CrossRef]
  41. Zheng, X.; Zheng, S.; Kong, Y.; Chen, J. Recent advances in surface defect inspection of industrial products using deep learning techniques. Int. J. Adv. Manuf. Technol. 2021, 113, 35–58. [Google Scholar] [CrossRef]
  42. Diethelm, M.; Penninck, L.; Regnat, M.; Offermans, T. Finite Element Modeling for Analysis of Electroluminescence and Infrared Images of Thin-Film Solar Cells. Sol. Energy 2020, 197, 408–416. [Google Scholar] [CrossRef]
  43. Gao, M.; Xie, Y.; Song, P.; Qian, J.; Sun, X.; Liu, J. A Definition Rule for Defect Classification and Grading of Solar Cells Photoluminescence Feature Images and Estimation of CNN-Based Automatic Defect Detection Method. Crystals 2023, 13, 819. [Google Scholar] [CrossRef]
  44. Rachid, Z.; Nadir, F.A. Hybrid CNN-SVM Model for High-Accuracy Defect Detection in PV Modules Using Infrared Images. Alger. J. Renew. Energy Sustain. Dev. 2024, 6, 44–52. [Google Scholar]
  45. Al-Otum, H.M. Deep learning-based automated defect classification in Electroluminescence images of solar panels. Adv. Eng. Inform. 2023, 58, 102147. [Google Scholar] [CrossRef]
  46. Puranik, V.E.; Kumar, R.; Gupta, R. Progress in module level quantitative electroluminescence imaging of crystalline silicon PV module: A review. Solar Energy 2024, 264, 111994. [Google Scholar] [CrossRef]
  47. Ory, D.; Paul, N.; Lombez, L. Extended Quantitative Characterization of Solar Cell from Calibrated Voltage-Dependent Electroluminescence Imaging. J. Appl. Phys. 2021, 129, 043106. [Google Scholar] [CrossRef]
  48. Jia, Y.; Wang, Y.; Hu, X.; Xu, J.; Chen, S. Diagnosing Breakdown Mechanisms in Monocrystalline Silicon Solar Cells via Electroluminescence Imaging. Sol. Energy 2021, 221, 300–309. [Google Scholar] [CrossRef]
  49. Chen, X.; Karin, T.; Jain, A. Automated Defect Identification in Electroluminescence Images of Solar Modules. Sol. Energy 2022, 239, 380–392. [Google Scholar] [CrossRef]
  50. Zebari, D.A.; Al-Waisy, A.S.; Ibrahim, D.A. Identifying Defective Solar Cells in Electroluminescence Images Using Deep Feature Representations. PeerJ Comput. Sci. 2022, 8, e992. [Google Scholar] [CrossRef]
  51. Zafirovska, I. Line Scan Photoluminescence and Electroluminescence Imaging of Silicon Solar Cells and Modules. Ph.D. Thesis, University of New South Wales, Sydney, NSW, Australia, 2019. Available online: https://www.researchgate.net/publication/339676573_Line_scan_photoluminescence_and_electroluminescence_imaging_of_silicon_solar_cells_and_modules (accessed on 11 March 2025).
  52. Planes, E.; Spalla, M.; Juillard, S.; Perrin, L. Absolute Quantification of Photo-/Electroluminescence Imaging for Solar Cells: Definition and Application to Organic and Perovskite Devices. ACS Appl. Electron. Mater. 2019, 1, 1372–1379. [Google Scholar] [CrossRef]
  53. Rajput, A.S.; Zhang, Y.; Ho, J.W.; Aberle, A.G. Comparative Study of the Electrical Parameters of Individual Solar Cells in a c-Si Module Extracted Using Indoor and Outdoor Electroluminescence Imaging. IEEE J. Photovolt. 2020, 10, 1125–1132. [Google Scholar] [CrossRef]
  54. Xi, X.; Sun, Q.; Shao, J.; Liu, G.; Yang, G.; Zhu, B. A real-time monitoring method of potential-induced degradation shunts for crystalline silicon solar cells. J. Renew. Sustain. Energy 2025, 17, 013502. [Google Scholar] [CrossRef]
  55. Mateo Romero, H.F. Employing Artificial Intelligence Techniques for the Estimation of Energy Production in Photovoltaic Solar Cells Based on Electroluminescence Images. Ph.D. Thesis, Universidad de Valladolid, Valladolid, Spain, 2024. Available online: http://uvadoc.uva.es/handle/10324/71657 (accessed on 11 March 2025).
  56. Colvin, D.J.; Schneller, E.J.; Davis, K.O. Cell Dark Current–Voltage from Non-Calibrated Module Electroluminescence Image Analysis. Sol. Energy 2022, 250, 1204–1214. [Google Scholar] [CrossRef]
  57. Daher, D.H.; Mathieu, A.; Abdallah, A.; Mouhoumed, D.; Logerais, P.-O.; Gaillard, L.; Ménézo, C. Photovoltaic failure diagnosis using imaging techniques and electrical characterization. EPJ Photovolt. 2024, 15, 25. [Google Scholar] [CrossRef]
  58. Barraz, Z.; Sebari, I.; Ait El Kadi, K.; Ait Abdelmoula, I. Towards a Holistic Approach for UAV-Based Large-Scale Photovoltaic Inspection: A Review on Deep Learning and Image Processing Techniques. Technologies 2025, 13, 117. [Google Scholar] [CrossRef]
  59. Ghahremani, A.; Adams, S.D.; Norton, M.; Khoo, S.Y.; Kouzani, A.Z. Advancements in AI-Driven detection and localisation of solar panel defects. Adv. Eng. Inform. 2025, 64, 103104. [Google Scholar] [CrossRef]
  60. Jalal, M.; Khalil, I.U.; Haq, A.U. Deep learning approaches for visual faults diagnosis of photovoltaic systems: State-of-the-Art review. Results Eng. 2024, 23, 102622. [Google Scholar] [CrossRef]
  61. Balachandran, G.B.; Devisridhivyadharshini, M.; Ramachandran, M.E.; Santhiya, R. Comparative investigation of imaging techniques, pre-processing and visual fault diagnosis using artificial intelligence models for solar photovoltaic system—A comprehensive review. Measurement 2024, 232, 114683. [Google Scholar] [CrossRef]
  62. Michail, A.; Livera, A.; Tziolis, G.; Carús Candás, J.L.; Fernandez, A.; Antuña Yudego, E.; Fernández Martínez, D.; Antonopoulos, A.; Tripolitsiotis, A.; Partsinevelos, P.; et al. A comprehensive review of unmanned aerial vehicle-based approaches to support photovoltaic plant diagnosis. Heliyon 2024, 10, e23983. [Google Scholar] [CrossRef]
  63. Zaman, E.E.; Khanam, R. PV-faultNet: Optimized CNN Architecture to detect defects resulting efficient PV production. arXiv 2024. [Google Scholar] [CrossRef]
  64. Lin, K.-M.; Lin, H.-H.; Lin, Y.-T. Development of a CNN-based hierarchical inspection system for detecting defects on electroluminescence images of single-crystal silicon photovoltaic modules. Mater. Today Commun. 2022, 31, 103796. [Google Scholar] [CrossRef]
  65. Karakan, A. Detection of Defective Solar Panel Cells in Electroluminescence Images with Deep Learning. Sustainability 2025, 17, 1141. [Google Scholar] [CrossRef]
  66. Qureshi, U.R.; Rashid, A.; Altini, N.; Bevilacqua, V.; La Scala, M. Explainable Intelligent Inspection of Solar Photovoltaic Systems with Deep Transfer Learning: Considering Warmer Weather Effects Using Aerial Radiometric Infrared Thermography. Electronics 2025, 14, 755. [Google Scholar] [CrossRef]
  67. Samrouth, K.; Nazir, S.; Bakir, N.; Khodor, N. Dual Cnn for Photovoltaic Electroluminescence Images Microcrack Detection. Available online: https://ssrn.com/abstract=5015797 (accessed on 20 March 2025).
  68. Noura, H.N.; Chahine, K.; Bassil, J.; Chaaya, J.A.; Salman, O. Efficient combination of deep learning models for solar panel damage and soiling detection. Measurement 2025, 251, 117185. [Google Scholar] [CrossRef]
  69. Li, W.; Wang, F.; Sun, Z. Semantic segmentation method of photovoltaic cell microcracks based on EL polarization imaging. Sol. Energy 2025, 291, 113364. [Google Scholar] [CrossRef]
  70. Binomairah, A.; Abdullah, A.; Khoo, B.E.; Mahdavipour, Z.; Teo, T.W.; Noor, N.S.M.; Abdullah, M.Z. Detection of microcracks and dark spots in monocrystalline PERC cells using photoluminescence imaging and YOLO-based CNN with spatial pyramid pooling. EPJ Photovolt. 2022, 13, 27. [Google Scholar] [CrossRef]
  71. Shihavuddin, A.S.M.; Rashid, M.R.A.; Maruf, H.; Hasan, M.A.; Ul Haq, M.A.; Ashique, R.H.; Al Mansur, A. Image based surface damage detection of renewable energy installations using a unified deep learning approach. Energy Rep. 2021, 7, 4566–4576. [Google Scholar] [CrossRef]
  72. Ding, S.; Jing, W.; Chen, H.; Chen, C. Yolo Based Defects Detection Algorithm for EL in PV Modules with Focal and Efficient IoU Loss. Appl. Sci. 2024, 14, 7493. [Google Scholar] [CrossRef]
  73. Wang, N.; Huang, S.; Liu, X.; Wang, Z.; Liu, Y.; Gao, Z. MRA-YOLOv8: A Network Enhancing Feature Extraction Ability for Photovoltaic Cell Defects. Sensors 2025, 25, 1542. [Google Scholar] [CrossRef]
  74. Lin, H.-H.; Dandage, H.K.; Lin, K.-M.; Lin, Y.-T.; Chen, Y.-J. Efficient Cell Segmentation from Electroluminescent Images of Single-Crystalline Silicon Photovoltaic Modules and Cell-Based Defect Identification Using Deep Learning with Pseudo-Colorization. Sensors 2021, 21, 4292. [Google Scholar] [CrossRef]
  75. Chen, S.; Lu, Y.; Qin, G.; Hou, X. Polycrystalline silicon photovoltaic cell defects detection based on global context information and multi-scale feature fusion in electroluminescence images. Mater. Today Commun. 2024, 41, 110627. [Google Scholar] [CrossRef]
  76. Lang, D.; Lv, Z. A PV cell defect detector combined with transformer and attention mechanism. Sci. Rep. 2024, 14, 20671. [Google Scholar] [CrossRef]
  77. Gao, Y.; Pang, C.; Zeng, X.; Jiang, P. A Single-Stage Photovoltaic Module Defect Detection Method Based on Optimized YOLOv8. IEEE Access 2025, 13, 27805–27817. [Google Scholar] [CrossRef]
  78. Demirci, M.Y.; Beşli, N.; Gümüşçü, A. An improved hybrid solar cell defect detection approach using Generative Adversarial Networks and weighted classification. Expert Syst. Appl. 2024, 252, 124230. [Google Scholar] [CrossRef]
  79. da Silveira Junior, C.R.; Sousa, C.E.R.; Fonseca Alves, R.H. Automatic Fault Classification in Photovoltaic Modules Using Denoising Diffusion Probabilistic Model, Generative Adversarial Networks, and Convolutional Neural Networks. Energies 2025, 18, 776. [Google Scholar] [CrossRef]
  80. Parikh, H.R.; Buratti, Y.; Spataru, S.; Villebro, F.; Reis Benatto, G.A.D.; Poulsen, P.B.; Wendlandt, S.; Kerekes, T.; Sera, D.; Hameiri, Z. Solar Cell Cracks and Finger Failure Detection Using Statistical Parameters of Electroluminescence Images and Machine Learning. Appl. Sci. 2020, 10, 8834. [Google Scholar] [CrossRef]
  81. Chen, H.; Pang, Y.; Hu, Q.; Liu, K. Solar cell surface defect inspection based on multispectral convolutional neural network. J. Intell. Manuf. 2020, 31, 453–468. [Google Scholar] [CrossRef]
  82. Al-Otum, H.M. Classification of anomalies in electroluminescence images of solar PV modules using CNN-based deep learning. Sol. Energy 2024, 278, 112803. [Google Scholar] [CrossRef]
  83. Lo, C.-M.; Lin, T.-Y. Automated optical inspection based on synthetic mechanisms combining deep learning and machine learning. J. Intell. Manuf. 2024. [Google Scholar] [CrossRef]
  84. Lian, J.; Wang, L.; Liu, T.; Ding, X.; Yu, Z. Automatic visual inspection for printed circuit board via novel Mask R-CNN in smart city applications. Sustain. Energy Technol. Assess. 2021, 44, 1010320. [Google Scholar] [CrossRef]
  85. Moon, I.Y.; Lee, H.W.; Kim, S.-J.; Oh, Y.-S.; Jung, J.; Kang, S.-H. Analysis of the Region of Interest According to CNN Structure in Hierarchical Pattern Surface Inspection Using CAM. Materials 2021, 14, 2095. [Google Scholar] [CrossRef]
  86. Balzategui, J.; Eciolaza, L.; Maestro-Watson, D. Anomaly Detection and Automatic Labeling for Solar Cell Quality Inspection Based on Generative Adversarial Network. Sensors 2021, 21, 4361. [Google Scholar] [CrossRef]
  87. Bharadiya, J.P. A Comparative Study of Business Intelligence and Artificial Intelligence with Big Data Analytics. Am. J. Artif. Intell. 2023, 7, 24–30. [Google Scholar] [CrossRef]
  88. Laot, E.; Puel, J.-B.; Guillemoles, J.-F.; Ory, D. Physics-Based Machine Learning Electroluminescence Models for Fast yet Accurate Solar Cell Characterization. Prog. Photovolt. Res. Appl. 2025. [Google Scholar] [CrossRef]
  89. Chen, X.; Zhang, L.; Chen, X.; Cen, Y.; Zhang, L.; Zhang, F. A lightweight multiscale feature fusion network for solar cell defect detection. Comput. Mater. Contin. 2025, 82, 521–542. [Google Scholar] [CrossRef]
  90. Demir, F. Enhancing Defect Classification in Solar Panels with Electroluminescence Imaging and Advanced Machine Learning Strategies. IEEE Access 2025, 13, 58481–58495. [Google Scholar] [CrossRef]
  91. Thakfan, A.; Bin Salamah, Y. Development and Performance Evaluation of a Hybrid AI-Based Method for Defects Detection in Photovoltaic Systems. Energies 2025, 18, 812. [Google Scholar] [CrossRef]
  92. Tang, C.; Sun, D.; Zou, J.; Xiong, Y.; Fang, G.; Zhang, W. Lay-up defects inspection for automated fiber placement with structural light scanning and deep learning. Polym. Compos. 2025, 1–11. [Google Scholar] [CrossRef]
  93. Yousif, H.; Al-Milaji, Z. Fault detection from PV images using hybrid deep learning model. Sol. Energy 2024, 267, 112207. [Google Scholar] [CrossRef]
  94. İmak, A. Automatic Classification of Defective Photovoltaic Module Cells Based on a Novel CNN-PCA-SVM Deep Hybrid Model in Electroluminescence Images. Turk. J. Sci. Technol. 2024, 19, 497–508. [Google Scholar] [CrossRef]
  95. Gopalakrishnan, S.; Wahab, N.I.A.; Veerasamy, V.; Hizam, H.; Farade, R.A. NASNet-LSTM based Deep learning Classifier for Anomaly Detection in Solar Photovoltaic Modules. J. Phys. Conf. Ser. 2024, 2777, 012006. [Google Scholar] [CrossRef]
  96. Sridharan, N.V.; Vaithiyanathan, S.; Aghaei, M. Voting based ensemble for detecting visual faults in photovoltaic modules using AlexNet features. Energy Rep. 2024, 11, 3889–3901. [Google Scholar] [CrossRef]
  97. Ledmaoui, Y.; El Maghraoui, A.; El Aroussi, M.; Saadane, R. Enhanced Fault Detection in Photovoltaic Panels Using CNN-Based Classification with PyQt5 Implementation. Sensors 2024, 24, 7407. [Google Scholar] [CrossRef]
  98. Hassan, S.; Dhimish, M. Enhancing solar photovoltaic modules quality assurance through convolutional neural network-aided automated defect detection. Renew. Energy 2023, 219, 119389. [Google Scholar] [CrossRef]
Figure 1. Schematic view of the EL test.
Figure 2. Common types of polycrystalline silicon solar cell defects: (a) crack, (b) fracture, (c) soldering failure, (d) hotspot, (e) finger interruption, (f) tabbing disconnection, (g) material defect, (h) edge defect (contamination during silicon ingot growth), (i) corrosion, (j) PID phenomenon, (k) black core, and (l) backsheet scratch.
Table 1. List of research publications examining EL images with CNN in 2025.

| Author | Reference | Date | Imaging Technology | Model | Model Application | Damage Type |
|---|---|---|---|---|---|---|
| Karakan | [65] | 2025 | EL | AlexNet, GoogleNet, MobileNet, VGG16, ResNet50, DenseNet121, SqueezeNet | classification (intact, cracked, broken) | crack, fracture |
| Wang et al. | [73] | 2025 | EL | MRA-YOLOv8 (MBCANet, ResBlock, AMPDIoU) | detection | crack, broken grid, spots |
| Laot et al. | [88] | 2025 | EL | MLP, modified U-NET (mU-NET) | regression (local Rs and J0 estimate) | dislocation, damaged tab (fingers) |
| Chen et al. | [89] | 2025 | EL | LMFF (Lightweight Multiscale Feature Fusion Network) | segmentation (precise localization of failures) | crack, black spots, broken grid |
| Demir | [90] | 2025 | EL | CNN + RSWS | classification (2 and 4 classes) | microcrack, fracture, tab interruption |
| Li et al. | [69] | 2025 | EL | improved U-Net | segmentation (microcrack) | microcrack |
Table 2. List of research publications examining other imaging technologies with CNN in 2025.

| Author | Reference | Date | Imaging Technology | Model | Model Application | Damage Type |
|---|---|---|---|---|---|---|
| da Silveira Junior et al. | [79] | 2025 | IR | CNN + GAN, DDPM | classification (11 classes) | contamination, shading, diode |
| Qureshi et al. | [66] | 2025 | IR (radiometric heat map) | MobileNetV2, InceptionV3, VGG16, CNN-ensemble (DTL models) | multi-classification, diagnostics | hotspot, heated junction box, substring, multistring, patchwork |
| Thakfan | [91] | 2025 | IR, I-V curves | ML + transfer learning | failure detection and diagnostics | surface and performance defects |
| Tang et al. | [92] | 2025 | 3D point cloud | PointNet++ | segmentation (lay-up failures) | wrinkle, bridge, gap, overlap |
| Noura et al. | [68] | 2025 | RGB | EfficientNet, ViT, YOLOv5, VGG19, ResNet50, Swin Transformer, MobileNet, ConvNext, NASNet | classification (faulty/intact, contamination/damage types) | dust, bird droppings, snow, physical and electrical injuries |
| Gao et al. | [77] | 2025 | RGB | YOLOv8 (PSA-det) | detection | scratches, broken grid, discoloration |
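Several entries in Tables 1 and 2 (the YOLOv5/YOLOv8 variants in particular) treat defect finding as single-stage object detection rather than whole-image classification. The following is a minimal sketch of that workflow using the ultralytics YOLOv8 API; the data description file pv_defects.yaml and the image name are hypothetical placeholders, not artifacts from the cited works.

```python
# Minimal detection sketch with a YOLOv8 model (ultralytics package).
# The dataset YAML, class names, and image path are placeholders.
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on a PV-defect detection dataset
# described by a standard YOLO data file (hypothetical "pv_defects.yaml").
model = YOLO("yolov8n.pt")
model.train(data="pv_defects.yaml", epochs=50, imgsz=640)

# Run inference on a single module image and list the detected defects.
results = model.predict("module_string.jpg", conf=0.25)
for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(cls_name, float(box.conf), box.xyxy.tolist())
```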
Table 3. List of research publications examining EL images with CNN in 2024.

| Author | Reference | Date | Imaging Technology | Model | Model Application | Damage Type |
|---|---|---|---|---|---|---|
| Zaman et al. | [63] | 2024 | EL | Custom CNN | classification | common failures |
| Samrouth et al. | [67] | 2024 | EL | Dual CNN (shallow + deep), VGG19, AlexNet | detection (binary) | microcrack |
| Ding et al. | [72] | 2024 | EL | YOLOv5 (m, l, x, s) + Cascade | detection, classification | 12 types: crack, dislocation, etc. |
| Chen et al. | [75] | 2024 | EL | YOLOv8 + Attention | classification, detection | corner, scratch, printing error, etc. |
| Lang | [76] | 2024 | EL | YOLOv8 + Transformer + PSA attention | failure detection | crack, fracture, shading, spots |
| Demirci et al. | [78] | 2024 | EL | GAN, VGG-16, CNN | classification | microcracks, broken cells, finger interruptions |
| Al-Otum | [82] | 2024 | EL | LwNet, SqueezeNet, GoogleNet | classification (4 and 8 classes) | crack, microcrack, break, finger interruption, disconnected cell, diode failure, soldering defect |
| Yousif | [93] | 2024 | EL | CNN + HoG (hybrid model) | classification (faulty/intact) | crack, PID, dark spots |
| İmak | [94] | 2024 | EL | CNN + PCA + SVM (MobileNetV2, DenseNet201, InceptionV3) | classification (faulty/intact) | crack, contamination, shadow, manufacturing defect |
Table 4. List of research publications examining other imaging technologies with CNN in 2024.

| Author | Reference | Date | Imaging Technology | Model | Model Application | Damage Type |
|---|---|---|---|---|---|---|
| Sinap and Kumtepe | [18] | 2024 | IR | custom CNN | anomaly detection, classification (12 classes) | cracking, hotspot, shadowing, diode fault, soiling, vegetation, offline module, etc. |
| Gopalakrishnan et al. | [95] | 2024 | IR | NASNet + LSTM | anomaly detection, classification (12 classes) | hotspot, cracking, shadowing, soiling, offline, vegetation, etc. |
| Zaghdoudi | [44] | 2024 | IR | CNN + SVM (hybrid), VGG, ResNet, ViT | classification | hotspot, cracking, shadowing, soiling, diode fault, offline module, etc. (12 classes) |
| Sridharan et al. | [96] | 2024 | RGB (UAV) | AlexNet + ensemble (SVM, KNN, J48) | classification (visual failures) | snail trail, fractures, discoloration, burn marks |
| Rodriguez-Vazquez et al. | [20] | 2024 | RGB | CenterNet-based keypoint detection | real-time solar panel detection with UAV | panel-level localization |
| Ledmaoui et al. | [97] | 2024 | RGB | CNN (VGG16 based) | anomaly classification, failure detection | dust, dirt, bird droppings, shading |
Table 5. List of research results with various models in 2025 along with published metrics.

| Author | Reference | Date | Accuracy | Precision | Recall | F1-Score | IoU/mAP |
|---|---|---|---|---|---|---|---|
| Karakan | [65] | 2025 | 97.82% (mono), 96.29% (poly) | N/A | N/A | N/A | N/A |
| Wang et al. | [73] | 2025 | N/A | N/A | N/A | N/A | mAP50: 91.7% (PVEL-AD), 69.3% (SPDI) |
| Laot et al. | [88] | 2025 | ≈99.99% (MLP), ≈99.12% (mU-NET) | N/A | N/A | N/A | N/A |
| Chen et al. | [89] | 2025 | N/A | N/A | N/A | 81.3%, 67.5%, 96.2% | IoU: 68.5%, 51.0%, 92.7% |
| Demir | [90] | 2025 | 98.17% (2 classes), 97.02% (4 classes) | N/A | N/A | N/A | N/A |
| Li et al. | [69] | 2025 | N/A | N/A | N/A | N/A | reported as better than comparison networks; no specific value given |
| da Silveira Junior et al. | [79] | 2025 | 89.83% (DDPM), 86.98% (GAN) | N/A | N/A | N/A | N/A |
| Qureshi et al. | [66] | 2025 | CNN-ensemble: 100%, MobileNetV2: 99.8% | N/A | N/A | CNN-ensemble: 1.000 | N/A |
| Thakfan | [91] | 2025 | >98% | N/A | >98% | N/A | N/A |
| Tang et al. | [92] | 2025 | N/A | N/A | N/A | N/A | IoU: >72% |
| Noura et al. | [68] | 2025 | 96.3%, 91.8% | N/A | N/A | 87% (YOLOv5s) | IoU = 95% (UNet + ASPP) |
| Gao et al. | [77] | 2025 | N/A | N/A | N/A | N/A | mAP50: 87.2% |
Table 6. List of research results with various models in 2024 along with published metrics.

| Author | Reference | Date | Accuracy | Precision | Recall | F1-Score | IoU/mAP |
|---|---|---|---|---|---|---|---|
| Zaman et al. | [63] | 2024 | 91.67% (val) | 0.91 | 0.89 | 0.90 | N/A |
| Samrouth et al. | [67] | 2024 | N/A | N/A | N/A | N/A | qualitative comparison only |
| Ding et al. | [72] | 2024 | N/A | N/A | N/A | N/A | mAP: 85.7% (YOLOv5m), 86.5% (YOLOv5s), 86.7% (YOLOv5x) |
| Chen et al. | [75] | 2024 | N/A | N/A | N/A | 0.697 | mAP50: 77.9%, mAP50:95: 49.6% |
| Lang | [76] | 2024 | N/A | N/A | N/A | N/A | mAP50: 77.9% |
| Demirci et al. | [78] | 2024 | 94.11% | 94.7% | 96.7% | 95.7% | N/A |
| Al-Otum | [82] | 2024 | LwNet: 96.2%, SqueezeNet: 93.95%, GoogleNet: 94.6% | LwNet: 95.2% | LwNet: 94.8% | LwNet: 95.0% | N/A |
| Yousif | [93] | 2024 | outperformed six earlier models; no specific value given | N/A | N/A | N/A | N/A |
| İmak | [94] | 2024 | 92.19% | 0.92 | 0.90 | 0.91 | N/A |
| Sinap and Kumtepe | [18] | 2024 | detection: 92%, fault classification: 82% | N/A | N/A | N/A | N/A |
| Gopalakrishnan et al. | [95] | 2024 | 84.75% | N/A | N/A | N/A | N/A |
| Zaghdoudi | [44] | 2024 | 92.37% | N/A | N/A | N/A | N/A |
| Sridharan et al. | [96] | 2024 | 98.30% (2 classes) | N/A | N/A | N/A | N/A |
| Rodriguez-Vazquez et al. | [20] | 2024 | N/A | N/A | N/A | N/A | IoU (keypoint detection): 85.3% |
| Ledmaoui et al. | [97] | 2024 | 91.46% | N/A | N/A | 91.67% | Specificity: 98.29% |
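The metrics gathered in Tables 5 and 6 follow their usual definitions: accuracy, precision, recall, and F1-score are computed from the classifier's confusion matrix, IoU measures the overlap between predicted and ground-truth regions, and mAP50 averages detection precision over classes at an IoU threshold of 0.5. The short sketch below shows how these values are obtained; the arrays are toy data, not results from any cited study.

```python
# How the metrics reported in Tables 5 and 6 are typically computed (toy data only).
import numpy as np

# Binary classification: 1 = defective, 0 = intact.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

# Segmentation: IoU between a predicted and a ground-truth defect mask.
pred_mask = np.zeros((8, 8), dtype=bool)
true_mask = np.zeros((8, 8), dtype=bool)
pred_mask[2:6, 2:6] = True
true_mask[3:7, 3:7] = True
iou = np.logical_and(pred_mask, true_mask).sum() / np.logical_or(pred_mask, true_mask).sum()
print(f"IoU={iou:.2f}")
```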