Article

Defect-Aware RGB Representation and Resolution-Efficient Deep Learning for Photovoltaic Failure Detection in Electroluminescence Images

1 Faculty of Automatic Control, Electronics, and Computer Science, Silesian University of Technology, 16 Akademicka Street, 44-100 Gliwice, Poland
2 Key Laboratory for Special Fiber and Fiber Sensor of Hebei Province, School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 2148; https://doi.org/10.3390/app16042148
Submission received: 13 January 2026 / Revised: 9 February 2026 / Accepted: 10 February 2026 / Published: 23 February 2026
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)

Abstract

Electroluminescence (EL) imaging is widely used for non-destructive inspection of photovoltaic (PV) cells; however, the low contrast of grayscale EL images limits the performance of automated defect detection methods. This manuscript proposes a defect-aware EL image classification framework that enhances defect visibility through local contrast enhancement and physically motivated RGB false-color mapping. Instead of simple channel replication, grayscale intensities are segmented into defect-related ranges and encoded to emphasize cracks, inactive regions, healthy silicon emission, and conductive pathways. The approach is evaluated on the public ELPV benchmark dataset using ResNet–50, EfficientNet–B0, and EfficientNet–B3 architectures at two input resolutions. The proposed representation consistently improves defect discrimination and achieves a maximum classification accuracy of 92.39%, outperforming previously reported CNN-based results on the same dataset. Notably, comparable accuracy is obtained at lower resolution, significantly reducing computational cost and inference time, which supports deployment with cheaper sensors and faster inspection pipelines. Class imbalance is addressed using focal loss, class weighting, and threshold calibration without artificial resampling, preserving realistic operating conditions. The results confirm that combining defect-aware RGB representation with resolution-efficient learning provides an accurate and computationally practical solution for EL-based PV defect detection.

1. Introduction

The global transition toward low-carbon energy systems has positioned photovoltaic (PV) technology as a central driver of renewable electricity generation. However, the long-term reliability and performance of PV modules are significantly hindered by latent structural and electrical defects that develop during manufacturing, installation, and field operation. Among these, microcracks remain one of the most critical degradation mechanisms, as they disrupt current flow, increase series resistance, and accelerate long-term power loss [1,2,3,4]. Other defect types, including finger interruptions, inactive areas, grain boundaries, and dislocation clusters, further degrade module performance by impairing charge transport pathways or reducing effective cell area [5,6,7]. A broad survey of PV degradation mechanisms shows that such defects contribute substantially to long-term performance decline, impacting both module reliability and lifecycle economics [8].
Electroluminescence (EL) imaging has emerged as one of the most effective non-destructive diagnostic tools. When a PV cell is forward biased, radiative recombination produces near-infrared emission whose spatial distribution reflects the internal crystal and metallization structure [9]. EL images therefore reveal microcracks, shunts, inactive regions, and metallization failures with far greater sensitivity than visual inspection or thermographic methods. Comparative studies confirm that EL imaging provides deeper defect insight than infrared thermography and electrical IV (current-voltage) testing, making it a key technology for quality control in both laboratory and industrial environments [10].
With the expansion of PV manufacturing and global deployment, manual interpretation of EL images has become increasingly impractical. This has motivated a surge in machine learning (ML) and deep learning (DL) techniques for automated defect detection. Early ML approaches relied on handcrafted features combined with classifiers such as logistic regression, random forests, and support vector machines. Deep convolutional neural networks (CNNs) have since demonstrated substantial improvements in detection accuracy, robustness, and feature generalization. Deitsch et al. [11] established the first widely used EL classification benchmark, and in a follow-up study [12], they introduced the ELPV dataset for supervised learning. More recent advances include CNN-based IR/EL fusion for module inspection [13], DL-based EL defect detection pipelines for industrial settings [14], and generalized deep-learning frameworks for cell-level classification [15,16]. Additional studies have explored defect identification using statistical EL parameters, segmentation-based architectures, and automatic crack detection algorithms [12,17]. Collectively, these efforts confirm the suitability of DL methods for real-time PV defect diagnosis.
Despite these advances, two critical aspects remain insufficiently explored. First, most DL pipelines treat EL images as grayscale inputs duplicated across RGB channels, ignoring the fact that defect visibility is strongly intensity-dependent and spatially heterogeneous. Such simple channel replication does not explicitly encode physical defect characteristics and may limit feature separability for fine defects such as microcracks. Second, the role of input image resolution has not been systematically analyzed. The majority of existing studies adopt a fixed input size—typically dictated by backbone compatibility or dataset preprocessing—without evaluating performance trends across multiple resolutions [18]. For example, several representative works resize EL images to a single resolution (e.g., 224 × 224 or 300 × 300) and report classification performance without discussing the impact of spatial resolution on defect detectability or computational efficiency [19]. Moreover, commonly used benchmark datasets, such as ELPV, are distributed at a fixed native resolution, which further encourages single-resolution evaluation protocols.
EL acquisition systems produce images with widely varying resolutions depending on sensor type, optics, and inspection distance; however, CNN architectures impose fixed input sizes (e.g., 224 × 224 or 300 × 300). Downsampling may suppress small defects, while higher resolutions substantially increase computational cost and inspection time. The trade-off between resolution, diagnostic accuracy, and computational efficiency therefore remains an open and practically relevant problem.
Building on our preliminary work published in [20], this paper presents an extended and refined framework for EL-based PV defect classification that jointly addresses these challenges. We introduce a defect-aware RGB representation that maps physically meaningful intensity ranges to color channels, enhancing the contrast between cracks, inactive regions, healthy conduction areas, and metallization features. This representation is combined with a systematic resolution analysis using three representative CNN architectures—ResNet–50, EfficientNet–B0, and EfficientNet–B3—evaluated at two input resolutions. Using the ELPV benchmark dataset, we demonstrate that the proposed processing enables higher accuracy than previously reported results, including outperforming the 88.42% accuracy achieved by Deitsch et al. [11] using a VGG19 regression model. Our extended pipeline, incorporating optimized preprocessing, augmentation, fine-tuning, and threshold selection, achieves up to 92.39% accuracy, offering new insights into resolution–capacity interactions and providing practical guidance for EL-based PV defect detection systems.
The remainder of this paper is organized as follows. Section 2 describes the dataset, class definition, and the defect categories considered in this study, and summarizes the main challenges of EL-based inspection. Section 3 presents the proposed preprocessing pipeline, including local contrast enhancement and the defect-aware RGB representation, and outlines the deep learning models used for classification. Section 4 reports the simulation results, including the quantitative validation of the RGB representation using a lightweight CNN, the performance comparison of deeper transfer-learning models, and the analysis of computational cost under different input resolutions. Finally, Section 5 concludes the paper with a summary of findings, limitations, and future research directions.
The objectives of this study are as follows:
  • Develop a physically interpretable defect-aware RGB representation of EL images that improves defect-feature separability compared to baseline grayscale-to-RGB mappings.
  • Quantify the impact of this representation on defect classification performance using both lightweight and deeper CNN architectures.
  • Analyze the trade-off between input resolution, classification performance, and computational cost under the constraints of the benchmark dataset.

2. Defect Representation in EL Imaging

The dataset used in this study is derived from a publicly available benchmark specifically prepared for photovoltaic (PV) defect detection research [11]. It contains 2624 EL images of crystalline silicon solar cells, all acquired under controlled laboratory conditions. Each image is provided in grayscale format with a native resolution of 300 × 300 pixels, ensuring uniform spatial dimensions across the dataset, and the dataset covers a range of defect types, including microcracks, inactive regions, broken fingers, grain-boundary darkening, and dislocation clusters. Each EL image corresponds to a single solar cell and is accompanied by a defect probability value, as defined in the benchmark dataset creation procedure [11].

2.1. Image Characteristics

Each raw sample is presented as a single-channel grayscale image with spatial resolution $X_i \in \mathbb{R}^{300 \times 300}$, where $X_i$ denotes the $i$-th image and all images belong to the set
$$\{X_1, X_2, \dots, X_N\}, \qquad N = 2624$$
Based on the defect probability, we transformed the dataset to a binary classification problem, where images with a defect probability equal to or greater than 0.5 were labeled as defective (label 1), while those below 0.5 were labeled as functional (label 0). Mathematically, the classification rule is defined as follows:
$$y_i = \begin{cases} 1, & \text{if } p_i \ge 0.5 \quad (\text{defective cell}) \\ 0, & \text{if } p_i < 0.5 \quad (\text{functional cell}) \end{cases}$$
where $p_i \in [0, 1]$ denotes the defect probability assigned to the $i$-th image, generating the labeled dataset $\mathcal{D} = \{(X_i, y_i)\}_{i=1}^{N}$.
This resulted in 1909 (72.7%) functional samples and 715 (27.3%) defective samples, reflecting a moderate class imbalance. The dataset therefore captures realistic variations in EL appearance and defect morphology and is widely recognized as a standard benchmark for evaluating automated PV defect detection algorithms.
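The binarization rule above can be sketched as follows; the probability values are illustrative, not taken from the dataset:

```python
import numpy as np

# Illustrative defect probabilities (not actual ELPV annotations)
probs = np.array([0.0, 0.33, 0.5, 0.66, 1.0])

# p_i >= 0.5 -> defective (1), otherwise functional (0)
labels = (probs >= 0.5).astype(int)
print(labels)  # -> [0 0 1 1 1]
```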

2.2. Visual Defect Taxonomy in EL Imagery

Electroluminescence imaging reveals physical degradation mechanisms as variations in emitted photon intensity, where malfunctioning regions typically exhibit localized darkening.
Table 1 summarizes common defect types observed in electroluminescence (EL) images of crystalline silicon photovoltaic cells. The original database provides only two labels for PV cells, functional and defective, even though clearly distinct types of damage are visible. We therefore decided to inspect the images manually, one by one, and assign a specific defect-type label to each defective cell. The defect categories and their characteristics were established, as illustrated in Table 2, based on two complementary sources:
  • Descriptions reported in the existing literature on EL-based PV diagnostics.
  • Qualitative inspection of the original ELPV dataset, which contains these defect patterns in the images although they are not explicitly annotated by defect type.
Microcracks typically appear as thin dark fracture-like lines in EL images and are commonly attributed to mechanical stress during handling, transport, or lamination, often leading to power loss and hotspot formation [1,3]. Inactive areas manifest as extended dark regions with little or no EL emission, indicating electrically disconnected or severely degraded regions of the cell and resulting in significant performance loss [21]. Finger breaks or interruptions are characterized by dark linear discontinuities along metallization fingers, caused by grid interruptions and associated with increased series resistance and local efficiency losses [22]. Dislocation clusters appear as localized dark granular structures and are typically linked to crystallographic defects and material impurities, leading to localized degradation [9].
Based on their radiative appearance and root causes, the principal defect categories considered in this work are summarized in Table 1 and illustrated in Figure 1.
Although different defect types are identified and analyzed to characterize the dataset and to support the physical interpretation of electroluminescence patterns, the classification task addressed in this study is formulated as a binary problem, distinguishing between functional and defective cells. In practice, photovoltaic cells often exhibit multiple defect types simultaneously, as reflected by the observed defect combinations in the dataset. For this work, all defect manifestations—whether isolated or combined—are grouped into a single faulty class, which is consistent with industrial inspection objectives where reliable fault detection is prioritized over fine-grained defect categorization. The defect-type analysis is therefore provided to justify the proposed defect-aware RGB representation and to enhance interpretability, rather than to introduce a multi-class or multi-label classification task. Extending the framework toward defect-type classification constitutes ongoing work and is outside the scope of the present study.

2.3. Challenges in Defect Visibility in EL Images

Despite the well-defined visual taxonomy of defects in EL images (Section 2.2), their reliable discrimination remains challenging due to intrinsic intensity and resolution constraints. In EL images, the observed grayscale intensity
$$X_i(p) \in [0, 255]$$
reflects spatial variations in radiative recombination. Highly emissive structures such as busbars and fingers generate consistently high-intensity responses,
$$X_i(p) \gg \mu(X_i),$$
which dominate the dynamic range and may obscure low-intensity defect signatures.
In contrast, defects such as microcracks and inactive regions often produce subtle local intensity reductions,
$$X_i(p) \approx \mu(X_i) - \sigma(X_i),$$
making them difficult to distinguish from normal texture variations in raw grayscale images.
Additionally, spatial non-uniformity across the cell prevents the use of a single global threshold for reliable separation of defective and non-defective regions, since pixels of identical intensity may belong to either defective or healthy areas depending on local conditions.
Resolution reduction further impacts defect visibility. Downsampling from the original resolution $H_0 \times W_0$ to $H \times W$ suppresses fine structural details:
$$X_i^{(H \times W)} = \mathcal{D}\left(X_i^{(H_0 \times W_0)}\right)$$
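The suppression of fine structures by downsampling can be illustrated with a toy example: a synthetic one-pixel-wide "crack" loses half its contrast after a single 2 × 2 average-pooling step (used here as a simple stand-in for the decimation operator):

```python
import numpy as np

# Synthetic bright cell (intensity 200) crossed by a one-pixel-wide dark crack
cell = np.full((8, 8), 200.0)
cell[:, 4] = 50.0

# 2x downsampling via 2x2 average pooling
down = cell.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(200 - cell.min())  # crack contrast before: 150.0 gray levels
print(200 - down.min())  # crack contrast after:  75.0 gray levels (halved)
```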

3. Detection Procedure for Cells Based on EL

The proposed defect-classification strategy is fundamentally driven by a defect-aware image representation paradigm that tightly couples intensity-to-RGB transformation with resolution adaptation to simultaneously enhance defect discriminability and control computational cost. In contrast to conventional pipelines where color conversion and resizing are treated as auxiliary or purely technical preprocessing steps, this work explicitly elevates both operations to core design variables within the learning strategy. The methodology is therefore constructed to investigate how physically meaningful color encoding and resolution selection influence feature learning, classification robustness, and computational efficiency in EL-based photovoltaic inspection.
The complete strategy is organized into four stages: defect-oriented image transformation, dataset partitioning under realistic class imbalance, transfer learning-based model training, and performance evaluation across resolutions and architectures, as summarized in Figure 2. By embedding image representation and resolution control directly into the learning pipeline, the proposed methodology provides a structured and reproducible framework for analyzing the trade-offs between defect visibility, model capacity, and computational burden.
All preprocessing and simulations were implemented in Python 3.12.12 (Google Colab environment). The color enhancement pipeline, including CLAHE contrast enhancement and defect-aware RGB mapping, was implemented using the OpenCV library. The convolutional neural network (CNN) modeling, training, and evaluation were performed using the TensorFlow/Keras framework. This implementation environment ensures reproducibility and allows full control over the preprocessing and learning pipeline.

3.1. Defect-Oriented Image Preprocessing

Each EL image was originally provided as a single-channel grayscale matrix:
$$X_i \in \mathbb{R}^{H_0 \times W_0 \times 1}, \qquad H_0 = W_0 = 300,$$
which limits the visibility of subtle defect-related structures such as microcracks or lightly degraded regions. To address this, a pseudo-color transformation is introduced, motivated by the intensity-based pseudo-color methods widely used in medical imaging to enhance anatomical interpretability (MRI and CT false-color rendering) [23,24,25]. A similar strategy is therefore proposed for photovoltaic EL images to improve feature separation and spatial perception without changing structural information.
(a) Initial grayscale-to-RGB transformation
As a first attempt, a direct grayscale-to-RGB pseudo-color transformation was applied by thresholding pixel intensities into a fixed number of color ranges. This approach was initially adopted due to its simplicity and widespread use in intensity-based visualization techniques. Visual inspection of the resulting EL images is illustrated in Figure 3.
After applying the baseline grayscale-to-RGB pseudo-color mapping, visual inspection revealed that metallic busbars were systematically highlighted using the same color range as severe defect regions. This occurs because busbars naturally exhibit high electroluminescence intensity due to their strong conductive properties. As a consequence, conductive but non-defective structures were visually encoded in a manner indistinguishable from true defect patterns.
This qualitative observation indicates that the baseline pseudo-color mapping introduces semantic ambiguity between conductive structures and actual defect regions, which is undesirable for any learning-based diagnostic system. This limitation motivated the development of a more physically meaningful, defect-aware representation strategy.
(b) Defect-aware RGB mapping
To overcome this limitation, a defect-aware transformation was developed. The process begins with local contrast enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE), defined as
$$X_i^{CLAHE} = \mathrm{CLAHE}(X_i; \alpha, \tau)$$
where $\mathrm{CLAHE}(\cdot)$ denotes the CLAHE operator, $\alpha$ is the clip limit, and $\tau$ is the tile grid size.
In this work, the parameters were fixed to
$$\alpha = 2.0, \qquad \tau = (8, 8),$$
which partition the image into 8 × 8 non-overlapping tiles, apply histogram equalization independently within each tile, and clip the local histogram at α to limit noise amplification.
Lower clip limits ($\alpha \le 1.5$) yield insufficient contrast enhancement, resulting in a reduced separation between low- and high-intensity regions,
$$\Delta I < 20 \text{ gray levels},$$
which suppresses fine-crack and low-emission defect signatures. Conversely, higher clip limits α 3.0 excessively amplify background noise and introduce artificial texture. The chosen configuration (α = 2.0, τ = 8 × 8) provides a stable compromise, enhancing local defect-related intensity variations while preserving homogeneous emission regions.
Next, rather than duplicating channels, pixel intensities were segmented into defect-related ranges:
$$t_1 = P_{20}(X_i^{CLAHE}), \qquad t_2 = P_{40}(X_i^{CLAHE}), \qquad t_3 = P_{80}(X_i^{CLAHE})$$
where $P_k$ is the $k$-th percentile.
To support the selection of the percentile thresholds, several configurations were quantitatively evaluated using the same preprocessing pipeline (CLAHE, busbar/border exclusion, and percentile computation on the active area). Table 3 summarizes the resulting pixel distributions and inter-band separability.
The configuration 10–50–90 produced the highest inter-band intensity separation (32.18 gray levels) but allocated only 9.7% of pixels to the lowest-intensity band and 10.4% to the highest-intensity band, which risks underrepresenting thin cracks and conductive structures. Conversely, the symmetric configuration 25–50–75 allocated approximately 25% of pixels to each band but reduced the overall separability (23.03) and increased the likelihood of merging healthy silicon emission into the conductive band.
The proposed configuration 20–40–80 provides a balanced compromise: it preserves sufficient pixel coverage for low-intensity defect regions (19.4%) and high-intensity conductive pathways (20.7%), maintains a physically consistent dominance of healthy emission (40.0%), and achieves stable separation between intensity bands (25.40). The low standard deviation across samples further confirms the robustness of this partitioning. This quantitative analysis supports the selection of 20–40–80 as a principled and numerically stable choice for defect-aware RGB encoding.
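The 20–40–80 partitioning can be computed directly with `numpy.percentile`; the pixel sample below is illustrative, whereas in the pipeline the percentiles are taken over the CLAHE-enhanced active area:

```python
import numpy as np

# Illustrative flattened pixel sample (uniform, for demonstration only)
rng = np.random.default_rng(1)
active = rng.integers(0, 256, size=10_000)

t1, t2, t3 = np.percentile(active, [20, 40, 80])
band = np.digitize(active, [t1, t2, t3])  # per-pixel band index 0..3

# For a uniform sample, band occupancies approach 20/20/40/20 %
fractions = np.bincount(band, minlength=4) / active.size
print(fractions)
```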
Pixels are then assigned to color bands modeling different physical regions, as presented in Table 4.
Thus, the final false-color representation is as follows:
$$X_i^{RGB}(p) = \sum_{k=1}^{4} c_k \, \mathbb{1}\left[p \in m_k\right] \in \mathbb{R}^{H_0 \times W_0 \times 3}$$
where $c_k$ is the color assigned to the $k$-th intensity band and $m_k$ the corresponding pixel mask.
This step provides physically interpretable defect highlighting, enabling CNNs to better discriminate defect features and improving the distinction between finger-related and crack-related intensity. Figure 4 illustrates samples of defective PV cell EL images after applying the adopted defect-aware RGB mapping.
Visual inspection of the transformed EL images (Figure 4) confirms that busbars are now consistently grouped within the conductive pathway class, while crack structures and inactive regions remain distinctly emphasized.
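The band-to-color assignment can be sketched as below; the palette is illustrative (the actual colors follow Table 4), and `img` stands in for a CLAHE-enhanced EL image:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(300, 300), dtype=np.uint8)  # synthetic input

# Percentile thresholds for the 20-40-80 configuration
t1, t2, t3 = np.percentile(img, [20, 40, 80])

# Hypothetical palette: low-intensity defects, degraded regions,
# healthy silicon emission, conductive pathways
palette = np.array([[255, 0, 0],
                    [255, 165, 0],
                    [0, 128, 0],
                    [0, 0, 255]], dtype=np.uint8)

band = np.digitize(img, [t1, t2, t3])  # per-pixel band index 0..3
rgb = palette[band]                    # H x W x 3 false-color image
assert rgb.shape == (300, 300, 3)
```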
A quantitative validation of the proposed representation using a lightweight CNN, whose architecture is illustrated in Figure 5, is presented in Section 4 to objectively evaluate the discriminative contribution of the RGB mapping strategy.

3.2. Resolution Standardization

In this study, image resizing is introduced as a controlled experimental variable rather than a simple architectural constraint. The objective is not limited to satisfying the input requirements of convolutional neural networks (CNNs); it is also to systematically evaluate the trade-off between defect classification performance and computational cost under different spatial resolutions.
Based on our preliminary work published in [20] conducted on the same ELPV dataset using grayscale EL images, it was demonstrated that increasing image resolution does not necessarily lead to higher classification accuracy for machine learning and lightweight CNN models. In contrast, higher resolutions were shown to significantly increase training and inference time. This observation can be expressed as follows:
$$\frac{\partial A(r)}{\partial r} \approx 0, \qquad \frac{\partial T(r)}{\partial r} > 0,$$
$$r \in \{64 \times 64,\, 96 \times 96,\, 128 \times 128,\, 224 \times 224,\, 300 \times 300\}$$
where $r$ denotes the input resolution, $A(r)$ the classification accuracy, and $T(r)$ the computational time. These results indicate that accuracy saturates with increasing resolution, while computational cost grows monotonically.
Motivated by this finding, the present work extends the analysis to strong deep learning architectures and investigates whether this resolution–performance behavior remains valid when defect-aware RGB EL representations are employed. In addition, this analysis reflects practical deployment constraints, as EL imaging systems may operate under limited sensor resolution, acquisition time, and hardware cost.
To ensure a fair and systematic comparison across architectures, all RGB-transformed EL images are resized to the operational input resolutions associated with each model. Let $R(\cdot)$ be the resizing function based on bilinear interpolation:
$$\tilde{X}_i^{RGB} = R(X_i^{RGB}, H, W), \qquad \tilde{X}_i^{RGB} \in \mathbb{R}^{224 \times 224 \times 3} \cup \mathbb{R}^{300 \times 300 \times 3}$$
where
$$(H, W) \in \{(224, 224), (300, 300)\}$$
These resolutions correspond to widely adopted configurations in modern CNN architectures and allow the assessment of classification stability under reduced spatial detail. By comparing performance and runtime across these resolutions, this study directly evaluates whether lower-resolution RGB EL input associated with reduced computational load and lower sensor cost can preserve or even enhance defect classification accuracy relative to higher-resolution alternatives.

3.3. Deep Learning-Based Defect Classification

To evaluate the effectiveness of the proposed defect-aware RGB representation and to analyze the impact of image resolution on classification performance and computational cost, three representative convolutional neural network (CNN) architectures were employed: ResNet–50, EfficientNet–B0, and EfficientNet–B3. These models were selected due to their proven robustness in industrial vision tasks, scalability across resolutions, and widespread adoption in defect detection applications [24,25].
All networks were initialized with ImageNet pre-trained weights, enabling transfer learning to accelerate convergence and reduce the dependence on large-scale labeled EL datasets [26,27]. In this study, CNNs are not merely used as classifiers, but as quantitative validation tools to assess whether the proposed preprocessing strategy improves defect discriminability under different spatial resolutions.
Let
$$X_i \in \mathbb{R}^{H_0 \times W_0 \times 1},$$
denote a raw grayscale EL image. Unlike conventional approaches that replicate the grayscale channel to satisfy CNN input requirements, this work applies a defect-aware pseudo-color transformation prior to resizing. The resulting RGB image is defined as follows:
$$X_i^{RGB} \in \mathbb{R}^{H \times W \times 3}$$
where the three channels encode defect-related intensity information derived from CLAHE-enhanced local percentile segmentation. This transformation embeds physically meaningful defect cues directly into the color space rather than duplicating intensity values.
Each CNN learns a nonlinear mapping:
$$f_\theta: X_i^{RGB} \mapsto \hat{y}, \qquad \hat{y} \in [0, 1]$$
where y ^ represents the predicted probability of the solar cell being defective. A sigmoid activation function is applied at the output layer:
$$\hat{y} = \sigma(z) = \frac{1}{1 + e^{-z}}$$
and binary labels are assigned as follows:
$$\hat{c} = \begin{cases} 1, & \text{if } \hat{y} \ge \tau \\ 0, & \text{if } \hat{y} < \tau \end{cases}$$
where τ is a tuned decision threshold. This threshold calibration step is particularly important for the imbalanced nature of the ELPV dataset, where defective samples are underrepresented.
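The score-to-label conversion amounts to a sigmoid followed by a thresholding step; the logits and the threshold value below are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([-2.0, 0.0, 1.5])   # illustrative network outputs
scores = sigmoid(logits)              # predicted defect probabilities
tau = 0.35                            # illustrative calibrated threshold
pred = (scores >= tau).astype(int)
print(pred)  # -> [0 1 1]
```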
(A) ResNet–50
Figure 6 presents the architecture of ResNet–50.
To mitigate vanishing gradients in deep models, ResNet–50 employs residual learning. A residual block computes
$$y = F(x, W) + x,$$
allowing gradient propagation through skip connections. The required input resolution is 224 × 224 × 3. The backbone outputs a deep semantic feature vector, which is passed to a customized classification head consisting of:
  • Global Average Pooling (GAP);
  • Fully connected dense layers;
  • A sigmoid output neuron.
(B) EfficientNet–B0
EfficientNet–B0 adopts a compound scaling strategy that uniformly scales network depth, width, and input resolution to achieve optimal accuracy–efficiency balance. The scaling is defined as follows:
$$d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi}$$
where $\alpha$, $\beta$, and $\gamma$ are scaling factors for, respectively, network depth (number of layers), width (number of channels per layer), and input image resolution $(H, W)$, and $\phi$ is the compound coefficient controlling overall model size. The factors satisfy
$$\alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2,$$
ensuring that each increment of $\phi$ approximately doubles the total computational cost (FLOPs) while preserving balanced growth among the network capacity dimensions.
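The FLOPs-doubling constraint can be checked numerically using the base coefficients reported in the original EfficientNet paper (α = 1.2, β = 1.1, γ = 1.15), which are not restated in the text above:

```python
# alpha * beta^2 * gamma^2 should be close to 2 so that each unit
# increase of phi roughly doubles the computational cost
alpha, beta, gamma = 1.2, 1.1, 1.15
product = alpha * beta**2 * gamma**2
print(round(product, 2))  # -> 1.92, i.e. approximately 2
```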
EfficientNet–B0 operates at an input resolution of 224 × 224 × 3, making it particularly suitable for evaluating the performance of defect-aware RGB representations under low-resolution, low-computation constraints, which are relevant for industrial deployment scenarios.
(C) EfficientNet–B3
EfficientNet–B3 extends B0 through higher compound scaling ($\phi > 0$), enabling extraction of the finer spatial features inherent in EL images. Its native input resolution is 300 × 300 × 3 [28].
In addition, when downscaled to 224 × 224, the model remains operational but loses some spatial defect granularity. Thus, both resolutions were tested to assess resolution sensitivity. The architecture of the EfficientNet–B3 model is presented in Figure 7.

3.4. Transfer Learning-Based Model Training

Each CNN architecture defines a learnable function:
$$\hat{y} = f_\theta(\tilde{X}_i^{RGB}), \qquad \hat{y} \in [0, 1]$$
where y ^ indicates the probability of a cell being defective.
Binary Focal Loss is used to address class imbalance:
$$\mathcal{L}_{focal} = -\alpha (1 - \hat{y})^{\gamma} \, y \log(\hat{y}) - (1 - \alpha) \, \hat{y}^{\gamma} (1 - y) \log(1 - \hat{y})$$
with hyperparameters $\alpha = 0.25$ and $\gamma = 2$.
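A plain NumPy sketch of the focal loss above (α = 0.25, γ = 2), useful for checking its behavior outside the training framework; the sample predictions are illustrative:

```python
import numpy as np

def binary_focal_loss(y, y_hat, alpha=0.25, gamma=2.0, eps=1e-7):
    # Elementwise focal loss, averaged over the batch
    y_hat = np.clip(y_hat, eps, 1 - eps)
    pos = -alpha * (1 - y_hat) ** gamma * y * np.log(y_hat)
    neg = -(1 - alpha) * y_hat ** gamma * (1 - y) * np.log(1 - y_hat)
    return np.mean(pos + neg)

# A confident correct prediction is penalized far less than a poor one
good = binary_focal_loss(np.array([1.0]), np.array([0.9]))
bad = binary_focal_loss(np.array([1.0]), np.array([0.3]))
assert good < bad
```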
Optimization uses the Adam optimizer [29]:
$$\theta_{t+1} = \theta_t - \eta \nabla_{\theta} \mathcal{L}(\theta_t)$$
The learning rate is automatically adjusted using the ReduceLROnPlateau callback, triggered when the validation loss stagnates.
Class imbalance is additionally addressed through class weighting:
$$\omega_c = \frac{N}{2 N_c}$$
where $N_c$ is the number of samples in class $c$.
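With the class counts reported in Section 2 (1909 functional, 715 defective), the weighting rule gives:

```python
# omega_c = N / (2 * N_c); the minority (defective) class is upweighted
N_c = {0: 1909, 1: 715}          # functional, defective
N = sum(N_c.values())            # 2624
weights = {c: N / (2 * n) for c, n in N_c.items()}
print(weights)  # defective weight ~1.83 vs. functional ~0.69
```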

3.5. Threshold Calibration and Final Decision

The network score y ^ is converted to binary class through an optimized threshold τ :
$$\hat{c} = \begin{cases} 1, & \text{if } \hat{y} \ge \tau \\ 0, & \text{otherwise} \end{cases}$$
τ is selected to maximize validation accuracy:
$$\tau^{*} = \arg\max_{\tau} \, \mathrm{Acc}_{val}(\tau)$$
This significantly improves the defective cell recall.
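The calibration step can be sketched as a simple grid search over candidate thresholds; the validation labels and scores below are illustrative:

```python
import numpy as np

def calibrate_threshold(y_true, scores, grid=np.linspace(0.05, 0.95, 19)):
    # Return the threshold that maximizes validation accuracy
    accs = [(np.mean((scores >= t).astype(int) == y_true), t) for t in grid]
    return max(accs)[1]

y_val = np.array([0, 0, 1, 1, 1])            # illustrative validation labels
s_val = np.array([0.2, 0.4, 0.5, 0.7, 0.9])  # illustrative network scores
tau = calibrate_threshold(y_val, s_val)
assert np.mean((s_val >= tau).astype(int) == y_val) == 1.0
```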

3.6. Performance Evaluation

Models are evaluated on $\mathcal{D}_{test}$ using the calibrated threshold $\tau$, with the following metrics:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
$$F_1 = \frac{2\,TP}{2\,TP + FP + FN}$$
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN}$$
where $TP$, $TN$, $FP$, and $FN$ denote true positives, true negatives, false positives, and false negatives, respectively.
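The three metrics follow directly from confusion-matrix counts; the counts below are illustrative, not the paper's results:

```python
def classification_metrics(tp, tn, fp, fn):
    # Accuracy, F1 and IoU exactly as defined above
    acc = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return acc, f1, iou

acc, f1, iou = classification_metrics(tp=80, tn=280, fp=15, fn=19)
print(round(acc, 3), round(f1, 3), round(iou, 3))  # -> 0.914 0.825 0.702
```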

4. Results and Discussions

All experiments were conducted on the defect-aware RGB-mapped EL dataset comprising 2624 electroluminescence (EL) images of crystalline silicon solar cells. The task is formulated as a binary classification problem, where each sample is assigned a label y i such that
$$y_i = \begin{cases} 1, & \text{if } p_i \ge 0.5 \\ 0, & \text{otherwise} \end{cases}$$
The dataset exhibits a moderate class imbalance (72.7% functional, 27.3% defective), which was intentionally preserved to reflect realistic industrial inspection conditions. A stratified partitioning strategy was applied to maintain class balance across all subsets:
$$(\mathcal{D}_{train}, \mathcal{D}_{val}, \mathcal{D}_{test}) = (70\%, 15\%, 15\%)$$
with
$$N_{train} = 1836, \qquad N_{val} = 394, \qquad N_{test} = 394,$$
ensuring that the defective-class proportion is preserved across subsets:
$$\frac{\sum_i y_i}{N} \approx \frac{\sum_i y_i^{train}}{N_{train}} \approx \frac{\sum_i y_i^{val}}{N_{val}} \approx \frac{\sum_i y_i^{test}}{N_{test}}$$
To ensure the reliability and generalization capability of the developed classification models, a robust evaluation strategy was employed using multiple stratified random splits of the dataset. Four independent train–validation–test partitions were generated, each preserving the original class distribution to minimize sampling bias. In addition, validation and test sets were alternated across selected experiments to eliminate the possibility of favorable data partitioning affecting performance interpretation. This repeated-split evaluation enabled a stability assessment of the obtained metrics, confirming that performance—particularly that of EfficientNet–B3—remains consistent and is not contingent on a single dataset split.
Model performance was assessed using accuracy, F 1 -score, and Intersection over Union ( I o U ).

4.1. Quantitative Validation of RGB Representation Strategy

Before evaluating deep architectures, a preliminary experiment was conducted to objectively assess whether the proposed defect-aware RGB mapping effectively improves the discriminative quality of EL representations.
A lightweight CNN was first trained using EL images transformed with a baseline grayscale-to-RGB pseudo-color mapping. When trained on 50% of the dataset, the model achieved an accuracy of 72.59%, while the defective-class F 1 -score and I o U both remained at 0.00%. The corresponding confusion matrix (Figure 8) shows that the network consistently predicted the majority class (functional) and failed to identify defective samples. This behavior indicates that the classifier primarily learned features associated with visually prominent busbars, which were incorrectly encoded as defect-like structures in the baseline RGB representation. As a result, genuine defect patterns could not be effectively distinguished from conductive elements, leading to a collapse of discriminative information.
These findings confirm that the baseline pseudo-color encoding introduces semantic ambiguity by assigning similar color representations to non-defective conductive structures and true defect regions, rendering it unsuitable for reliable defect classification.
The same lightweight CNN architecture was subsequently trained using the proposed defect-aware RGB representation. In this case, performance improved substantially, reaching an accuracy of 84.01%, a defective-class F 1 -score of 68.02%, and a defective-class I o U of 51.54%. The confusion matrix (Figure 9) demonstrates balanced predictions across both functional and defective classes, indicating that the model is now able to extract defect-relevant features effectively.
This experiment provides quantitative evidence that the proposed defect-aware transformation restores the discriminative structure lost in the baseline representation while preserving physical interpretability. On this basis, the defect-aware RGB representation was adopted as the standard input format for all subsequent experiments involving deeper neural architectures.
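As a rough illustration of the defect-aware mapping summarized in Table 4, the sketch below assigns percentile-based intensity bands to the four color classes. It is a simplification under stated assumptions: the CLAHE contrast-enhancement step is omitted, the 10–50–90 percentile thresholds follow the best configuration reported in Table 3, and the exact per-pixel thresholding is illustrative rather than the authors' implementation.

```python
import numpy as np

# color bands as described in Table 4 (dark red, orange, green, blue)
COLORS = {
    "dark_red": (150, 0, 0),    # severe cracks / inactive areas
    "orange":   (255, 140, 0),  # mild degradation and stress
    "green":    (0, 200, 80),   # healthy silicon emission
    "blue":     (30, 80, 255),  # strong conductive pathways (fingers)
}

def defect_aware_rgb(gray):
    """Map a grayscale EL cell image to the four defect-related color bands."""
    t1, t2, t3 = np.percentile(gray, [10, 50, 90])
    rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)
    rgb[gray < t1] = COLORS["dark_red"]
    rgb[(gray >= t1) & (gray < t2)] = COLORS["orange"]
    rgb[(gray >= t2) & (gray < t3)] = COLORS["green"]
    rgb[gray >= t3] = COLORS["blue"]
    return rgb

# synthetic intensity ramp standing in for an EL cell image
cell = np.linspace(0, 255, 100, dtype=np.uint8).reshape(10, 10)
mapped = defect_aware_rgb(cell)
```

On this ramp the darkest pixels land in the dark-red band and the brightest in the blue band, mirroring the intended crack-to-busbar ordering.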

4.2. Quantitative Comparison of Classification Performance

Table 5 summarizes the test performance of the three models using their best-performing configurations.
EfficientNet–B3 achieves the highest overall performance across all evaluated configurations, reaching 92.39% accuracy and the strongest defective-class metrics at an input resolution of 300 × 300. When the input resolution is reduced to 224 × 224, EfficientNet–B3 still maintains high performance, with an accuracy of 91.88%, indicating limited degradation despite significant spatial downsampling.
Notably, EfficientNet–B0 also demonstrates strong performance at lower resolution. At 224 × 224, it already exceeds 89% accuracy and further improves to 89.85% at 300 × 300. These results indicate that, once an informative defect-aware RGB representation is employed, competitive classification performance can be achieved even under reduced spatial resolution, while higher resolutions provide consistent but incremental gains.

4.3. Stability Across Stratified Splits

The dataset was partitioned using a stratified split of 70% for training, 15% for validation, and 15% for testing. This allocation was chosen to ensure that the training subset contains a sufficiently large and diverse set of samples to learn defect-related patterns, while maintaining independent validation and test subsets for model tuning and unbiased performance evaluation.
The validation subset was used exclusively for hyperparameter selection, early stopping, and model selection, thereby limiting overfitting and preventing information leakage from the test data. The test subset remained completely unseen during training and optimization to provide an objective estimate of generalization performance. Stratification was applied to all subsets to preserve the original class distribution, which is particularly important given the moderate class imbalance of the dataset.
To assess the robustness and generalization capability of the proposed models, performance stability was evaluated across four independent stratified train–validation–test splits generated using different random seeds. This analysis is essential to verify that the reported results are not an artifact of a favorable data partition but instead reflect consistent model behavior under varying sampling conditions.
As illustrated in Figure 10 and Figure 11, test accuracy remains stable across splits for all three architectures. For EfficientNet–B3, accuracy ranges between approximately 88.1% and 91.4%, with a peak performance of 92.39% observed in the best-performing configuration reported in Table 5. This corresponds to a total variation of less than 3.5 percentage points across splits, indicating limited sensitivity to the specific data partitioning. Similarly, EfficientNet–B0 exhibits test accuracies between 85.8% and 87.2%, while ResNet–50 ranges between approximately 86.5% and 88.4%. Such constrained variability demonstrates that model performance is not driven by a particular choice of training or testing samples.
Defective-class metrics further support this observation. For EfficientNet–B3, the defective-class F1-score consistently remains above 0.74 and reaches values close to 0.83, while IoU varies between approximately 0.59 and 0.70 across splits. EfficientNet–B0 maintains F1-scores between 0.72 and 0.76 and IoU values between 0.57 and 0.62, whereas ResNet–50 achieves F1-scores in the range of 0.75 to 0.80 and IoU between 0.60 and 0.67. These limited fluctuations confirm that the models preserve their ability to detect defective samples even when the underlying data partitions are modified.
A similar stability pattern is observed for defective-class precision and recall. For EfficientNet–B3, precision consistently remains high, between approximately 0.89 and 0.95, while recall varies between 0.64 and 0.76, indicating a stable balance between false positives and false negatives. Comparable trends are observed for EfficientNet–B0 and ResNet–50. This consistency confirms that the models do not simply achieve high accuracy by favoring the majority class but instead demonstrate reliable defect sensitivity.
Overall, these results demonstrate that the observed performance gains are systematic, reproducible, and robust to dataset partitioning, thereby strengthening the validity of the conclusions and supporting the generalization capability of the proposed framework.
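The spread statistics quoted in this subsection can be reproduced with a small helper. The per-split accuracies below are illustrative values chosen inside the reported 88.1–92.39% range for EfficientNet–B3, not the authors' exact per-split numbers.

```python
def stability_summary(values):
    """Mean, spread (max - min) and sample standard deviation across splits."""
    n = len(values)
    mean = sum(values) / n
    spread = max(values) - min(values)
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, spread, var ** 0.5

# hypothetical EfficientNet-B3 test accuracies over four stratified splits
accs = [88.1, 90.2, 91.4, 92.39]
mean, spread, std = stability_summary(accs)
```

A spread below 3.5 percentage points, as reported in the paper, corresponds to a small standard deviation relative to the mean accuracy.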

4.4. Impact of RGB Representation, Resolution, and Class Imbalance

The numerical results indicate clear performance differences between the evaluated configurations. Models trained using the defect-aware RGB-mapped EL images consistently achieve higher defective-class F 1 -score and I o U compared to the baseline grayscale-to-RGB mapping evaluated in Section 4.2, where defective-class performance collapsed. Across the three evaluated architectures, defective-class F 1 -scores range from 76.7% (ResNet–50) to 84.54% (EfficientNet–B3), while I o U values range from 62.1% to 73.21%, respectively.
The influence of spatial resolution can be observed by comparing model performance at different input sizes. EfficientNet–B3 trained at 300 × 300 achieves the highest accuracy of 92.39%, while EfficientNet–B0 and ResNet–50 trained at 224 × 224 maintain accuracies of 89.0% and 87.06%, respectively. For reference, a recent study using the same ELPV dataset reported an accuracy of 88.4% with a VGG19-based model trained on grayscale EL images at 300 × 300 resolution. This comparison indicates that competitive performance is obtained even at lower spatial resolution when using the proposed representation.
A more detailed resolution–performance comparison is provided in Table 6. EfficientNet–B0 achieves 89.00% accuracy at 224 × 224 and improves modestly to 89.85% at 300 × 300, while EfficientNet–B3 increases from 91.88% to 92.39% when moving from 224 × 224 to 300 × 300. These results indicate that resolution-related performance gains are consistent but incremental. In contrast, ResNet–50 shows slightly lower accuracy at 300 × 300 compared to 224 × 224, suggesting that higher resolution does not necessarily benefit all architectures equally and may increase optimization difficulty under fixed training constraints.
The impact of resolution on computational cost is summarized in Table 6. For all architectures, reducing the input size from 300 × 300 to 224 × 224 leads to a noticeable reduction in training time. For example, EfficientNet–B0 requires 410 s at 224 × 224 compared to 690 s at 300 × 300, while ResNet–50 decreases from 980 s to 620 s. Similar behavior is observed for EfficientNet–B3. These results show that lower spatial resolution reduces computational cost while maintaining competitive classification performance.
All simulations were conducted using the original class distribution of the dataset without artificial balancing. Despite the moderate class imbalance, defective-class metrics remain consistently high across all evaluated models, as reflected in the F 1 -score and I o U values reported above.
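Focal loss, one of the imbalance-handling mechanisms used in this work, down-weights well-classified majority-class samples so that training focuses on hard defective examples. The per-sample sketch below uses common default values for the focusing parameter γ and balancing factor α, which are assumptions rather than the paper's exact settings.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one sample.

    p: predicted probability of the defective class; y: ground truth in {0, 1}.
    alpha and gamma are common defaults, not necessarily the paper's values.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))

# a confidently correct "functional" sample contributes far less
# than a badly missed "defective" one
easy = focal_loss(0.05, 0)   # correct majority-class prediction
hard = focal_loss(0.05, 1)   # missed minority-class prediction
```

The (1 − p_t)^γ factor is what suppresses the loss of easy samples; with γ = 2 the confidently correct sample above contributes orders of magnitude less than the missed defective one.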
By jointly considering accuracy and computational cost, as shown in Table 6, a clear trade-off emerges between performance and efficiency. Strong classification performance is already achieved at 224 × 224 for ResNet–50 and EfficientNet–B0, supporting the practical relevance of the proposed representation for computationally efficient PV inspection.
While training time reflects computational cost during model development, inference time is the critical factor for real-time industrial deployment. In practical EL inspection systems, overall throughput is governed by the complete pipeline, including sensor exposure, data transfer, preprocessing, and inference, with reported acquisition times ranging from tens of milliseconds to several seconds depending on system configuration. From a deep learning perspective, inference latency scales with network depth, parameter count, and input resolution [30,31]. Consequently, the relative trends observed in training time across architectures and resolutions are expected to translate into similar inference-time behavior.
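Since inference latency is the deployment-critical quantity, a per-image latency measurement can be sketched with a generic timing harness. No specific framework is assumed; `fake_infer` is a stand-in whose cost grows with pixel count, loosely mimicking the resolution scaling discussed above.

```python
import time

def measure_latency(infer, image, warmup=3, runs=20):
    """Median per-image latency in milliseconds for a callable `infer`."""
    for _ in range(warmup):          # warm-up iterations excluded from timing
        infer(image)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        infer(image)
        timings.append((time.perf_counter() - start) * 1e3)
    timings.sort()
    return timings[len(timings) // 2]  # median is robust to timing outliers

# stand-in "model": cost proportional to pixel count
def fake_infer(image):
    return sum(sum(row) for row in image)

small = [[1] * 224 for _ in range(224)]  # 224 x 224 input
large = [[1] * 300 for _ in range(300)]  # 300 x 300 input
lat_small = measure_latency(fake_infer, small)
lat_large = measure_latency(fake_infer, large)
```

In a real pipeline the same harness would wrap the preprocessing-plus-forward-pass call, so the measured figure reflects the end-to-end per-image cost rather than training time.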
Overall, the results demonstrate that combining defect-aware RGB encoding with optimized resolution selection provides a robust and computationally efficient framework for EL-based PV defect detection, outperforming color-based approaches while significantly reducing processing cost.

5. Conclusions

This work proposes a novel approach for photovoltaic defect classification using electroluminescence (EL) images by integrating local contrast enhancement, defect-aware RGB false-color mapping, and transfer learning-based deep neural networks. Unlike conventional pipelines that simply replicate grayscale EL images into three channels for backbone compatibility, the proposed preprocessing encodes physically meaningful intensity ranges into distinct color channels. This design explicitly emphasizes cracks, inactive regions, healthy silicon emission, and conductive pathways, which improves defect visibility and enhances feature separability for convolutional models. As observed in the numerical results, this representation is associated with consistently higher defective-class precision, F 1 -score, and I o U across all evaluated architectures compared to baseline mappings.
Comprehensive simulations conducted on the ELPV benchmark dataset show that the proposed approach achieves strong and competitive performance relative to previously reported methods. A maximum accuracy of 92.39% is obtained using EfficientNet–B3 at 300 × 300 resolution, exceeding the 88.4% accuracy reported in the literature for a VGG19-based model trained on grayscale EL images at the same resolution. Importantly, comparable—and in some cases superior—performance is also obtained at a lower resolution of 224 × 224, where EfficientNet–B0 and ResNet–50 achieve accuracies of 89.0% and 87.06%, respectively. These results suggest that performance gains are primarily associated with improved defect representation rather than increased spatial resolution alone.
The numerical analysis further indicates that spatial resolution directly affects computational cost. Reducing the input size from 300 × 300 to 224 × 224 decreases training time by approximately 35–40% across all evaluated models while preserving competitive classification performance. In addition, all simulations were conducted using the original imbalanced data distribution to reflect realistic industrial conditions, with class imbalance handled through focal loss, class weighting, and threshold calibration rather than artificial resampling. Taken together, these results indicate that combining defect-aware representation with resolution-aware model design yields an effective and computationally efficient framework for EL-based photovoltaic defect classification, with practical relevance for scalable and cost-sensitive inspection systems.
Finally, the scope of this study is constrained by the native resolution of the available benchmark dataset. While down-sampling-based resolution analysis is feasible, investigating a broader range of natively higher-resolution EL images, as commonly encountered in industrial inspection systems, represents an important next step. Extending the proposed framework to such multi-resolution datasets would enable a more comprehensive assessment of resolution effects and inference-time behavior. In addition, while the method is directly applicable to silicon-based photovoltaic technologies, adapting it to emerging PV technologies will require further investigation into technology-specific EL characteristics and defect mechanisms. These directions are identified as key avenues for future work.

Author Contributions

Conceptualization, D.G.; methodology, F.E.-Z.; software, F.E.-Z.; validation, D.G., F.E.-Z. and Ł.C.; formal analysis, D.G.; investigation, F.E.-Z.; resources, Ł.C., F.E.-Z. and F.B.; data curation, F.E.-Z.; writing—original draft preparation, F.E.-Z. and D.G.; writing—review and editing, F.E.-Z., Ł.C. and D.G.; visualization, F.E.-Z.; supervision, D.G.; project administration, D.G.; funding acquisition, D.G. and F.E.-Z. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the Polish Ministry of Science and Higher Education, partially through statutory activity and partially by Young Researchers funds from the Faculty of Automatic Control, Electronics and Computer Science at the Silesian University of Technology. The work was also co-financed by the Just Transition Fund through the development of the Joint Doctoral School and scientific activities of doctoral students related to the needs of the green and digital economy (Project No. FESL.10.2025-IZ.01-07E7/23), implemented at the Silesian University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article material. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Köntges, M.; Kunze, I.; Kajari-Schröder, S.; Breitenmoser, X.; Bjørneklett, B. The Risk of Power Loss in Crystalline Silicon Based Photovoltaic Modules Due to Micro-Cracks. Sol. Energy Mater. Sol. Cells 2011, 95, 1131–1137.
  2. Morlier, A.; Siebert, M.; Kunze, I.; Mathiak, G.; Köntges, M. Detecting Photovoltaic Module Failures in the Field during Daytime with Ultraviolet Fluorescence Module Inspection. IEEE J. Photovolt. 2017, 7, 1710–1716.
  3. Parikh, H.R.; Buratti, Y.; Spataru, S.; Villebro, F.; Reis Benatto, G.A.D.; Poulsen, P.B.; Wendlandt, S.; Kerekes, T.; Sera, D.; Hameiri, Z. Solar Cell Cracks and Finger Failure Detection Using Statistical Parameters of Electroluminescence Images and Machine Learning. Appl. Sci. 2020, 10, 8834.
  4. Jordan, D.C.; Kurtz, S.R. Photovoltaic Degradation Rates—An Analytical Review. Prog. Photovolt. Res. Appl. 2013, 21, 12–29.
  5. Cardinale-Villalobos, L.; Meza, C.; Méndez-Porras, A.; Murillo-Soto, L.D. Quantitative Comparison of Infrared Thermography, Visual Inspection, and Electrical Analysis Techniques on Photovoltaic Modules: A Case Study. Energies 2022, 15, 1841.
  6. Bu, C.; Bai, W.; Huang, X.; Chen, P.; Shen, R.; Li, R.; Liu, G.; Tang, Q. Infrared Thermography Detection of Defects in CFRP Based on a Time-Domain Nonlinear Regression Algorithm. Russ. J. Nondestruct. Test. 2025, 61, 244–255.
  7. Abdelsattar, M.; Abdelmoety, A.; Ismeil, M.A.; Emad-Eldeen, A. Automated Defect Detection in Solar Cell Images Using Deep Learning Algorithms. IEEE Access 2025, 13, 4136–4157.
  8. Pillai, D.S.; Ram, J.P.; Garcia, J.L.; Kim, Y.-J.; Catalão, J.P. Experimental Studies on a New Array Design and Maximum Power Tracking Strategy for Enhanced Performance of Soiled Photovoltaic Systems. IEEE Trans. Power Electron. 2023, 39, 1596–1608.
  9. Trupke, T.; Bardos, R.A.; Schubert, M.C.; Warta, W. Photoluminescence Imaging of Silicon Wafers. Appl. Phys. Lett. 2006, 89, 044107.
  10. Cardinale-Villalobos, L.; Murillo-Soto, L.D.; Brenes, R. Low-Cost IoT System Prototype to Detect Suboptimal Conditions in PV Arrays. In Proceedings of the Ibero-American Congress of Smart Cities, San Carlos, Costa Rica, 12–14 November 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 3–17.
  11. Deitsch, S.; Christlein, V.; Berger, S.; Buerhop-Lutz, C.; Maier, A.; Gallwitz, F.; Riess, C. Automatic Classification of Defective Photovoltaic Module Cells in Electroluminescence Images. Sol. Energy 2019, 185, 455–468.
  12. Deitsch, S.; Buerhop-Lutz, C.; Sovetkin, E.; Steland, A.; Maier, A.; Gallwitz, F.; Riess, C. Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images. Mach. Vis. Appl. 2021, 32, 84.
  13. Abdollahi-Mamoudan, F.; Ibarra-Castanedo, C.; Maldague, X.P.V. Non-Destructive Testing and Evaluation of Hybrid and Advanced Structures: A Comprehensive Review of Methods, Applications, and Emerging Trends. Sensors 2025, 25, 3635.
  14. Sun, J.; Zhang, Y.; Sun, Y.; Fang, H.; Xiao, Z.; Jiang, H. Wind Speed Prediction of Complex Terrain Based on Multi-Dimensional Time Series Decomposition. IEEE Access 2025, 13, 206475–206489.
  15. Abdelsattar, M.; AbdelMoety, A.; Emad-Eldeen, A. Applying Image Processing and Computer Vision for Damage Detection in Photovoltaic Panels. Mansoura Eng. J. 2025, 50, 2.
  16. Zhao, Y.; Zhang, L.; Liu, Y.; Deng, Z.; Zhang, R.; Zhang, S.; He, W.; Qiu, Z.; Zhao, Z.; Tang, B.Z. AIEgens in Solar Energy Utilization: Advances and Opportunities. Langmuir 2022, 38, 8719–8732.
  17. Pratt, C.Z.; Ray, K.J.; Crutchfield, J.P. Controlled Erasure as a Building Block for Universal Thermodynamically Robust Superconducting Computing. Chaos Interdiscip. J. Nonlinear Sci. 2025, 35, 043112.
  18. Wang, J.; Bi, L.; Sun, P.; Jiao, X.; Ma, X.; Lei, X.; Luo, Y. Deep-Learning-Based Automatic Detection of Photovoltaic Cell Defects in Electroluminescence Images. Sensors 2022, 23, 297.
  19. Munawer Al-Otum, H. Classification of Anomalies in Electroluminescence Images of Solar PV Modules Using CNN-Based Deep Learning. Sol. Energy 2024, 278, 112803.
  20. Damian, G.; Fatima, E.-Z.; Łukasz, C.; Bian, F. Analysis of EL Image Resolution on Photovoltaic Modules Defect Detection. Proc. IAC Prague 2025, 2025, 306.
  21. Spataru, S.V.; Sera, D.; Hacke, P.; Kerekes, T.; Teodorescu, R. Fault Identification in Crystalline Silicon PV Modules by Complementary Analysis of the Light and Dark Current–Voltage Characteristics. Prog. Photovolt. Res. Appl. 2016, 24, 517–532.
  22. Tsai, D.-M.; Wu, S.-C.; Chiu, W.-Y. Defect Detection in Solar Modules Using ICA Basis Images. IEEE Trans. Ind. Inform. 2012, 9, 122–131.
  23. Fontanilla Echeveste, M.T.; Ripollés González, T.; Aguirre Pascual, E. Contrast-Enhanced Ultrasound Fundamentals: The Pharmacodynamics and Pharmacokinetics of Contrast. Basics of Contrast-Enhanced Ultrasound Imaging. Radiología 2024, 66, S36–S50.
  24. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
  25. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946.
  26. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
  27. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
  28. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. arXiv 2017, arXiv:1707.01083.
  29. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
  30. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1.
  31. Zhang, C.; Zhang, F.; Chen, K.; Chen, M.; He, B.; Du, X. EdgeNN: Efficient Neural Network Inference for CPU–GPU Integrated Edge Devices. In Proceedings of the 2023 IEEE 39th International Conference on Data Engineering (ICDE), Anaheim, CA, USA, 3–7 April 2023; IEEE: New York, NY, USA, 2023; pp. 1193–1207.
Figure 1. Representative examples of functional and defective cells from the original dataset. (a–c) Functional cells, (d) cracks, (e) inactive area, (f) finger break.
Figure 2. Flowchart of training pipeline.
Figure 3. Output samples of the baseline attempt at grayscale-to-RGB transformation.
Figure 4. Output samples of the final adopted version to transform grayscale-to-RGB.
Figure 5. Lightweight CNN architecture used for validating the RGB representation strategy.
Figure 6. ResNet–50 architecture (ImageNet pre-trained, fine-tuned).
Figure 7. EfficientNet–B3 architecture (ImageNet pre-trained, fine-tuned).
Figure 8. Confusion matrix of the lightweight CNN classifier using the baseline grayscale-to-RGB mapped EL images.
Figure 9. Confusion matrix of the lightweight CNN trained on the proposed defect-aware RGB-mapped EL dataset.
Figure 10. Performance consistency across multiple stratified train–validation–test splits for EfficientNet–B3.
Figure 11. Performance consistency across multiple stratified train–validation–test splits for ResNet–50 and EfficientNet–B0.
Table 1. The physical manifestation of major defect types in identifiable EL images.

| Defect Type | EL Visual Pattern | Physical Origin | Impact |
|---|---|---|---|
| Microcracks | Thin dark fractures | Mechanical stress, cell handling | Power drop, hotspots |
| Inactive areas | Dark patches | Metallization issues | Severe performance loss |
| Finger break | Dark discontinuity along finger | Grid interruption | Increased resistance |
| Dislocation clusters | Dark granular shapes | Material impurity | Local degradation |
Table 2. Distribution of defect-type combinations in the subset of defective EL images with explicit defect-type annotation.

| Defect Combination | Number of Cells |
|---|---|
| Microcracks + Inactive Areas + Finger Breaks | 48 |
| Microcracks + Inactive Areas | 65 |
| Microcracks + Finger Breaks | 57 |
| Inactive Areas + Finger Breaks | 39 |
| Microcracks (only) | 303 |
| Inactive Areas (only) | 145 |
| Finger Breaks (only) | 58 |
Table 3. Quantitative comparison of percentile configurations.

| Configuration | m1 (%) | m2 (%) | m3 (%) | m4 (%) | Separation |
|---|---|---|---|---|---|
| 10–50–90 | 9.7 | 39.5 | 40.4 | 10.4 | 32.18 |
| 20–40–80 | 19.4 | 19.9 | 40.0 | 20.7 | 25.40 |
| 25–50–75 | 24.4 | 24.8 | 25.1 | 25.7 | 23.03 |
| 15–45–85 | 14.6 | 29.6 | 40.3 | 15.5 | 28.29 |
Table 4. Pixel intensities are assigned to four defect-related color bands.

| Mask | Condition | Assigned Color | Physical Meaning |
|---|---|---|---|
| m1 | $X_i^{\mathrm{CLAHE}} < t_1$ | $c_1 = (150, 0, 0)$ dark red | Severe cracks / inactive areas |
| m2 | $t_1 \le X_i^{\mathrm{CLAHE}} < t_2$ | $c_2 = (255, 140, 0)$ orange | Mild degradation and stress |
| m3 | $t_2 \le X_i^{\mathrm{CLAHE}} < t_3$ | $c_3 = (0, 200, 80)$ green | Healthy silicon emission |
| m4 | $X_i^{\mathrm{CLAHE}} \ge t_3$ | $c_4 = (30, 80, 255)$ blue | Strong conductive pathways (fingers) |
Table 5. Performance comparison of evaluated CNN models on the defect-aware RGB-mapped EL dataset.

| Model | Input Resolution | Accuracy (%) | F1 (Defective) (%) | IoU (Defective) (%) |
|---|---|---|---|---|
| ResNet–50 | 224 × 224 | 87.06 | 76.7 | 62.1 |
| ResNet–50 | 300 × 300 | 85.28 | 77.17 | 62.80 |
| EfficientNet–B0 | 224 × 224 | 89.0 | 79.0 | 66.0 |
| EfficientNet–B0 | 300 × 300 | 89.85 | 82.30 | 69.90 |
| EfficientNet–B3 | 224 × 224 | 91.88 | 82.61 | 70.37 |
| EfficientNet–B3 | 300 × 300 | 92.39 | 84.54 | 73.21 |
Table 6. Joint comparison of training time and classification accuracy across input resolutions.

| Model | Input Resolution | Training Time (s) | Accuracy (%) |
|---|---|---|---|
| ResNet–50 | 224 × 224 | 620 | 87.06 |
| ResNet–50 | 300 × 300 | 980 | 85.28 |
| EfficientNet–B0 | 224 × 224 | 410 | 89.00 |
| EfficientNet–B0 | 300 × 300 | 690 | 89.85 |
| EfficientNet–B3 | 224 × 224 | 820 | 91.88 |
| EfficientNet–B3 | 300 × 300 | 1350 | 92.39 |

Share and Cite

MDPI and ACS Style

Grzechca, D.; Ez-Zahiri, F.; Chruszczyk, Ł.; Bian, F. Defect-Aware RGB Representation and Resolution-Efficient Deep Learning for Photovoltaic Failure Detection in Electroluminescence Images. Appl. Sci. 2026, 16, 2148. https://doi.org/10.3390/app16042148


