Article

An Unsupervised Image Enhancement Framework for Multiple Fault Detection of Insulators

1 School of Electronics and Electrical Engineering, Wuhan Textile University, Wuhan 430200, China
2 Department of Electrical Engineering and Automation, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(22), 7071; https://doi.org/10.3390/s25227071
Submission received: 21 October 2025 / Revised: 13 November 2025 / Accepted: 18 November 2025 / Published: 19 November 2025

Abstract

To address the problem of low detection accuracy caused by uneven brightness distribution in transmission line inspection images under complex lighting conditions, this paper proposes an unsupervised image enhancement method that integrates grayscale feature guidance and luminance consistency loss constraint. First, a U-shaped generator combining a bottleneck structure with large receptive field depthwise separable convolutions is designed to efficiently extract multi-scale features. Second, a grayscale feature-guided image generation module is incorporated into the generator, using grayscale information to adaptively enhance local low-light regions and effectively suppress overexposed regions. Meanwhile, to accommodate the characteristics of unpaired data training, a luminance consistency loss is introduced. By constraining the global luminance distribution consistency between the generated image and the reference image, the overall brightness balance of the generated image is improved. Finally, a multi-level discriminator structure is constructed to enhance the model’s ability to distinguish global and local luminance in the generated images. Experimental results show that the proposed method significantly improves image quality (PSNR increased from 7.73 to 18.41, SSIM increased from 0.43 to 0.85). Furthermore, the enhanced images lead to improvements in defect detection accuracy.

1. Introduction

Power transmission lines are a cornerstone of modern infrastructure, responsible for delivering electrical energy from generation sources to end users. As critical components, insulators electrically isolate conductors from supporting structures, thereby preventing leakage currents and maintaining the safety and stability of the grid. However, faults in insulators can significantly threaten the reliability of power systems. Such defects often arise from environmental contamination, mechanical damage, or flashover events. Therefore, systematic inspection and maintenance of insulators are essential to sustain their performance and ensure reliable operation under diverse environmental conditions.
Deep learning, particularly in the field of computer vision, has significantly advanced various industrial applications, including those in the power sector [1,2,3]. A common method involves using UAVs to capture transmission line inspection images, which are then processed by AI algorithms for fault detection and localization [4]. When datasets are insufficient, many studies enrich their data by transferring target objects into diverse scenes. For example, Tao [5] implemented a cascaded CNN on the CPLID (Chinese Power Line Insulator Dataset) by embedding the same insulator targets into various backgrounds, achieving a precision of 91% and recall of 96%. Similarly, Jiang [6] developed a weakly supervised segmentation-based fusion detection algorithm, using YOLOv5 after segmenting and migrating target objects. Although these data augmentation strategies enhance sample variety, they still struggle to meet the complexity demands of real-world scenarios due to limited sample diversity.
When processing large-scale datasets, researchers primarily focus on factors that influence detection accuracy, particularly complex backgrounds and multi-scale defects. Attention mechanisms are commonly employed to distinguish targets from background clutter, emphasizing salient regions while suppressing irrelevant information [7,8]. For example, Zhou [9] integrated a Convolutional Block Attention Module (CBAM [10]) into YOLOv5, using max-pooling and average-pooling at both channel and spatial levels, and achieved a 98% average detection accuracy for glass insulator targets. Similarly, Hao [11] incorporated a CSPResNeSt backbone and a three-dimensional attention module (SimAM [12]) into YOLOv4, improving insulator detection accuracy by 3.5% in complex backgrounds. However, these methods primarily focus on detecting intact insulators rather than identifying faults. Detecting small-scale fault regions remains challenging due to variations in target size resulting from differences in capture distance. Such diminished targets often degrade model performance. To address this, researchers optimize feature fusion modules for multi-scale integration. Zhao [13], for instance, employed a Feature Pyramid Network (FPN [14]) to enhance Faster-RCNN [15] and improve insulator localization accuracy. Zhang [16], working with YOLOv3, introduced a multi-scale dense connection strategy that preserves critical feature details during convolution, improving small-target detection by retaining finer-scale features within the same stage rather than mixing features across different stages.
Despite these advancements in detection accuracy and the handling of complex backgrounds, uneven illumination in challenging environments remains a major technical bottleneck. Under conditions such as nighttime or adverse weather, conventional visible light imaging often fails to provide sufficient quality for accurate insulator detection. Moreover, environmental factors including pollution, snow, and fog further degrade image visibility, making low-light detection particularly difficult. As noted in [17], research on weak-light enhancement for UAV aerial images of power transmission lines is limited. Although a contrast-adaptive histogram equalization model was proposed, which separates color and brightness through color-space conversion and balances contrast parameters to suppress noise, its robustness in real-world complex conditions remains uncertain.
To handle low-light complexity, deep learning-based methods have emerged. For instance, Liu’s IA-YOLO algorithm [18] and Zhang’s causality-based adaptive detection algorithm [19] preprocess low-light images before performing target detection. These approaches yield high accuracy but suffer from time-consuming preprocessing steps, reducing computational efficiency. Additionally, Zhang’s [20] approach for foggy weather scenarios multiplies the dataset size by a factor of six to simulate various fog concentrations and then applies an improved YOLOv5 model for detection. While this data augmentation technique increases image quantity, it raises concerns about potential overfitting and lacks an explicit, robust preprocessing step tailored for foggy conditions. Thus, there remains a pressing need for methods that effectively enhance image quality under uneven illumination without sacrificing efficiency or generalizability.
To address these challenges, this paper investigates the aforementioned issues in depth. First, we propose an insulator fault detection method for complex environments that integrates low-light enhancement and overexposure correction. The method mitigates problems of uneven illumination by designing a deep-learning–based adversarial network to preprocess raw images, thereby enhancing the visual quality of target regions. Furthermore, the object detection algorithm is optimized to handle complex backgrounds and small-scale targets, ultimately improving the accuracy of insulator fault detection under harsh environmental conditions. The main contributions of this work are summarized as follows:
(i)
A fault detection model for insulators in transmission lines under low-light and high-exposure environments is developed, covering four categories: normal, self-explosion, damage, and flashover;
(ii)
A grayscale feature-guided image generation structure is proposed, which uses grayscale features to fine-tune the brightness of local image regions;
(iii)
For unpaired data training scenarios, a brightness consistency constraint loss is constructed, which improves the balance of the overall brightness distribution by constraining the global brightness of the generated image to match that of the reference image.

2. Model Overall Structure Design

The overall structure is shown in Figure 1. In this section, a generator model is first designed by combining the U-Net architecture with a bottleneck feature extraction module (FEM) to efficiently capture multi-scale features under complex lighting conditions. The generator employs a combination of 1 × 1 convolution and 5 × 5 depthwise separable convolution, which both expands the receptive field and effectively reduces the model’s computational complexity. To further optimize the generator’s adaptability to complex lighting conditions, an attention mechanism guided by grayscale features is designed, enhancing luminance in target regions while suppressing luminance in background regions, thereby achieving adaptive local brightness adjustment across the image. Additionally, for the generated images, a composite loss function consisting of adversarial loss, self-feature preservation loss, and luminance consistency loss is constructed. The adversarial loss is used to optimize the overall quality of the generated images, while the self-feature preservation loss and luminance consistency loss ensure that the brightness and structural features of the enhanced images are not significantly distorted. This enables the enhancement of visual quality while preserving critical detail features. Finally, multi-patch discrimination is performed on the generated images to ensure consistency in both local details and overall luminance. Based on the above structural design and optimization methods, this section proposes an illumination enhancement method that is both efficient and generalizable. It effectively mitigates the negative impact of complex lighting conditions on the quality of transmission line inspection images, significantly enhancing the recognizability of target regions under both overexposed and low-light scenarios. Moreover, it provides stable, high-quality image inputs that facilitate subsequent fault detection tasks. The following sections describe the detailed design and optimization process of the proposed model.

2.1. Deep Bottleneck Network Generator and Multi-Patch Discriminator

(1)
Generator encoding and decoding structure design
In the processing of transmission line inspection images, complex lighting conditions are primarily characterized by brightness imbalance caused by the coexistence of overexposed and low-illumination regions. This imbalance leads to the loss of local details in images, which in turn severely affects feature extraction and the accurate localization of fault regions. Therefore, the design of the generator structure must effectively balance the extraction of local details and luminance features to achieve brightness balancing and enhancement. To this end, the U-Net architecture, known for its strong feature extraction and reconstruction capabilities, is selected as the basic framework for the generator, combined with a bottleneck feature extraction module to further optimize feature representation. The U-Net framework, with its distinctive encoder–decoder architecture, can simultaneously capture and reconstruct local details, making it particularly effective for image enhancement tasks [21]. During the encoder stage, the model progressively downsamples the input image to obtain feature representations at different scales; in the decoder stage, it gradually upsamples the features to restore detailed information of the image.
To address the problem of high computational complexity in the traditional U-Net structure, this study designs a bottleneck feature extraction module, as shown in Figure 2. By combining 1 × 1 convolutions with 5 × 5 depthwise separable convolutions, the module achieves dimensionality reduction along the channel dimension and improves computational efficiency. Specifically, 1 × 1 convolutions are first used to reduce the channel dimension of the feature map, thereby compressing the feature space and lowering the computational complexity of subsequent convolution operations. Then, 5 × 5 depthwise separable convolutions are employed to expand the receptive field while maintaining low computational cost, effectively capturing a broader range of local detail features. This combination not only reduces model complexity but also enhances the model’s ability to perceive regions with high exposure or low illumination. During the encoding and decoding processes, the model captures multi-scale features through downsampling operations and progressively restores image details through upsampling operations. In the downsampling stage, the model compresses image dimensions to enhance high-level feature representation of brightness distribution; in the upsampling stage, deep features and shallow detail features are effectively fused through skip connections to restore and reconstruct the fine structure of the image. The overall network structure is illustrated in Figure 3.
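To make this structure concrete, a minimal PyTorch sketch of such a bottleneck block is given below. The channel-reduction ratio, the use of batch normalization, and the ReLU activations are our assumptions for illustration; the paper specifies only the 1 × 1 convolution followed by a 5 × 5 depthwise separable convolution.

```python
import torch
import torch.nn as nn

class BottleneckFEM(nn.Module):
    """Bottleneck feature extraction: 1x1 channel compression followed by a
    5x5 depthwise separable convolution (depthwise + pointwise)."""
    def __init__(self, in_ch, out_ch, reduction=2):
        super().__init__()
        mid_ch = max(in_ch // reduction, 1)
        self.reduce = nn.Sequential(       # 1x1 conv compresses the channel dimension
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
        )
        self.depthwise = nn.Sequential(    # 5x5 depthwise conv enlarges the receptive field cheaply
            nn.Conv2d(mid_ch, mid_ch, kernel_size=5, padding=2, groups=mid_ch, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
        )
        self.pointwise = nn.Sequential(    # 1x1 pointwise conv restores the output channels
            nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.pointwise(self.depthwise(self.reduce(x)))

# Example: a 64-channel feature map keeps its spatial size through the block.
feat = torch.randn(1, 64, 128, 128)
print(BottleneckFEM(64, 64)(feat).shape)  # torch.Size([1, 64, 128, 128])
```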
(2)
Discriminator structure design
To further improve the quality of brightness enhancement in the generated images and to avoid detail distortion or local over-enhancement, this study introduces a multi-patch discriminator structure. The discriminator evaluates both the global and local regions of the generated images to ensure consistency in brightness enhancement at both the overall and local levels. The discriminator network adopts VGG as the backbone structure and integrates judgments based on both the global and local views of the image to enhance the overall assessment of brightness quality [22].
Specifically, the global discriminator focuses on evaluating the brightness balance across the entire image, ensuring that the overall brightness distribution and visual effect are consistent, and preventing issues such as overexposure or excessive darkness. The local discriminator randomly crops patches from the image and specifically evaluates the detail enhancement effects in high-exposure or low-light areas, ensuring that local details in the generated images are realistic without obvious distortions or artifacts. The collaborative operation of the global and local discriminators effectively guarantees the quality of brightness enhancement across different scales and regions, significantly improving the model’s adaptability to complex lighting scenarios and enhancing the realism and detail preservation of the output images. As shown in Figure 4, by introducing these two discriminators, the generated images receive more comprehensive supervision regarding their brightness effects, preventing over-enhancement or uneven enhancement issues either globally or locally.
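The local branch operates on randomly cropped patches; a small sketch of this cropping step is shown below, assuming a 64 × 64 patch size (the exact crop size is not stated in the paper, which later uses five random patches for the local loss).

```python
import torch

def random_patches(img, patch_size=64, n_patches=5):
    """Randomly crop patches from a batch of images (B, C, H, W) so that the
    local discriminator can score fine-grained brightness and detail quality."""
    _, _, h, w = img.shape
    patches = []
    for _ in range(n_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patches.append(img[:, :, top:top + patch_size, left:left + patch_size])
    return torch.cat(patches, dim=0)  # (B * n_patches, C, patch_size, patch_size)

# The global discriminator scores the full image; the local one scores these crops.
fake = torch.rand(2, 3, 512, 512)
print(random_patches(fake).shape)  # torch.Size([10, 3, 64, 64])
```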

2.2. Grayscale Feature-Guided Brightness Uneven Area Enhancement Algorithm

The model structure designed in the previous section primarily focuses on image generation and discrimination but does not specifically model the characteristics of lighting distribution. As a result, it still exhibits insufficient handling of local high-exposure and low-illumination regions, especially in areas with high contrast, where brightness imbalance remains evident. As shown in Figure 5, the 3D HSV (Hue-Saturation-Value) visualization indicates that low-light images suffer from insufficient brightness, while high-exposure images exhibit excessively high brightness. When dealing with such complex scenes, the accurate extraction and effective processing of luminance information are critical for determining the quality of image enhancement. Therefore, this section further proposes an uneven brightness optimization algorithm based on grayscale feature guidance. By leveraging the advantages of low-dimensional representation in grayscale features, the algorithm aims to more accurately identify and focus on high-exposure and low-illumination regions, enabling adaptive local brightness adjustment and enhancement. This approach effectively addresses the problem of uneven brightness distribution in images.
Specifically, the proposed algorithm first converts the RGB image into a grayscale image to extract luminance distribution information, thereby avoiding interference from color channels and simplifying the subsequent enhancement process. Next, a grayscale attention-guided mechanism is designed, which effectively improves the model’s focus on regions with uneven brightness through grayscale feature weighting. For low-illumination areas, the model can precisely enhance the details with lower brightness; for high-exposure areas, it can accurately identify and moderately suppress excessive brightness, ensuring that the enhancement effect remains natural and realistic.
(1)
Design of the Grayscale Attention Mechanism
First, by applying a weighted combination of the RGB channel pixel values, the original color image is converted into a grayscale image. The weighting process is based on the human eye’s sensitivity to different color channels [23]. The specific calculation formula is given in Equation (1):
$$A_{Gray} = 0.299 \times I_R + 0.587 \times I_G + 0.114 \times I_B$$
Here, I denotes the feature map of the initial input image, and I_R, I_G, and I_B represent the feature maps of the red, green, and blue channels, respectively. The weight of 0.299 for the red channel reflects the human eye’s moderate sensitivity to red; the green channel has the highest weight, 0.587, because the eye is most sensitive to green; and the blue channel has the lowest weight, 0.114, reflecting the eye’s lower sensitivity to blue.
As shown in Figure 6, the grayscale values of low-illumination regions in the generated grayscale image are relatively low, and such regions are typically key areas that require enhancement. However, if the original grayscale image is directly used for feature guidance, the model may fail to effectively focus on these low-brightness areas, potentially leading to extreme image enhancement outcomes.
To address this, an inverse mapping is applied to the grayscale image so that dark pixels are assigned high values. After the inverse mapping operation, regions that originally had low brightness correspond to higher grayscale values in the new feature representation, thereby enhancing the model’s focus on low-illumination areas and facilitating targeted enhancement in subsequent steps. To better integrate the grayscale features with the original image features, the fusion is performed using the formula shown in Equation (2):
$$X_{out} = \begin{cases} X_{in} \times \mathrm{sigmoid}(A_{Gray}) + X_{in}, & \text{if } \operatorname{mean}(X_{in}) < 50 \\ X_{in} \times \mathrm{sigmoid}(A_{Gray}) + \mathrm{sigmoid}(X_{in}), & \text{if } \operatorname{mean}(X_{in}) \ge 50 \end{cases}$$
Here, A_Gray denotes the grayscale attention feature map generated from the RGB image through weighted grayscale conversion and inverse mapping; all subsequent equations use this notation.
During the fusion process, a sigmoid function is applied to the feature map for nonlinear adjustment. This moderately increases the brightness of low-illumination regions, smoothly enhancing the overall image brightness while preserving original detail information, thus avoiding artifacts and detail loss caused by excessive enhancement.
For high-exposure regions, the sigmoid function similarly compresses pixel values into a reasonable range, preventing further amplification of overexposure while retaining the image’s detailed structures.
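A minimal sketch of this grayscale-guided fusion, following Equations (1) and (2), is given below. The [0, 255] intensity range, the normalization applied before the sigmoid, and the application to a three-channel input tensor are our assumptions for illustration.

```python
import torch

def gray_attention_fuse(x_in, rgb, threshold=50.0):
    """Grayscale-guided fusion of Equations (1)-(2): build an inverted grayscale
    attention map, then brighten dark inputs or temper over-exposed ones."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    gray = 0.299 * r + 0.587 * g + 0.114 * b      # Eq. (1), pixel intensities in [0, 255]
    a_gray = 255.0 - gray                         # inverse mapping: dark regions get high attention
    gate = torch.sigmoid(a_gray / 255.0)          # normalization before the sigmoid gate (assumption)

    if x_in.mean() < threshold:                   # low-light input: lift brightness, keep content
        return x_in * gate + x_in
    return x_in * gate + torch.sigmoid(x_in / 255.0)  # bright input: compress over-exposed response

# Example with a toy 3-channel tensor in [0, 255].
img = torch.rand(1, 3, 64, 64) * 255
print(gray_attention_fuse(img, img).shape)  # torch.Size([1, 3, 64, 64])
```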
(2)
Multi-scale grayscale image feature fusion
To further enhance the model’s ability to represent luminance information across different scales within an image, this paper proposes a multi-scale grayscale feature fusion strategy to guide the decoder in better restoring brightness and detail information. Specifically, since the grayscale image is generated based on the brightness distribution of the input image, its initial scale matches that of the input. However, the feature maps within the decoder typically exist at multiple spatial scales. Therefore, before fusing the grayscale image with the decoder’s feature maps, appropriate scale transformations of the grayscale image are necessary to ensure size compatibility. To address this issue, this paper designs a multi-angle, multi-scale downsampling strategy.
As shown in Figure 7, global max pooling is first applied to downsample the grayscale image, emphasizing regions with higher brightness to guide the model in effectively suppressing brightness in overexposed scenes. Second, global average pooling is used for downsampling to preserve and reinforce the overall brightness distribution, helping enhance darker regions in low-light scenarios. Finally, a 3 × 3 convolution with a stride of 2 is used to perform local brightness pattern downsampling, capturing finer-grained local luminance variations and ensuring the preservation of local brightness details during downsampling. Subsequently, the three grayscale feature maps obtained from different angles and scales are fused through weighted aggregation and then combined with the decoder’s feature maps, enabling fine-grained, adaptive brightness adjustment across the image. The fusion process of the multi-scale grayscale feature maps is described by Equation (3).
$$A_{Gray\text{-}down} = \mathrm{MaxPool}(A_{Gray}) + \mathrm{AvgPool}(A_{Gray}) + \mathrm{Conv}_{3\times3}^{s=2}(A_{Gray})$$
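A compact sketch of this multi-scale downsampling step, following Equation (3), is shown below; equal weighting of the three branches is assumed, since the aggregation weights are not specified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GrayDownsample(nn.Module):
    """Equation (3): fuse max-pooled, average-pooled, and strided-convolution views
    of the grayscale attention map to match a decoder feature-map scale."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1)

    def forward(self, a_gray):                        # a_gray: (B, 1, H, W)
        max_v = F.max_pool2d(a_gray, kernel_size=2)   # emphasizes bright (over-exposed) regions
        avg_v = F.avg_pool2d(a_gray, kernel_size=2)   # preserves the overall brightness distribution
        loc_v = self.conv(a_gray)                     # captures local brightness patterns
        return max_v + avg_v + loc_v                  # A_Gray-down in Eq. (3)

gray = torch.rand(1, 1, 256, 256)
print(GrayDownsample()(gray).shape)  # torch.Size([1, 1, 128, 128])
```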

2.3. Loss with Self-Feature Preservation and Brightness Consistency Constraint

In unsupervised image enhancement tasks, due to the lack of paired reference images, the effectiveness of model training heavily depends on the rationality and design of the loss functions. This section constructs unsupervised image loss constraints from three perspectives.
(1)
Adversarial Loss for Image Generation and Discrimination
The objective of the generator is to produce realistic images, and its loss function is typically used to measure the similarity between the generated images and real images. The traditional discriminator aims to maximize the probability of real images while minimizing the probability of generated images. However, this approach often causes the model to fall into local optima in order to evade detection by the discriminator.
Therefore, beyond simply considering the probability difference between real and generated images, it is necessary to further optimize the loss function to improve the robustness of the generator. Traditional binary cross-entropy loss can lead to instability during the model training process. To address this issue, this paper introduces the mean squared error (MSE) loss, jointly considering the relative probabilities of real and generated images [24], effectively enhancing training stability and improving the quality of generated images. The specific calculation formulas are given in Equations (4) and (5).
$$L_D^{Global} = \mathbb{E}_{Real}\big[(D(x) - 1)^2\big] + \mathbb{E}_{Fake}\big[(D(G(z)))^2\big]$$
$$L_G^{Global} = \mathbb{E}_{Fake}\big[(D(G(z)) - 1)^2\big] + \mathbb{E}_{Real}\big[(D(x))^2\big]$$
In the above formulas, D(x) represents the output of the discriminator for real data, and this expectation operation focuses on the loss associated with real data. G(z) denotes the data generated by the generator from noise, and D(G(z)) represents the output of the discriminator for generated data, with this expectation operation focusing on the loss associated with generated data.
Overall, the discriminator’s loss function considers two aspects of expected loss: one for real data and one for generated data. Similarly, the generator’s loss function also considers two aspects of expected loss, corresponding to the difference between generated and real data. This design enables the discriminator and generator to collaborate during training, gradually improving their respective performances. In addition, the model proposed in this paper introduces a local feature-based loss computation, where five randomly cropped patches are used for processing. The computation method for the local loss is consistent with that of the global loss and is denoted as $L_D^{Local}$ and $L_G^{Local}$, respectively. The introduction of this local loss further enhances the model’s ability to handle fine details, improving both the local detail quality and the overall consistency of the generated images.
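A minimal sketch of these least-squares adversarial terms (Equations (4) and (5)) is given below; the same functions applied to the randomly cropped patches yield the local terms $L_D^{Local}$ and $L_G^{Local}$.

```python
import torch

def d_loss_global(d_real, d_fake):
    """Eq. (4): least-squares discriminator loss (real scores pushed to 1, fake to 0)."""
    return ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()

def g_loss_global(d_real, d_fake):
    """Eq. (5): least-squares generator loss (fake scores pushed to 1, real to 0)."""
    return ((d_fake - 1) ** 2).mean() + (d_real ** 2).mean()

# d_real = D(x) on reference images, d_fake = D(G(z)) on generated images.
d_real, d_fake = torch.rand(8, 1), torch.rand(8, 1)
print(d_loss_global(d_real, d_fake).item(), g_loss_global(d_real, d_fake).item())
```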
(2)
Image self-feature preservation perceptual loss
Although the adversarial training between the generator and discriminator effectively constrains the generator to learn how to produce more realistic enhanced images, the training in this paper is based on unpaired image enhancement. Due to the semantic and visual differences between the reference images and the generated images, it is difficult to impose fine-grained constraints using pixel-level annotations, which may lead to deviations in the generated results. To address this, this paper further introduces perceptual learning of high-level semantic features between the reference and generated images. By preserving the content consistency of high-level semantic features before and after enhancement, the method constrains the consistency of image content during the generation process. The specific calculation is given in Equation (6).
$$L_{SFP}(x) = \frac{1}{W_{i,j} H_{i,j}} \sum_{w=1}^{W_{i,j}} \sum_{h=1}^{H_{i,j}} \big(x - G(x)\big)^2$$
Here, x denotes the feature map from the encoder stage, and G(x) represents the feature map from the decoder stage. W and H denote the width and height of the feature maps, respectively, while i and j refer to the i-th pooling layer and the j-th convolutional layer in the feature extraction process.
Unlike traditional pixel-level losses, this perceptual loss does not directly compare pixel-wise differences between the generated and target images. Instead, it performs the comparison in a high-level feature space. These features are extracted using a pretrained VGG network, which can capture high-level semantic information of images, such as shapes, textures, and edges. The intermediate convolutional layers are responsible for extracting object contours, structural information, and global features of the image. The features extracted at these layers simultaneously retain local structures and global semantics, enabling the perceptual loss to effectively preserve the geometric structure and content consistency of the images without relying on strict pixel-level comparison.
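A sketch of this self-feature-preserving loss is given below, assuming a VGG-16 feature extractor truncated at an intermediate convolutional layer; the specific layer index and the omission of pretrained-weight loading are simplifications for illustration.

```python
import torch
import torch.nn as nn
import torchvision

class SFPLoss(nn.Module):
    """Eq. (6): self-feature-preserving loss, computed in the feature space of a
    VGG network instead of pixel space."""
    def __init__(self, feature_layer=26):
        super().__init__()
        vgg = torchvision.models.vgg16()  # in practice, load ImageNet-pretrained weights
        self.features = nn.Sequential(*list(vgg.features.children())[:feature_layer]).eval()
        for p in self.features.parameters():
            p.requires_grad = False       # the feature extractor is frozen

    def forward(self, x, gx):
        fx, fgx = self.features(x), self.features(gx)
        return ((fx - fgx) ** 2).mean()   # mean over the W_{i,j} x H_{i,j} feature grid

loss_fn = SFPLoss()
x, gx = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
print(loss_fn(x, gx).item())
```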
(3)
Image brightness consistency loss
In image enhancement tasks, luminance balance is crucial to the quality of the generated images. In the previously designed model structure, the grayscale attention mechanism was used to guide the model in learning and enhancing uneven luminance. However, relying solely on feature extraction layers for guidance is insufficient to guarantee the global brightness consistency of the generated images, as feature extraction layers primarily focus on high-level semantic information while low-level semantic features also need to be addressed. To solve this problem, this paper further designs a luminance guidance mechanism at the loss function level. The final generated image is used as a source of low-level semantic features to guide global brightness balancing, aiming to ensure that the overall luminance of the generated image matches that of the unpaired target image.
First, global luminance comparison is performed, focusing on the overall luminance trend of the image rather than precise matching of individual pixel brightness. This process ensures that the global luminance of the generated image remains consistent with that of the target image. The specific calculation formula is given in Equation (7).
$$L_{Global\_brightness} = \big| A_{Gray}(G(x_A)) - A_{Gray}(x_B) \big|$$
Here, A_Gray(G(x_A)) and A_Gray(x_B) represent the mean grayscale values of the generated image and the target image, respectively.
Finally, an adaptive luminance constraint is introduced. Unlike methods that forcibly align every generated image with the reference image, the adaptive luminance constraint sets an adaptive target for each image’s luminance distribution. By calculating the dynamic range of the image’s luminance (i.e., the minimum and maximum luminance values), the generated image’s luminance range is matched to that of the reference image. The calculation formula is provided in Equation (8)
$$L_{dynamic\_range} = \Big| \big(\max(A_{Gray}(x_A)) - \min(A_{Gray}(x_A))\big) - \big(\max(A_{Gray}(x_B)) - \min(A_{Gray}(x_B))\big) \Big|$$
This approach does not require the luminance of every pixel in the generated image to match the reference image exactly. Instead, it focuses on aligning the overall dynamic range of brightness. By constraining the dynamic range of the generated image’s luminance to match that of the reference image, it effectively prevents the generated images from being excessively bright or dark, ensuring the naturalness of the luminance distribution. The total training objective, which combines all of the above loss terms, is given in Equation (9):
$$L_{Total} = L_G^{Global} + L_G^{Local} + L_{SFP} + L_{Global\_brightness} + L_{dynamic\_range}$$
By constructing loss functions based on the structural features and luminance information of the generated images, this approach ensures that the enhanced images preserve their original details and luminance characteristics while improving brightness.
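The brightness-related terms and the total objective of Equation (9) can be sketched as follows. Applying the dynamic-range term to the generated image G(x_A) and computing the statistics over whole tensors reflect our reading of Equations (7)–(9); exact implementation details are not specified in the paper.

```python
import torch

def rgb_to_gray(img):
    """Eq. (1) weights applied to an RGB tensor of shape (B, 3, H, W)."""
    r, g, b = img[:, 0:1], img[:, 1:2], img[:, 2:3]
    return 0.299 * r + 0.587 * g + 0.114 * b

def global_brightness_loss(fake, ref):
    """Eq. (7): match the mean grayscale value of the generated and reference images."""
    return (rgb_to_gray(fake).mean() - rgb_to_gray(ref).mean()).abs()

def dynamic_range_loss(fake, ref):
    """Eq. (8): match the grayscale dynamic range (max - min) of the two images."""
    gf, gr = rgb_to_gray(fake), rgb_to_gray(ref)
    return ((gf.amax() - gf.amin()) - (gr.amax() - gr.amin())).abs()

def total_loss(l_g_global, l_g_local, l_sfp, fake, ref):
    """Eq. (9): sum of global/local adversarial, SFP, and brightness-related terms."""
    return (l_g_global + l_g_local + l_sfp
            + global_brightness_loss(fake, ref) + dynamic_range_loss(fake, ref))

fake, ref = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(total_loss(torch.tensor(0.5), torch.tensor(0.5), torch.tensor(0.1), fake, ref).item())
```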

3. Experiments and Analysis

3.1. Experimental Environment and Data Description

To facilitate the recognition of various fault types in transmission line insulators, this study constructs a dedicated dataset of 4615 images. Each image may contain one or more target instances, including multiple instances of the same fault type. The dataset comprises four categories (normal insulator, self-exploded suspension insulator, broken insulator, and flashover insulator), labeled as Insulator, Defect, Broke, and Flashover, respectively. Figure 8 presents the number of samples in each category. Because two sets of comparison experiments are performed, the entire dataset is first split evenly: one subset is used to train the image enhancement model, which is then evaluated on the other subset. The enhanced images are finally employed for object detection and localization experiments.
For the detection experiments, the enhanced dataset is divided into training, validation, and test sets with an 8:1:1 ratio. All experiments are conducted on Ubuntu 20.04 using Python 3.8.0 and CUDA 11.2, in a PyTorch 1.8 environment accelerated by two NVIDIA GeForce RTX 3090-24G GPUs (the equipment was purchased from Xiandao Feng Information Technology Co., Ltd.). During the enhancement stage, the model is trained for 200 epochs with a batch size of 8, using 512 × 512 resolution images. Optimization adopts a stochastic gradient descent (SGD) strategy with an initial learning rate of 0.01, a minimum learning rate of 0.0001, momentum of 0.937, and weight decay of 0.0005. Early stopping is applied if accuracy does not improve for 20 epochs.

3.2. Experimental Index

The performance of the proposed model is evaluated using quantitative metrics tailored to both image enhancement and object detection tasks. For image enhancement, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are employed to measure brightness balance and structural fidelity. For object detection, average precision at IoU threshold 0.5 (AP50) and mean average precision (mAP) are used to evaluate detection accuracy across categories. The corresponding calculation formulas are provided below.
$$MSE = \frac{1}{IJ} \sum_{i=1}^{I} \sum_{j=1}^{J} \big[f(i,j) - F(i,j)\big]^2$$
$$PSNR = 10 \log_{10}\!\left(\frac{(2^n - 1)^2}{MSE}\right)$$
Among them, I and J denote the height and width of the image, n denotes the bit depth of the pixels (e.g., n = 8 for 8-bit images), and f(i, j) and F(i, j) represent the original image and the enhanced image, respectively.
$$SSIM(X,Y) = L(X,Y)\,C(X,Y)\,S(X,Y) = \frac{(2 u_X u_Y + C_1)(2\sigma_{XY} + C_2)}{(u_X^2 + u_Y^2 + C_1)(\sigma_X^2 + \sigma_Y^2 + C_2)}$$
Among them, L(X,Y), C(X,Y), and S(X,Y) are the luminance, contrast, and structure comparison functions, respectively; X and Y are the original image and the enhanced image; u_X and u_Y, σ_X² and σ_Y², and σ_XY are the means, variances, and covariance of X and Y, respectively; and C1 and C2 are constants that stabilize the division.
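For reference, the two metrics can be computed directly from the formulas above. The sketch below uses a single global window for SSIM (library implementations typically average over local sliding windows) and the common constants C1 = (0.01·255)² and C2 = (0.03·255)², which are our assumptions.

```python
import numpy as np

def psnr(f, F, n_bits=8):
    """PSNR between the original image f and the enhanced image F."""
    f, F = f.astype(np.float64), F.astype(np.float64)
    mse = np.mean((f - F) ** 2)                       # MSE as defined above
    return 10 * np.log10(((2 ** n_bits - 1) ** 2) / mse)

def ssim(X, Y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM following the formula above."""
    X, Y = X.astype(np.float64), Y.astype(np.float64)
    mu_x, mu_y = X.mean(), Y.mean()
    var_x, var_y = X.var(), Y.var()
    cov_xy = ((X - mu_x) * (Y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy example: a slightly brightened copy of a random 8-bit image.
a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
b = np.clip(a.astype(int) + 10, 0, 255).astype(np.uint8)
print(psnr(a, b), ssim(a, b))
```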

3.3. Transmission Line Power Component Defect Enhancement Data Description

To fully enhance the model’s generalization ability and robustness under complex lighting conditions, this paper constructs a synthetic dataset specifically designed to simulate extreme lighting conditions encountered during transmission line inspection. During the image enhancement training process, the entire original dataset is used for simulated extreme lighting condition enhancement, with the synthesized dataset being split in a 1:1 ratio for model training and generalization performance testing.
As shown in Figure 9, to simulate extreme lighting scenarios that may be encountered in inspection tasks, this dataset randomly converts every image sample into either an overexposed or a low-illumination scene, without retaining any original images with normal lighting conditions. Overexposed samples are synthesized by increasing the overall image brightness, effectively simulating overexposure phenomena caused by direct sunlight or component reflections during transmission line inspections. Low-illumination samples are synthesized by decreasing the overall image brightness to simulate situations such as cloudy days, evening periods, or environments with occlusions, where insufficient brightness may occur. By randomly generating overexposed and low-illumination samples, the model is ensured to undergo comprehensive training under a variety of complex lighting conditions, enabling it to learn a more comprehensive and rich set of visual features.
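A simple way to synthesize such samples is global brightness scaling, sketched below; the scaling ranges are illustrative assumptions rather than the values used to build the dataset.

```python
import numpy as np

def simulate_exposure(img, mode="low", factor=None, rng=None):
    """Create an over-exposed or low-illumination sample by scaling global brightness.
    The factor ranges here are illustrative assumptions."""
    rng = rng if rng is not None else np.random.default_rng()
    if factor is None:
        factor = rng.uniform(0.1, 0.4) if mode == "low" else rng.uniform(1.8, 3.0)
    out = img.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

# Each original image is randomly turned into either a low-light or an over-exposed sample.
img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stands in for an inspection image
dark, bright = simulate_exposure(img, "low"), simulate_exposure(img, "high")
```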

3.4. Image Enhancement Experimental Analysis

3.4.1. Image Enhancement Ablation Experiment Analysis

We compare the performance of several object detection models under complex lighting conditions to validate the effectiveness of the proposed enhancement method in improving model performance. The experiments use both the synthetically degraded lighting dataset and the complex-lighting images enhanced by the proposed model.
First, we conduct ablation experiments on image enhancement to evaluate the specific contribution of each component to the performance. The experimental results are summarized in Table 1.
The three improved modules designed in this paper—backbone network (Backbone), gray attention mechanism (Gray-Attention), and loss function constraints (Loss)—all have a significant impact on the performance of the image enhancement task. The following is a detailed analysis:
Single Module Enhancement Experiments. Backbone Only: PSNR improves from 7.73 to 12.57, and SSIM from 0.43 to 0.78. This substantial gain indicates that the backbone network greatly enhances feature extraction, resulting in improved structural fidelity and image quality. Gray-Attention Only: PSNR rises to 14.44, and SSIM to 0.81. This demonstrates that the gray attention mechanism effectively addresses uneven illumination issues, leading to a marked improvement in enhancement quality. Loss Only: PSNR improves to 11.65, and SSIM to 0.74. Although the improvement is less pronounced, it positively influences training stability and performance optimization.
Module Combination Experiments. Backbone + Loss: PSNR reaches 16.51, and SSIM rises to 0.84, indicating that the combined effect improves detail restoration and structural integrity. Backbone + Gray-Attention: PSNR attains 17.37, and SSIM reaches 0.83, highlighting that the synergy between the backbone and attention mechanism effectively mitigates lighting discrepancies, enhancing overall image quality. Gray-Attention + Loss: PSNR reaches 16.54, and SSIM 0.84, underscoring that this combination further refines processing under uneven lighting compared to either module alone. All Three Combined (Backbone + Gray-Attention + Loss): PSNR achieves 18.41, and SSIM 0.85, revealing a strong synergistic effect. Integrating all three modules yields the highest enhancement performance, improving detail recovery and structural quality.
Overall Summary. Each module addresses a distinct enhancement aspect. The backbone network improves feature extraction, the gray attention mechanism corrects uneven illumination, and the loss function enhances training stability and generalization. When integrated, these modules achieve a synergistic effect that maximizes image enhancement accuracy and quality.

3.4.2. Experiment on Synthetic Image Enhancement with Uneven Illumination

The proposed target enhancement module, which preprocesses the original images, is designed to improve the accuracy of the subsequent transmission line fault detection module. In its initial stage, this module moderately enhances image quality and identifies the insulator region. To evaluate its effectiveness, five advanced image enhancement approaches (Zero-DCE++ [25], CutGAN [26], StyleGAN [27], RetinexNet, and EnlightenGAN) are compared against the proposed target enhancement module. All experiments are conducted using the self-constructed dataset, focusing on both low-illumination and high-exposure images. Comparative results are presented in Table 2 and Table 3. In these tables, the first and last rows show the input images and their corresponding ground-truth label images, while the intermediate rows display results from the various enhancement models.
(1)
Enhanced performance analysis for low-light scenes
In low-illumination scenarios, all compared models increase image brightness to varying degrees. However, the unsupervised Zero-DCE++ approach, lacking a brightness reference, suffers from severe color distortion and inconsistency despite effectively brightening the image. RetinexNet also introduces color distortion and heightened noise, resulting in considerable detail loss. EnlightenGAN and CutGAN produce comparatively limited brightness enhancements, leaving some regions insufficiently illuminated. While StyleGAN performs better overall, it still produces uneven brightness, with transmission lines appearing overly bright and backgrounds remaining dark. In contrast, the proposed algorithm substantially improves brightness, contrast, and color saturation, yielding clearer images that facilitate subsequent fault detection. Visually, these results validate the proposed model’s effectiveness in enhancing low-illumination images. The specific renderings are shown in Table 2 below.
Table 2. Low-light visualization results analysis. (Visual comparison table; rows from top to bottom: input images, ZeroDCE++, RetinexNet, EnlightenGAN, CutGAN, StyleGAN, Ours, and ground-truth Labels. Images not reproduced here.)
(2)
Enhanced performance analysis for high exposure scenes
In high-exposure scenarios, each compared model reduces image brightness, yet their quality improvements differ. Zero-DCE++, as in the low-illumination case, effectively adjusts brightness but introduces severe color distortion and inconsistency. RetinexNet over-darkens the image, leading to over-processing. EnlightenGAN and CutGAN, originally designed for low-illumination enhancement, are less effective for high-exposure images, producing results that gravitate toward a fixed brightness value. While StyleGAN outperforms these methods overall, it still suffers from uneven brightness distribution, with excessively bright transmission lines contrasting with dim backgrounds, as seen in the third image. By contrast, the proposed algorithm mitigates high-exposure effects more effectively, offering balanced brightness and improved clarity. From a visual standpoint, these results confirm the proposed model’s efficacy in enhancing high-exposure images.
Table 3. High-exposure visual results analysis. (Visual comparison table; rows from top to bottom: input images, ZeroDCE++, RetinexNet, EnlightenGAN, CutGAN, StyleGAN, Ours, and ground-truth Labels. Images not reproduced here.)

3.4.3. Experimental Test of Image Enhancement on Actual Power Transmission Lines

In the previous experiments, we used synthetic images with uneven lighting to evaluate the model’s performance. To further validate the model’s performance in real-world scenarios, we introduced actual uneven lighting data from real scenes for testing.
(1)
Comparison of actual image detection results
We conducted a comprehensive analysis of various YOLO versions on complex lighting datasets, focusing on three key dimensions: baseline accuracy without the Image Enhancement Module (IEM), accuracy improvements after introducing IEM, and detailed improvements across different detection tasks. The analysis results are summarized in Table 4.
Baseline Accuracy Without IEM: Multiple YOLO variants (v5, v7, v8, v9, and v10) were evaluated on tasks such as insulator, defect, flashover, and damage detection. Earlier versions (e.g., YOLOv5) performed relatively poorly, with YOLOv5s achieving an AP of 94.3% for insulator detection but only 64.9% for flashover detection. As model versions advanced, performance improved, particularly for complex detection tasks. YOLOv8 exhibited the best performance, indicating its suitability for complex lighting conditions in our dataset.
Accuracy Analysis After Introducing IEM: After introducing the Image Enhancement Module (IEM), all YOLO versions showed general improvements in detection performance. Notably, IEM significantly boosted detection accuracy in tasks like insulator and defect detection. Additionally, YOLOv8m+IEM achieved an mAP of 89.8%, surpassing the non-enhanced model’s mAP of 87.8%.
Detailed Accuracy Improvement Analysis: The enhancements varied by detection task. In terms of overall mAP, all YOLO models benefited from IEM: YOLOv5s improved by 0.8 points, while YOLOv7Tiny saw the largest gain of 4.2 points. The YOLOv8 series improved by 1.7 and 2.0 points, the YOLOv9 series by 0.2 and 1.0 points, and the YOLOv10 series by 1.6 and 2.4 points. These consistent improvements confirm the efficacy of the proposed image enhancement approach.
Although the overall mAP shows an improvement after introducing the illumination enhancement module (IEM), a slight decline in the AP of certain categories (e.g., the flashover class in YOLOv5) can still be observed. This phenomenon may result from the illumination enhancement altering the texture or brightness distribution of specific samples. In particular, for categories like flashover, whose visual features are mainly represented by high-brightness or overexposed regions, excessive enhancement may reduce the feature contrast between target and background, leading to a minor degradation in detection accuracy. Moreover, the IEM focuses on global illumination correction rather than fine-grained texture recovery, which can slightly affect models that rely heavily on local feature cues.
(2)
Visual comparison of actual image detection results
Table 5 presents enhanced and detected results using the proposed model on real-world images. In the first row, the original image appears overexposed, while the enhanced version more effectively corrects brightness and yields higher detection confidence. In the second through fourth rows, the images suffer from underexposure, causing some insulator damage to go undetected. After enhancement, brightness is improved, reducing missed detections. In the fifth row, flashover issues on insulators are present against a darker background; the proposed model’s enhancement clarifies the image, increasing flashover detection accuracy. In the sixth row, an overly bright background interferes with flashover detection, resulting in missed cases using the original model. By contrast, the proposed model appropriately reduces brightness, enhancing the flashover detection rate. Overall, these real-world results further confirm the effectiveness of the proposed image enhancement model.

4. Discussion

Although the proposed unsupervised image enhancement framework has demonstrated considerable effectiveness for multiple fault detection of insulators under complex lighting conditions, this study has certain limitations that point toward valuable future research directions.
Firstly, in a minority of extreme over-exposed or low-light scenarios, the enhanced images may occasionally exhibit slight color casts. This indicates that achieving better preservation of natural color fidelity while balancing luminance remains an area for further investigation.
Secondly, the dataset constructed for this study suffers from class imbalance, with “normal” insulator samples significantly outnumbering specific fault categories such as “flashover.” While the image enhancement preprocessing improved the overall performance of the detection model, this skewed data distribution might still limit the model’s ability to capture subtle features of the minority fault classes.
Finally, the current work focuses specifically on insulators. The core idea underlying our approach—improving the performance of subsequent detection tasks via unsupervised image enhancement—holds the potential to be transferred to the defect detection of other power equipment (e.g., surge arresters, instrument transformers). Verifying and extending the generalizability of this framework constitutes an important direction for future work.

5. Conclusions

To achieve insulator fault detection in complex environments with uneven lighting, especially under high-exposure and low-light conditions, this paper proposes several innovative designs for efficient lighting enhancement strategies. The following conclusions can be drawn:
  • The proposed enhancement strategy can adaptively enhance images with imbalanced lighting under the condition of unpaired data. This method effectively improves the lighting balance of the generated images, preserving local details and global consistency. As a result, the enhanced images are more suitable for subsequent detection and recognition tasks. The PSNR of the enhanced images increases from 7.73 to 18.41, and the SSIM improves from 0.43 to 0.85.
  • Furthermore, validation on a real-world dataset shows that the target detection algorithm, after image enhancement, significantly outperforms direct detection on the raw images. The mAP on the insulator defect detection dataset improves by 0.1 to 4.2 percentage points across the different models.
  • Finally, the insulator fault dataset constructed in this paper suffers from class imbalance: detection accuracy for intact insulator targets is high, while the minority fault classes remain more difficult and are further affected by complex backgrounds. Future work will therefore continue to address the challenges posed by complex backgrounds and targets in complex scenes.

Author Contributions

Conceptualization, J.G. and Y.L.; Methodology, J.G.; Software, M.H.; Validation, M.H.; Resources, G.H. and Y.L.; Data curation, M.H.; Supervision, G.H. and K.L.; Project administration, L.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by The National Key R&D Program of China (No. 2023YFB2406900). The sponsor of the funds is L.Q.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Embargo on data due to commercial restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Meng, Z.; Xu, S.; Wang, L.; Gong, Y. Defect object detection algorithm for electroluminescence image defects of photovoltaic modules based on deep learning. Energy Sci. Eng. 2022, 10, 800–813. [Google Scholar] [CrossRef]
  2. Li, D.; Ren, H.; Wang, G.; Wang, S.; Wang, W.; Du, M. Coal gangue detection and recognition method based on multiscale fusion lightweight network SMS-YOLOv3. Energy Sci. Eng. 2023, 11, 1783–1797. [Google Scholar] [CrossRef]
  3. Shan, P.F.; Yang, T.; Wu, X.C.; Sun, H.Q. A research study of lightweight state perception algorithm based on improved YOLOv5s-Tiny for fully mechanized top-coal caving mining. Energy Sci. Eng. 2024, 12, 2864–2878. [Google Scholar] [CrossRef]
  4. He, M.; Qin, L.; Deng, X.; Liu, K. MFI-YOLO: Multi-fault insulator detection based on an improved YOLOv8. IEEE Trans. Power Deliv. 2023, 39, 168–179. [Google Scholar] [CrossRef]
  5. Tao, X.; Zhang, D.; Wang, Z.; Liu, X.; Zhang, H.; Xu, D. Detection of Power Line Insulator Defects Using Aerial Images Analyzed With Convolutional Neural Networks. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1486–1498. [Google Scholar] [CrossRef]
  6. Jiang, D.; Cao, Y.; Yang, Q. Weakly-supervised learning based automatic augmentation of aerial insulator images. Expert Syst. Appl. 2024, 242, 122739. [Google Scholar] [CrossRef]
  7. Yang, Z.; Xu, Z.; Wang, Y. Bidirection-Fusion-YOLOv3: An Improved Method for Insulator Defect Detection Using UAV Image. IEEE Trans. Instrum. Meas. 2022, 71, 3521408. [Google Scholar] [CrossRef]
  8. Han, G.; He, M.; Zhao, F.; Xu, Z.; Zhang, M.; Qin, L. Insulator detection and damage identification based on improved lightweight YOLOv4 network. Energy Rep. 2021, 7, 187–197. [Google Scholar] [CrossRef]
  9. Zhou, M.; Li, B.; Wang, J.; He, S. Fault Detection Method of Glass Insulator Aerial Image Based on the Improved YOLOv5. IEEE Trans. Instrum. Meas. 2023, 72, 5012910. [Google Scholar] [CrossRef]
  10. Woo, S.; Park, J.; Lee, J.Y. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  11. Hao, K.; Chen, G.; Zhao, L.; Li, Z.; Liu, Y.; Wang, C. An Insulator Defect Detection Model in Aerial Images Based on Multiscale Feature Pyramid Network. IEEE Trans. Instrum. Meas. 2022, 71, 3522412. [Google Scholar] [CrossRef]
  12. Yang, L.; Zhang, R.Y.; Li, L. Simam: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Online, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
  13. Zhao, W.; Xu, M.; Cheng, X.; Zhao, Z. An Insulator in Transmission Lines Recognition and Fault Detection Model Based on Improved Faster RCNN. IEEE Trans. Instrum. Meas. 2021, 70, 5016408. [Google Scholar] [CrossRef]
  14. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  15. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  16. Zhang, X.; Zhang, Y.; Liu, J.; Zhang, C.; Xue, X.; Zhang, H.; Zhang, W. InsuDet: A Fault Detection Method for Insulators of Overhead Transmission Lines Using Convolutional Neural Networks. IEEE Trans. Instrum. Meas. 2021, 70, 5018512. [Google Scholar] [CrossRef]
  17. Yuan, Z.; Zeng, J.; Wei, Z.; Jin, L.; Zhao, S.; Liu, X.; Zhang, Y.; Zhou, G. CLAHE-Based Low-Light Image Enhancement for Robust Object Detection in Overhead Power Transmission System. IEEE Trans. Power Deliv. 2023, 38, 2240–2243. [Google Scholar] [CrossRef]
  18. Liu, W.; Ren, G.; Yu, R.; Guo, S.; Zhu, J.; Zhang, L. Image-adaptive YOLO for object detection in adverse weather conditions. Proc. AAAI Conf. Artif. Intell. 2022, 36, 1792–1800. [Google Scholar] [CrossRef]
  19. Zhang, H.; Xiao, L.; Cao, X.; Foroosh, H. Multiple adverse weather conditions adaptation for object detection via causal intervention. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 46, 1742–1756. [Google Scholar] [CrossRef] [PubMed]
  20. Zhang, Z.D.; Zhang, B.; Lan, Z.C.; Liu, H.C.; Li, D.Y.; Pei, L.; Yu, W.X. FINet: An Insulator Dataset and Detection Benchmark Based on Synthetic Fog and Improved YOLOv5. IEEE Trans. Instrum. Meas. 2022, 71, 6006508. [Google Scholar] [CrossRef]
  21. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  22. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  23. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
  24. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.K.; Wang, Z.; Smolley, S.P. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2794–2802. [Google Scholar]
  25. Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef] [PubMed]
  26. Park, T.; Efros, A.A.; Zhang, R.; Zhu, J.Y. Contrastive learning for unpaired image-to-image translation. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 319–345. [Google Scholar]
  27. Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-Free Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2021, 34, 852–863. [Google Scholar]
Figure 1. Overall algorithm structure diagram.
Figure 2. Backbone feature extraction module.
Figure 3. Backbone feature extraction structure.
Figure 4. Discriminator network structure. (a) Global discrimination. (b) Local discrimination.
Figure 5. Comparison of normal lighting images and complex lighting images. (a) Normal brightness. (b) Low illumination. (c) High exposure. (d) HSV value of normal brightness. (e) HSV value at low illumination. (f) HSV value of high exposure.
Figure 6. The grayscale and inverse grayscale mapping of an unbalanced brightness image. (a) High exposure image. (b) Grayscale image mapping with high exposure. (c) High exposure reverse grayscale image mapping. (d) Low exposure image. (e) Low-exposure grayscale mapping. (f) Low exposure reverse grayscale mapping.
Figure 7. Grayscale attention feature fusion structure.
Figure 8. Data distribution chart.
Figure 9. Comparison of images with different brightness levels. (a) Original image. (b) Overexposed image. (c) Low-light image.
Table 1. Ablation Experiment Analysis on Synthetic Data.
Backbone | Gray-Attention | Loss | PSNR (dB) | SSIM
-        | -              | -    | 7.73      | 0.43
✓        | -              | -    | 12.57     | 0.78
-        | ✓              | -    | 14.44     | 0.81
-        | -              | ✓    | 11.65     | 0.74
✓        | -              | ✓    | 16.51     | 0.84
✓        | ✓              | -    | 17.37     | 0.83
-        | ✓              | ✓    | 16.54     | 0.84
✓        | ✓              | ✓    | 18.41     | 0.85
Table 4. Accuracy comparison of insulator and defect datasets before and after enhancement.
Model | Insulator AP/% | Defect AP/% | Flashover AP/% | Broke AP/% | mAP/%
YOLOv5s | 94.3 | 89.8 | 64.9 | 73.6 | 80.6
YOLOv5s + IEM | 94.6 | 91.3 | 63.8 | 76.0 | 81.4 (+0.8)
YOLOv5m | 94.9 | 92.3 | 73.1 | 86.4 | 86.7
YOLOv5m + IEM | 96.0 | 90.8 | 72.7 | 87.7 | 86.8 (+0.1)
YOLOv7Tiny | 90.5 | 80.8 | 54.0 | 63.6 | 72.2
YOLOv7Tiny + IEM | 95.7 | 91.8 | 68.5 | 73.5 | 79.4 (+4.2)
YOLOv8s | 95.0 | 90.9 | 65.3 | 78.3 | 82.3
YOLOv8s + IEM | 94.8 | 91.9 | 70.9 | 78.3 | 84.0 (+1.7)
YOLOv8m | 95.3 | 92.9 | 74.4 | 88.7 | 87.8
YOLOv8m + IEM | 95.7 | 93.2 | 77.9 | 92.4 | 89.8 (+2.0)
YOLOv9s | 94.3 | 88.1 | 61.1 | 75.3 | 79.7
YOLOv9s + IEM | 93.8 | 86.3 | 60.1 | 79.3 | 79.9 (+0.2)
YOLOv9m | 95.1 | 92.7 | 72.5 | 88.7 | 87.3
YOLOv9m + IEM | 95.7 | 93.4 | 74.7 | 89.3 | 88.3 (+1.0)
YOLOv10s | 92.8 | 88.0 | 61.3 | 67.7 | 77.4
YOLOv10s + IEM | 93.8 | 87.9 | 62.8 | 71.4 | 79.0 (+1.6)
YOLOv10m | 93.5 | 87.2 | 64.7 | 69.3 | 78.7
YOLOv10m + IEM | 93.6 | 89.8 | 67.8 | 73.1 | 81.1 (+2.4)
Table 5. Visual comparison of actual image detection results. (Image table with four columns: Original Image, Original Image Detection Results, Enhanced Images, Enhanced Image Detection Results; six example rows. Images not reproduced here.)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
