Article

Unsupervised Detection of Surface Defects in Varistors with Reconstructed Normal Distribution Under Mask Constraints

College of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(19), 10479; https://doi.org/10.3390/app151910479
Submission received: 22 August 2025 / Revised: 20 September 2025 / Accepted: 23 September 2025 / Published: 27 September 2025

Abstract

Surface defect detection is a crucial part of quality control for varistors, yet it faces practical challenges such as the scarcity of defect samples, high labelling costs, and limited prior knowledge, which has drawn attention to unsupervised deep learning-based detection methods. However, existing unsupervised models suffer from inaccurate defect localisation and low recognition rates for subtle defects. To address these problems, an unsupervised detection method (Var-MNDR) is proposed that reconstructs the normal distribution of varistor surfaces under mask constraints. Firstly, an image preprocessing method based on colour space and morphology is proposed to extract the main body of the varistor image, and a mask-constrained body pseudo-anomaly generation strategy is adopted so that the model focuses on the texture distribution of the main body region rather than the background, improving the defect localisation capability of the model. Secondly, Kolmogorov–Arnold Networks (KANs) are combined with the U-Network (U-Net) to construct a segmentation sub-network, and the Gaussian radial basis function is introduced as the learnable activation function of the KAN to strengthen the model's ability to express image features, enabling more accurate defect detection. Finally, comparison with four unsupervised defect detection methods demonstrates the superiority and generalisation of the proposed method.

1. Introduction

As important electronic components, varistors play a key role in protecting circuits and stabilising voltage, acting as the 'safety valve' of a circuit system. However, owing to the complexity of the production process, varied and irregularly distributed defects easily appear on the surface of varistors during actual production [1], and these defects adversely affect their performance and service life to a certain extent. Traditional manual inspection generally suffers from a high missed-detection rate and low efficiency, and struggles to meet the quality and efficiency requirements of modern production [2]. Therefore, an effective surface defect detection method can not only improve product quality and production efficiency but also significantly reduce labour costs, while providing strong support for the intelligent transformation of the production process.
With the rapid development of industrial technology and the continuous expansion of application scenarios, traditional machine vision inspection methods have shown clear advantages in high-speed, continuous operation thanks to their efficient data processing, providing strong support for modern industrial production efficiency. However, these methods rely mainly on manually designed algorithms and domain prior knowledge for feature extraction [3]; although they can achieve good results for a specific type of varistor in a specific scenario, their generalisation ability and environmental adaptability are clearly limited, especially in industrial environments with large fluctuations in lighting conditions or complex background interference [4]. Under such conditions the detection accuracy of traditional machine vision systems degrades significantly, making it difficult to meet the strict stability requirements of modern intelligent manufacturing.
To overcome the limitations of traditional machine vision inspection in practical applications, deep learning has brought new breakthroughs to defect detection. Supervised learning [5], as an important branch of deep learning, can accurately establish the mapping between defect features and category labels by training on large amounts of labelled data, achieving high-precision defect detection. However, it also has notable shortcomings. First, acquiring high-quality labelled data requires considerable human and material cost, and subjective errors are inevitably introduced during labelling; second, the generalisation of the model depends heavily on the distributional completeness of the training data, so the model is prone to misjudgements and missed detections when facing out-of-distribution samples [6]. More critically, in some industrial scenarios the scarcity of defective samples further restricts the scalability of supervised methods. In contrast, unsupervised methods construct a feature representation space from defect-free samples and reconstruct defective samples as defect-free ones [7], locating defective regions by comparing the differences before and after reconstruction. The key advantage of this approach is that defect categories need not be predefined, which avoids the cumbersome sample labelling process.
In the defect detection task, Transformer-based masked image modelling covers the defective region with a target mask [8], so that the model reconstructs it from the features of the surrounding defect-free region; however, residuals of the target mask (mask artifacts) often appear during reconstruction, making it difficult to localise defects precisely by comparing the images before and after reconstruction. CNNs rely mainly on convolution over local receptive fields and struggle to model long-range dependencies between pixels, so their perception of global semantic information is weak and defect segmentation boundaries are often unclear. Moreover, the non-periodic random texture on the varistor surface is coupled with complex background interference, and the translation-invariant feature extraction of CNNs can misidentify discretely distributed defect features as normal texture; this feature confusion makes it difficult for the model to distinguish defective from normal regions, leading to mis-segmentation. Diffusion models, with their inherently multi-step iterative denoising, incur substantial computational and memory overhead [7], posing challenges for resource-constrained industrial deployment. Their performance also depends heavily on large amounts of high-quality training data to learn the normal distribution accurately, so overfitting is likely when training samples are limited. In addition, the random sampling inherent in diffusion models may introduce unnecessary noise or uncertainty, reducing the stability of reconstruction and causing false detections when comparing pre- and post-reconstruction differences.
To address these problems, an unsupervised method for detecting surface defects of varistors by reconstructing the normal distribution under mask constraints is proposed. To make the model focus on the texture distribution of the main region of the image and reduce attention to the background, an image preprocessing method based on colour space and morphology is developed to remove the background and extract the main body of the varistor image, and a mask-constrained body pseudo-anomaly generation strategy is adopted, which effectively alleviates defect residuals in the reconstruction results and improves the localisation capability of the model. The KAN is combined with the U-Net to construct a segmentation sub-network, and the Gaussian radial basis function is introduced as the learnable activation function of the KAN to enhance the model's ability to express image features, revealing the differences between the original and reconstructed images more accurately and enabling more accurate recognition of subtle defects.
The main contributions of this paper are as follows:
(a) On the basis of colour space as well as morphology, an image pre-processing method is proposed to extract the body image of the varistors, and a mask-constrained body pseudo-anomaly generation strategy is adopted to enable the model to focus on the texture distribution of the main region of the image, reduce the model’s focus on the background region, alleviate the defective residual phenomenon that occurs in the reconstruction results of the reconstruction network, and enhance the model’s localisation capability.
(b) The KAN is combined with the U-Net to construct a segmentation sub-network, and the Gaussian radial basis function (GRBF) is introduced as the learnable activation function of the KAN to enhance the model's ability to express image features, enabling more accurate recognition of subtle defects.
(c) A multi-colour and multi-specification varistor dataset is established to evaluate the performance of the proposed unsupervised varistor defect detection model. The experimental results show that the proposed method has good superiority and generalisation in the varistor defect detection task.
The remainder of this article is organised as follows. Section 2 reviews related work on image reconstruction methods and KAN-based architectures. Section 3 presents the proposed method for the unsupervised detection of surface defects in varistors by reconstructing the normal distribution under mask constraints. Section 4 describes the dataset acquisition equipment, the datasets, the training details and the evaluation metrics, and discusses the results of each experiment. Section 5 summarises the experimental results and outlines future research directions.

2. Related Work

2.1. Image Reconstruction Methods

Unsupervised defect detection methods mainly follow the image reconstruction idea [9]: a model captures the distribution of defect-free samples, learns their intrinsic laws and features, reconstructs a defective image into a defect-free one, and finally locates the defective region by comparing the images before and after reconstruction. Zhang et al. [10] proposed an anomaly detection method based on a feature selection network, which uses a diffusion-based synthesis strategy to simulate the distribution of real defective samples, anomaly-aware feature selection to choose a representative and distinguishable subset of pre-trained features, and reconstruction residual selection to adaptively select discriminative residuals, achieving comprehensive identification of anomalous regions at multiple granularity levels. Li et al. [11] proposed a synthetic anomaly generation strategy based on cutting and pasting image patches: spatially continuous pseudo-anomaly samples are constructed by randomly selecting image sub-regions and applying affine transformations, and, combined with a contrastive learning framework, this effectively guides the model to learn the features and structure of normal samples. Zhang et al. [12] proposed a model combining a variational auto-encoder with adversarial training, generating latent-space anomaly localisation maps by directing the attention mechanism towards normal regions. Wang et al. [13] proposed an improved multiscale U-Net that extends the receptive field with dilated convolutional layers and designs a cross-scale feature pyramid fusion mechanism to combine features at different abstraction levels, significantly improving the defect detection capability of the model.

2.2. Based on the KAN Architecture Methods

Liu et al. [14] proposed a novel neural network structure called the Kolmogorov–Arnold Network (KAN); unlike the fixed activation functions of traditional MLPs, KANs use learnable activation functions and reparameterise the weights as univariate spline functions, which significantly enhances representation capability. In computer vision, Bodner et al. [15] constructed parameter-efficient convolutional layers by incorporating the non-linear activation functions of KANs into convolution operations; on the MNIST and Fashion-MNIST datasets, convolutional KANs match the accuracy of traditional convolutional neural networks with roughly half the parameters. Ferdaus et al. [16] proposed the KANICE architecture, integrating interactive convolutional blocks (ICBs) and KAN linear layers to exploit the universal approximation capability of KANs and the adaptive feature learning of ICBs; the synergistic advantages of this hybrid architecture are validated on the MNIST, Fashion-MNIST, EMNIST and SVHN datasets. Zhou et al. [17] combined Kolmogorov–Arnold networks with auto-encoders to build a hybrid framework that captures complex temporal dependencies and exhibits strong pattern modelling and reconstruction capability; the model performs well in time-series anomaly detection and can accurately identify and locate anomalous data points. Fang et al. [18] proposed KANDU-Net, which combines a KAN with a U-Net, capturing local and global features through a dual-channel KAN convolutional structure and fusing them with an auxiliary network. These studies show that KANs offer superior approximation capability and computational efficiency over traditional architectures while remaining compact, thanks to differentiable spline parameterisation and adaptive activation functions.

3. Proposed Methods

The Var-MNDR framework is divided into three parts, which are varistor image preprocessing, a model training phase, and a model testing phase, as shown in Figure 1.
(a) The original varistor image is processed through pixel channel processing and background colour removal to obtain a background-free varistor image. Elliptical feature fitting and affine transformation are used to obtain an elliptically corrected varistor image. Finally, Canny edge detection is used to find the maximum bounding rectangle to obtain the preprocessed varistor image.
(b) The model consists mainly of a reconstruction sub-network and a segmentation sub-network. In the training stage, the defect-free original samples are first preprocessed and then controllably damaged with artificially synthesised pseudo-anomaly information to form pseudo-anomaly images. After the pseudo-anomaly images are input into the reconstruction sub-network, it establishes the mapping between normal and abnormal texture features and finally reconstructs the pseudo-anomalous regions accurately according to the feature distribution of normal samples.
(c) In the model testing stage, defective sample images containing real defects are input and undergo the same preprocessing. Based on the prior knowledge of normal texture obtained during training, the reconstruction sub-network reconstructs potential defective regions that deviate from the normal texture space and generates defect-free reconstruction results consistent with the normal texture distribution. Finally, the reconstructed image is fed to the segmentation sub-network together with the preprocessed image to generate the final defect segmentation map, as sketched below.
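The test-time flow of (c) can be summarised by the following minimal Python/PyTorch sketch. The component names (`preprocess`, `recon_net`, `seg_net`) are placeholders for the modules described in Sections 3.1, 3.3 and 3.4, and feeding the preprocessed and reconstructed images to the segmentation sub-network as a channel-wise concatenation is an assumption consistent with Figure 1, not a detail stated in the text.

```python
import torch

@torch.no_grad()
def detect_defects(image_bgr, preprocess, recon_net, seg_net, threshold):
    """Test-time flow of Figure 1c with hypothetical component names.

    image_bgr : raw varistor image (H x W x 3, uint8)
    preprocess: Section 3.1 pipeline, returns a (1, 3, 256, 256) float tensor
    recon_net : reconstruction sub-network (Section 3.3)
    seg_net   : segmentation sub-network (Section 3.4)
    threshold : decision threshold from Section 3.5
    """
    x = preprocess(image_bgr)                 # background removal + pose normalisation
    x_rec = recon_net(x)                      # defect-free reconstruction of the input
    seg_in = torch.cat([x, x_rec], dim=1)     # assumed channel-wise concatenation
    anomaly_map = seg_net(seg_in)             # pixel-wise anomaly scores
    defect_mask = (anomaly_map > threshold).float()
    return anomaly_map, defect_mask
```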

3.1. Image Preprocessing Algorithms

Since the varistor dataset is collected manually and the position of the varistor in each image is random, the image preprocessing algorithm shown in Figure 2 is applied to remove redundant background features and unify the pose of the varistor images.
The steps of the algorithm are described as follows (a compact code sketch is given after the step list):
Step 1 First, the colour spaces of the background image (captured without a varistor) and of the image to be processed (captured with a varistor) are converted from RGB to HSV, and the H-channel pixel values of the two HSV images are retained [19].
Step 2 The H-channel images obtained in Step 1 are subjected to image differencing and threshold segmentation to obtain a mask image [20]. The differencing process can be described as follows:
\Delta H = H_x - H_X
where $H_X$ denotes the background image, $H_x$ denotes the image to be processed, and $\Delta H$ denotes the differential image.
Step 3 Record the pixel positions whose value is 0 in the mask image from Step 2 and set the corresponding positions of the image to be processed to 0, obtaining the image to be processed with the background removed.
Step 4 Fit the mask image with elliptic features [21]; the standard form of the ellipse equation is as follows:
A x^2 + B x y + C y^2 + D x + E y + F = 0
subject to $B^2 - 4AC < 0$. Given the data point set $\{(x_i, y_i)\}_{i=1}^{N}$, define the parameter vector $\mathbf{a} = [A, B, C, D, E, F]^T$; the least-squares optimisation objective is as follows:
\min_{\mathbf{a}} \sum_{i=1}^{N} \left( A x_i^2 + B x_i y_i + C y_i^2 + D x_i + E y_i + F \right)^2
After fitting the coefficients $A, B, C, D, E, F$, calculate the centre coordinates, axis lengths and rotation angle, and perform the rotation and affine transformation according to these results.
Step 5 Rotate and affine transform the resultant image obtained in step 3 according to the centre coordinates [22], axis length and rotation angle in step 4 to obtain an ellipse corrected image to be processed.
Step 6 Perform Canny edge detection on the resultant image obtained in Step 4, find the maximum bounding rectangle and crop it to obtain a normalised mask image [23]. The gradient strength $M$ and gradient direction $\theta$ used in edge detection can be described as follows:
M = \sqrt{G_x^2 + G_y^2}
\theta = \arctan\left( \frac{G_y}{G_x} \right)
Step 7 The resultant image obtained in step 5 is cropped according to the maximum outer rectangle obtained in step 6 to obtain the normalised to-be-processed image.
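As a compact illustration of Steps 1–7, the following OpenCV sketch uses Otsu thresholding for the segmentation in Step 2 and `cv2.fitEllipse` in place of the explicit least-squares fit of Step 4; these substitutions, the omission of the morphological clean-up, and the 256 × 256 output size are simplifying assumptions.

```python
import cv2

def preprocess_varistor(img_bgr, bg_bgr, out_size=256):
    """Sketch of Steps 1-7: H-channel differencing, background removal,
    ellipse-based pose correction, and Canny-guided cropping."""
    # Steps 1-2: H-channel difference of image and background, then threshold -> mask
    h_img = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    h_bg = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    diff = cv2.absdiff(h_img, h_bg)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Step 3: zero out background pixels of the image to be processed
    body = cv2.bitwise_and(img_bgr, img_bgr, mask=mask)

    # Steps 4-5: fit an ellipse to the largest mask contour, rotate to a canonical pose
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), _axes, angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    h, w = mask.shape
    mask_rot = cv2.warpAffine(mask, rot, (w, h))
    body_rot = cv2.warpAffine(body, rot, (w, h))

    # Steps 6-7: Canny edges on the corrected mask, maximum bounding rectangle, crop
    edges = cv2.Canny(mask_rot, 50, 150)
    x, y, bw, bh = cv2.boundingRect(cv2.findNonZero(edges))
    crop = body_rot[y:y + bh, x:x + bw]
    return cv2.resize(crop, (out_size, out_size))
```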

3.2. Pseudo-Anomaly Generation Strategy

Since defects exist only in the main body region of the image, a mask-constrained body pseudo-anomaly generation strategy is proposed, as shown in Figure 3, so that the model focuses on the texture distribution of the body region and pays less attention to the background.
Firstly, a Perlin noise generator is used to produce a random noise image with natural texture characteristics (Figure 3, Perlin); dynamic thresholds obtained through uniform random sampling yield diverse noise patterns, ranging from minute pinholes, cracks and scratches to larger patches, while controlling the size and sparsity of the generated defect regions. The noise image is then binarised to produce an initial defect mask (Figure 3, Mp). Using the normalised mask obtained in Step 7 of Section 3.1 (Figure 3, Mt) as a spatial constraint, the noise regions that intersect the main body region are selected through morphological logic operations as effective pseudo-anomaly candidates (Figure 3, Ma). Finally, to keep the texture of the generated pseudo-anomaly image plausible, random dynamic weighting coefficients are used to spatially superimpose the anomaly source image (Figure 3, anomaly source) onto the original normal image (Figure 3, I), producing pseudo-anomalies consistent with the feature distribution of the body region. The synthetic image can be described as follows:
I_A = I_a + I_p + I_c
where $I_a = \beta \, (M_a \odot A)$, $I_p = (1 - \beta) \, (M_a \odot I)$ and $I_c = \bar{M}_a \odot I$, with $A$ the anomaly source image, $\beta$ the random blending weight, $\odot$ element-wise multiplication, and $\bar{M}_a$ the inverse of $M_a$.
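The following NumPy sketch illustrates the mask-constrained synthesis of Figure 3. The Perlin noise map is assumed to be precomputed (the generator itself is not shown), the intersection with the body mask is taken pixel-wise rather than per connected component, and the sampling ranges of the threshold and the blending weight β are illustrative assumptions.

```python
import numpy as np

def synthesize_pseudo_anomaly(image, body_mask, anomaly_source, perlin_noise, rng=None):
    """Mask-constrained pseudo-anomaly synthesis (Figure 3), as a sketch.

    image          : preprocessed normal image I, float array in [0, 1], H x W x 3
    body_mask      : normalised body mask Mt from Section 3.1, values in {0, 1}, H x W
    anomaly_source : anomaly source texture A, float array in [0, 1], H x W x 3
    perlin_noise   : precomputed Perlin noise map in [0, 1], H x W
    """
    rng = rng or np.random.default_rng()

    # Binarise the noise with a randomly sampled dynamic threshold -> Mp
    thr = rng.uniform(0.3, 0.7)                         # assumed sampling range
    mp = (perlin_noise > thr).astype(np.float32)

    # Constrain the noise mask to the body region -> Ma (pixel-wise intersection here)
    ma = mp * body_mask.astype(np.float32)

    # Blend the anomaly texture into the masked region with a random weight beta
    beta = rng.uniform(0.2, 0.8)                        # assumed blending range
    ma3 = ma[..., None]
    pseudo = (1.0 - ma3) * image + ma3 * (beta * anomaly_source + (1.0 - beta) * image)
    return pseudo, ma
```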

3.3. Reconstruction Sub-Network

The reconstruction sub-network consists of an encoder–decoder architecture. In the training phase, pseudo-anomalous images generated under the body mask constraint are fed into the reconstruction sub-network, which establishes the mapping between normal and abnormal texture features and reconstructs the pseudo-anomalous image into an image close to the original according to the feature distribution of normal samples, thereby accurately reconstructing the pseudo-anomalous regions. Structural similarity (SSIM) and mean square error (MSE) are used as constraints: the former ensures global semantic consistency by measuring the similarity of local structures in the image, while the latter preserves detailed features by constraining pixel-level differences. The loss function of the reconstruction network can be expressed as follows:
L_{rec}(I, I_r) = \lambda \, L_{SSIM}(I, I_r) + (1 - \lambda) \, \mathrm{MSE}(I, I_r)
where $L_{SSIM}(I, I_r)$ and $\mathrm{MSE}(I, I_r)$ denote the structural similarity loss and the mean square error between the preprocessed image $I$ and the reconstructed image $I_r$, respectively, and $\lambda$ is the weighting parameter balancing the two losses, set to 0.5 in the experiments.
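A minimal sketch of this loss, assuming the third-party `pytorch_msssim` package for the SSIM term (any differentiable SSIM implementation would do) and using 1 − SSIM as the structural loss term:

```python
import torch.nn.functional as F
from pytorch_msssim import ssim  # third-party SSIM; an assumption, any differentiable SSIM works

def reconstruction_loss(x, x_rec, lam=0.5):
    """L_rec = lambda * L_SSIM + (1 - lambda) * MSE, with lambda = 0.5 (Section 3.3).

    x, x_rec : tensors of shape (B, C, H, W) with values scaled to [0, 1]
    """
    l_ssim = 1.0 - ssim(x, x_rec, data_range=1.0)  # structural similarity turned into a loss
    l_mse = F.mse_loss(x_rec, x)
    return lam * l_ssim + (1.0 - lam) * l_mse
```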

3.4. Segmentation Sub-Network

To achieve precise identification of minor defects, we retain the overall topology of the U-Net, namely the encoder–decoder structure with skip connections, while replacing the standard convolutional blocks in the encoder and decoder with G-KAN blocks. We also introduce Gaussian radial basis functions (GRBFs) as learnable activation functions for the KAN, enhancing the model's ability to express non-linear features and improving segmentation accuracy. In addition, the GRBF is used to approximate B-spline functions, and layer normalisation keeps the input within the RBF domain, accelerating training without compromising performance.
The Kolmogorov–Arnold representation theorem states that any multivariate continuous function defined on a bounded domain can be represented as a two-level nested composition of a finite number of univariate functions; its mathematical expression is as follows [14]:
f(x) = f(x_1, \ldots, x_n) = \sum_{q=1}^{2n+1} \Phi_q \left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
where the $\phi_{q,p}$ are a set of learnable univariate functions, each acting on the $p$-th component of the input and combined by summation, and the $\Phi_q$ are another set of learnable functions that apply a non-linear transformation to the combined intermediate results; the output is obtained by summing over the $2n + 1$ terms.
Assuming that the input of the KAN is $x$, that $x_k$ is the output of the $k$-th layer, and that $\Phi_k$ is the layer of activation functions mapping the $k$-th layer to the $(k+1)$-th layer, the output of the $(k+1)$-th layer can be expressed as follows:
x_{k+1} = \Phi_k(x_k)
The GRBF is a typical local response function, exhibiting high sensitivity to minute variations in the input space near the centre point. As minute defects often manifest as slight anomalies in grayscale, texture, or edge information within local image regions, this function effectively amplifies such subtle differences, enhancing sensitivity for detecting minute defects. Moreover, the GRBF is continuously differentiable, exhibiting smooth and stable output variations near defect edges or central regions. This facilitates the precise localisation of defect boundaries. Using the GRBF to approximate the B-spline function, each channel feature is mapped in detail through the learnable GRBF. Compared with traditional activation functions, it can better capture the non-linear boundary features of subtle defects. The GRBF expression can be written as follows:
\phi(x) = e^{-\epsilon \| x - c \|^2}
where $x$ is the input vector, $c$ is the centre of the basis function, $\epsilon$ is the shape parameter controlling the width of the function, and $\| x - c \|$ is the Euclidean distance between $x$ and $c$. If $W_k$ is defined as the weight of the $k$-th layer, the Gaussian radial basis activation function incorporated into the KAN can be defined as follows:
\mathrm{GRBF}(x) = \sum_{k=1}^{n} W_k \cdot \phi(x_k) = \sum_{k=1}^{n} W_k \cdot e^{-\epsilon \| x_k - c \|^2}
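A PyTorch sketch of the learnable GRBF activation used in Step 1 of the G-KAN block follows; the number of centres K, their initial spacing and the per-channel parameterisation are illustrative assumptions, and the weights $W_k$ are left to the convolution that follows in the block.

```python
import torch
import torch.nn as nn

class GRBFActivation(nn.Module):
    """Learnable Gaussian radial basis activation phi(x) = exp(-eps * (x - c)^2),
    applied independently per input channel with K learnable centres (Section 3.4).
    A sketch; sizes and initialisation are illustrative."""

    def __init__(self, in_channels, num_centers=8):
        super().__init__()
        # One set of K centres and shape parameters per input channel
        self.centers = nn.Parameter(torch.linspace(-2.0, 2.0, num_centers).repeat(in_channels, 1))
        self.log_eps = nn.Parameter(torch.zeros(in_channels, num_centers))

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        x = x.unsqueeze(2)                       # (B, C, 1, H, W)
        centers = self.centers.view(1, c, -1, 1, 1)
        eps = self.log_eps.exp().view(1, c, -1, 1, 1)
        resp = torch.exp(-eps * (x - centers) ** 2)   # (B, C, K, H, W) basis responses
        return resp.flatten(1, 2)                # (B, K*C, H, W), as in Step 1
```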
The segmentation sub-network employs multiple G-KAN blocks to construct a U-Net architecture with symmetric skip connections. The encoder progressively extracts high-level semantic features of defects through operations such as convolution and downsampling, while simultaneously reducing spatial resolution and expanding the receptive field. The decoder then gradually restores the spatial dimensions of the feature maps through upsampling and convolution, mapping the abstract semantic features extracted by the encoder back to the original image resolution to generate pixel-level defect segmentation feature maps. Skip connections transmit feature maps from corresponding encoder layers to corresponding decoder layers, enabling the fusion of high-resolution detail features from shallow encoder layers with semantic features from deep decoder layers. This compensates for the spatial information loss caused by downsampling. The overall architecture of the segmentation sub-network is shown in Figure 4.
The specific steps for the G-KAN block are as follows:
Step 1 To capture complex, non-linear patterns in the input features, we introduce the GRBF as the learnable activation function for the KAN, which maps the input features in a non-linear manner, transforming them from the original channel space to a high-dimensional feature space defined by learnable centre points.
Specifically, for an input feature map $x \in \mathbb{R}^{B \times C \times H \times W}$, where $B$ denotes the batch size, $C$ the number of input channels, and $H$ and $W$ the height and width of the feature map, we independently apply $\mathrm{GRBF}(x)$ with $K$ learnable centres $c$ and shape parameters $\epsilon$ to each input channel to compute its response, yielding a $K$-dimensional feature vector. This transformation is applied at all spatial positions of the original $C$ channels, and the $C$ resulting $K$-dimensional vectors at each position are concatenated along the channel dimension to form a new $K \times C$-dimensional channel feature, transforming the input from $x \in \mathbb{R}^{B \times C \times H \times W}$ to $X_{GRBF} \in \mathbb{R}^{B \times (K \times C) \times H \times W}$.
Step 2 Process $X_{GRBF}$ with a SplineConv layer whose smooth feature response is adapted to the GRBF, to promote non-linear feature combination; a parallel standard convolution path retains local correlations, and fusing the two enhances feature diversity. This processing can be described as follows:
Y_{SplineConv} = W_{SplineConv} \cdot X_{GRBF} + b_{SplineConv}
Y_{Conv} = W_{Conv} \cdot \mathrm{SiLU}(X) + b_{Conv}
where $W_{SplineConv}$ and $b_{SplineConv}$ represent the weights and biases of the SplineConv layer, and $W_{Conv}$ and $b_{Conv}$ represent the weights and biases of the standard convolution layer.
Step 3 Finally, the output paths from the SplineConv layer and the standard convolution layer are fused via element-wise addition to generate the output feature map. This process can be described as follows:
Y_{Output} = Y_{SplineConv} + Y_{Conv}
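Putting Steps 1–3 together, a sketch of one G-KAN block is given below; `GRBFActivation` is the module sketched in the previous listing, the 3 × 3 kernel sizes and the GroupNorm used as a layer-normalisation surrogate are assumptions, and "SplineConv" here is simply a convolution over the GRBF-expanded channels rather than a specific library layer.

```python
import torch.nn as nn
import torch.nn.functional as F

class GKANBlock(nn.Module):
    """Sketch of the G-KAN block (Steps 1-3): GRBF channel expansion, a 'SplineConv'
    path over the expanded features, a parallel standard convolution path with SiLU,
    and element-wise fusion of the two paths."""

    def __init__(self, in_ch, out_ch, num_centers=8):
        super().__init__()
        self.grbf = GRBFActivation(in_ch, num_centers)
        # Step 2a: convolution over the GRBF-expanded (K*C) channels
        self.spline_conv = nn.Conv2d(in_ch * num_centers, out_ch, kernel_size=3, padding=1)
        # Step 2b: parallel standard convolution path on the raw input
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Layer-norm-like normalisation to keep inputs within the RBF domain (assumption)
        self.norm = nn.GroupNorm(1, in_ch)

    def forward(self, x):
        x_n = self.norm(x)
        y_spline = self.spline_conv(self.grbf(x_n))     # Y_SplineConv
        y_conv = self.conv(F.silu(x))                   # Y_Conv = W_Conv * SiLU(X) + b_Conv
        return y_spline + y_conv                        # Step 3: element-wise fusion
```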

3.5. Definition of the Decision Threshold

This paper employs the '3σ criterion' to determine the decision threshold, which may be expressed as follows:
\mathrm{Threshold} = \mu + \beta \cdot \sigma
where $\mu$ denotes the mean of the image, $\sigma$ the standard deviation of the image, $\beta$ the coefficient of the standard deviation, and Threshold the threshold value. Assuming that the segmentation result map produced by the model is $\mathrm{Output}(i, j)$, its mean $\mu$ and standard deviation $\sigma$ can be expressed as follows:
\mu = \frac{\sum_{i,j} \mathrm{Output}(i, j)}{N}
\sigma = \sqrt{\frac{\sum_{i,j} \left( \mathrm{Output}(i, j) - \mu \right)^2}{N}}
where $N$ denotes the total number of pixels across all images involved in the calculation. Consequently, the threshold formula can be written as follows:
\mathrm{Threshold} = \frac{\sum_{i,j} \mathrm{Output}(i, j)}{N} + \beta \cdot \sqrt{\frac{\sum_{i,j} \left( \mathrm{Output}(i, j) - \mu \right)^2}{N}}
Through experimentation, the value of β adopted in this paper is 3.
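A minimal sketch of the threshold computation, assuming the segmentation output maps of the calibration images are collected as NumPy arrays:

```python
import numpy as np

def decision_threshold(output_maps, beta=3.0):
    """Threshold = mu + beta * sigma over all pixels of the segmentation outputs
    (the 3-sigma criterion of Section 3.5, with beta = 3)."""
    pixels = np.concatenate([m.ravel() for m in output_maps])
    mu = pixels.mean()
    sigma = pixels.std()
    return mu + beta * sigma
```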

4. Experimental

4.1. Experimental Details

The varistor surface image dataset acquisition device, shown in Figure 5, is assembled from an H2100 industrial camera, zoom lens 1, a lens fixing groove, zoom lens 2, a ring light source, a column, a trim knob, a fixing knob, a non-slip limiting ring and a metal base. To fully assess the model's robustness and eliminate the impact of randomness, we conducted three independent experiments on the same training set, each running from model initialisation through to completion; the reported results are the means over these three trials. The experiments were run on an Intel(R) Core(TM) i7-14700KF CPU (Intel Corporation, USA) and an NVIDIA GeForce RTX 4080 SUPER GPU (GIGABYTE, Taiwan, China). The proposed model is implemented with PyTorch 2.5.1; training runs for 750 epochs using the Adam optimiser with an initial learning rate of 0.0001 (a configuration sketch is given below).
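The optimiser settings above translate into the following sketch. The networks, data loader and combined loss function are placeholders; in particular, the paper specifies only the reconstruction loss of Section 3.3, so how the segmentation term enters `loss_fn` is an assumption.

```python
import torch

def train_var_mndr(recon_net, seg_net, train_loader, loss_fn, epochs=750, lr=1e-4, device="cuda"):
    """Training loop sketch with the Section 4.1 settings (Adam, lr 1e-4, 750 epochs).

    train_loader yields (pseudo_img, target_img, target_mask) batches; loss_fn combines
    the reconstruction loss with a segmentation term (placeholder assumption).
    """
    recon_net.to(device).train()
    seg_net.to(device).train()
    params = list(recon_net.parameters()) + list(seg_net.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for pseudo_img, target_img, target_mask in train_loader:
            pseudo_img = pseudo_img.to(device)
            target_img = target_img.to(device)
            target_mask = target_mask.to(device)
            recon = recon_net(pseudo_img)
            seg = seg_net(torch.cat([pseudo_img, recon], dim=1))
            loss = loss_fn(recon, target_img, seg, target_mask)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```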

4.2. Datasets

A single-colour, single-specification Varistor Surface Anomaly Detection Dataset (VSADD) was constructed with the acquisition device described in Section 4.1. The dataset contains images of the 10 mm diameter TVR10D391K varistor (THINKING ELECTRONIC INDUSTRIAL CO., LTD, Taiwan, China); the original dataset consists of 700 defect-free samples and 100 defective samples. For data division, 300 defect-free samples are randomly selected as the training set, and the remaining 400 defect-free samples and 100 defective samples constitute the initial test set. The original images are 640 × 480 RGB images, uniformly resized to 256 × 256 pixels after preprocessing to fit the model input. To enhance the generalisation ability of the model, the test set is extended by rotational transformation and colour space augmentation, together with manually created and algorithmically synthesised defects; the final test set contains 800 defect-free samples and 600 defective samples. Figure 6 shows varistor images of some defect-free and defective samples from the original dataset.
A data augmentation strategy [24] is systematically applied in the training phase to reduce the overfitting caused by the relatively small training set [25]. The pseudo-anomaly generation strategy uses the Describable Textures Dataset as the anomaly source dataset [26], which contains 47 texture classes and 5640 images.

4.3. Evaluation Metrics

Model performance was evaluated using the Area Under the Receiver Operating Characteristic Curve (AUROC) and Average Precision (AP) metrics [27]. The AUROC represents the area under the ROC curve with a false positive rate (FPR) on the x-axis and a true positive rate (TPR) on the y-axis, serving to assess the model’s ability to distinguish positive and negative samples across different threshold values. The FPR and TPR can be described as follows:
\mathrm{FPR} = \frac{FP}{FP + TN}
\mathrm{TPR} = \frac{TP}{TP + FN}
where TP (True Positive) denotes samples predicted as positive that are actually positive; FP (False Positive) denotes samples predicted as positive that are actually negative; FN (False Negative) denotes samples predicted as negative that are actually positive; and TN (True Negative) denotes samples predicted as negative that are actually negative.
The AP metric is the area under the Precision–Recall curve, where Precision and Recall can be expressed as follows:
\mathrm{Precision} = \frac{TP}{TP + FP}
\mathrm{Recall} = \frac{TP}{TP + FN}
FPS (Frames Per Second) is used to evaluate the real-time performance of the model; it indicates the speed at which the model processes images in real-time scenarios and can be described as follows:
\mathrm{FPS} = \frac{\text{Number of frames processed}}{\text{Time (in seconds)}}
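The image- and pixel-level scores can be computed with scikit-learn, and FPS with a simple timer; a sketch:

```python
import time
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(scores, labels):
    """AUROC and AP from anomaly scores and binary ground-truth labels.

    scores : 1-D array of anomaly scores (higher = more anomalous)
    labels : 1-D array of labels (1 = defective, 0 = defect-free)
    """
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)

def measure_fps(model_fn, images):
    """FPS = frames processed / elapsed seconds for a given inference function."""
    start = time.perf_counter()
    for img in images:
        model_fn(img)
    return len(images) / (time.perf_counter() - start)
```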

4.4. Experimental Results and Comparison Experiments

In order to fully evaluate the superiority of the proposed algorithm, experiments were conducted with PBAS [28], INP-Former (CVPR 2025) [29], RealNet (CVPR 2024) [10], SimpleNet (CVPR 2023) [30] and Var-MNDR.
Figure 7 shows the defect detection results of the five models: the first row is the original image to be detected, the second row the ground-truth labelled image, the third to sixth rows the results of PBAS, INP-Former, RealNet and SimpleNet, respectively, and the seventh row the results of the proposed model, which largely meet the defect detection requirements for real varistors. From the experimental results we find that PBAS can only make coarse-grained judgements about whether a defect exists and struggles to localise defects accurately; INP-Former identifies the main defect locations but still over-detects, misjudging the text area as defects; RealNet shows better defect localisation capability but incorrectly identifies parts of the varistor contour as defects and, for large defects, identifies only the defect contour rather than the defect as a whole; SimpleNet has good defect detection capability, but there is still room for improvement in the accuracy of defect boundary localisation.
To evaluate the performance of the Var-MNDR method more objectively and accurately, Table 1 presents the evaluation metrics of PBAS, INP-Former, RealNet, SimpleNet and Var-MNDR. The results show that the overall performance of Var-MNDR is better than that of the other methods, since some models fail to identify small defects in the test images or misclassify the text area and the varistor contour as defects, leading to poorer detection results. In terms of real-time performance, our real-time detection capability (73.67 FPS) is significantly superior to the other mainstream unsupervised detection models; this stems primarily from the efficient single-pass forward propagation architecture and the image preprocessing algorithm. Specifically, unlike other models, both our reconstruction and segmentation sub-networks are based on an encoder–decoder structure; this architecture requires no recursion or multiple iterations, and reconstruction and segmentation results are obtained through a single forward pass, which is the fundamental reason for the high inference speed. Moreover, the proposed image preprocessing algorithm significantly reduces input data redundancy by normalising image poses and removing the irrelevant background, ensuring that every pixel fed into the network carries meaningful target region information and preventing unnecessary computation on large numbers of irrelevant background pixels, which further enhances overall processing efficiency.

4.5. Generalisability Experiment

In order to fully verify the generalisability of the proposed varistor defect detection method, a multi-colour, multi-specification ZT-Varistor Surface Anomaly Detection Dataset (ZT-VSADD) is established. The dataset includes six varistor models: ZOV10D561K (10 mm), ZOV07D220K (7 mm), ZOV05D561K (5 mm), TVR10D391K (10 mm), TVR07D241K (7 mm) and TVR05D471K (5 mm), with the ZOV series in blue packages and the TVR series in yellow packages, ensuring model diversity, size diversity and appearance variability of the samples. The original dataset consists of 700 defect-free samples and 100 defective samples for each model; the training and test sets are divided and expanded in the same way as for VSADD (see Section 4.2).
Figure 8 shows the defect detection results of the different varistor models for PBAS, INP-Former, RealNet, SimpleNet and Var-MNDR. The first row is the original image of each of the six varistor models, the second row the ground-truth labelled image, the third to sixth rows the detection results of PBAS, INP-Former, RealNet and SimpleNet, and the seventh row the results of the proposed model.
Table 2 shows the evaluation metrics of PBAS, INP-Former, RealNet, SimpleNet and Var-MNDR on the ZT-VSADD dataset. Although the other methods score higher than Var-MNDR on certain varistor models, the generalisation performance of the proposed model, analysed as a whole, outperforms the other models. Specifically, PBAS detects defects mainly by aligning pre-trained features and has limited generalisation when confronted with data exhibiting large distributional differences; it is susceptible to interference from background noise or normal textures, yielding suboptimal detection accuracy. While the Transformer architecture employed in INP-Former effectively models long-range dependencies, its attention mechanism tends to focus on the highest-contrast, most prominent regions of an image (such as text or sharp edges), so certain text areas are erroneously identified as defects. RealNet uses a diffusion process to generate highly realistic synthetic anomalies; when the texture of the test images closely matches its training data or prior knowledge, the generated anomalies are highly effective, but for out-of-distribution samples they may lack representativeness, misleading the feature selection network and sharply reducing detection accuracy. SimpleNet's minimalist architecture struggles to capture subtle anomaly characteristics, particularly when defects resemble normal textures, making it prone to missed detections; its simple feature processing is also more susceptible to background interference, resulting in suboptimal accuracy on certain models.

4.6. Ablation Experiments

4.6.1. The Influence Experiment of Image Pre-Processing Algorithms

To systematically evaluate the impact of the proposed image preprocessing algorithm on model performance, during training we apply only basic preprocessing (cropping and scaling) [31] to the original dataset, without any background removal. During testing, we first apply the same basic preprocessing to the original images with the background preserved, and then fuse the ZT-VSADD images with the preprocessed background images through an AND operation to construct the final test dataset. As shown in Table 3, the image preprocessing algorithm effectively guides the model to focus on the texture features of the main image region and reduces attention to the background, thereby improving the accuracy of defect localisation.

4.6.2. The Influence Experiment of Kolmogorov–Arnold Networks

We combined the KAN with the U-Net architecture to construct the segmentation sub-network. The ZT-VSADD defect detection results in Table 4 indicate that our segmentation sub-network has significant advantages over the traditional U-Net when processing reconstructed images that are highly similar to the original image and in detecting minute defects.
However, not all varistor models show an advantage in pixel-level accuracy: for the ZOV10D561K (10 mm) and ZOV05D561K (5 mm) models, the variant without the KAN achieves better pixel-level segmentation. Our analysis suggests a possible cause: the KAN's strong function-fitting capability may, in certain circumstances, misidentify extremely subtle and rare normal textures as defects and segment them out, introducing minor noise in pixel-level comparisons. Overall, however, Var-MNDR improves the model's ability to represent complex image features by enhancing feature extraction, resulting in more accurate defect recognition.

5. Conclusions

This paper proposes an unsupervised method for detecting surface defects in varistors that reconstructs the normal distribution under mask constraints. The approach extracts the main body region through image preprocessing, employs a mask-constrained pseudo-anomaly generation strategy to focus the model on effective texture features, and uses a KAN-based segmentation sub-network to enhance the identification of minute defects. Experimental results demonstrate that the method achieves superior detection accuracy, generalisation performance and real-time processing capability in varistor defect detection tasks. This research not only provides an effective solution for varistor defect detection but also offers transferable insights for surface defect detection in other industrial components: by extracting the main body mask of the target object, background separation can be achieved in the same way, allowing the pseudo-anomaly generation strategy employed here to be adapted. The core of the current strategy lies in strengthening the model's reconstruction capability through diverse local damage patterns; future work may incorporate synthetic anomaly methods based on GANs or diffusion models to generate samples that more closely resemble real defect distributions. In subsequent work, we will further explore the method's applicability to additional electronic components and systematically validate its generalisation across them. We will also investigate transfer learning across components and defect types, and explore adaptation techniques for small-sample scenarios, aiming to reduce reliance on annotations for novel defect types in practical industrial applications and to advance the large-scale deployment of unsupervised detection methods in complex industrial environments.

Author Contributions

Conceptualization, X.X. and S.T.; methodology, X.X. and H.L.; software, X.X. and T.Z.; validation, X.X. and H.L.; datasets, X.X.; writing—original draft preparation, X.X.; writing—review and editing, X.X., T.Z. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China (2018YFC0808300), and the Shaanxi Science and Technology Plan Key Industry Innovation Chain (Group)—Project in Industrial Field (2020ZDLGY15-07), Xi’an Science and Technology Program (23KGDW0032-2022).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The Describable Textures Dataset can be used as the anomaly source dataset for the pseudo-anomaly generation strategy and can be downloaded from the following address: https://www.robots.ox.ac.uk/~vgg/data/dtd/.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bai, D.; Li, G.; Jiang, D.; Yun, J.; Tao, B.; Jiang, G.; Sun, Y.; Ju, Z. Surface defect detection methods for industrial products with imbalanced samples: A review of progress in the 2020s. Eng. Appl. Artif. Intell. 2024, 130, 1–23. [Google Scholar] [CrossRef]
  2. Shiferaw, G.T.; Yao, L. Autoencoder-Based Unsupervised Surface Defect Detection Using Two-Stage Training. J. Imaging 2024, 10, 111. [Google Scholar] [CrossRef]
  3. Rasheed, A.; Zafar, B.; Rasheed, A.; Ali, N.; Sajid, M.; Dar, S.H.; Habib, U.; Shehryar, T.; Mahmood, M.T. Fabric defect detection using computer vision techniques: A comprehensive review. Math. Probl. Eng. 2020, 2020, 8189403. [Google Scholar] [CrossRef]
  4. Wang, J.; Xie, X.; Liu, G.; Wu, L. A Lightweight PCB Defect Detection Algorithm Based on Improved YOLOv8-PCB. Symmetry 2025, 17, 309. [Google Scholar] [CrossRef]
  5. Nasim, M.; Mumtaz, R.; Ahmad, M.; Ali, A. Fabric Defect Detection in Real World Manufacturing Using Deep Learning. Information 2024, 15, 476. [Google Scholar] [CrossRef]
  6. Peng, C.; Wang, J. Out-of-Distribution (OOD) Detection Based on Deep Learning: A Review. Electronics 2022, 11, 3500. [Google Scholar] [CrossRef]
  7. Zhang, H.; Wang, Z.; Zeng, D.; Wu, Z.; Jiang, Y.G. DiffusionAD: Norm-Guided One-Step Denoising Diffusion for Anomaly Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 7140–7152. [Google Scholar] [CrossRef] [PubMed]
  8. Choi, B.; Jeong, J. ViV-Ano: Anomaly Detection and Localization Combining Vision Transformer and Variational Autoencoder in the Manufacturing Process. Electronics 2022, 11, 2306–2309. [Google Scholar] [CrossRef]
  9. Sun, Z.J.; Wang, J.; Li, Y.K. RAMFAE: A Novel Unsupervised Visual Anomaly Detection Method Based on Autoencoder. Int. J. Mach. Learn. Cybern. 2024, 15, 355–369. [Google Scholar] [CrossRef]
  10. Zhang, X.; Xu, M.; Zhou, X. RealNet: A Feature Selection Network with Realistic Synthetic Anomaly for Anomaly Detection. In Proceedings of the 24th IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–18 June 2024. [Google Scholar]
  11. Li, C.L.; Sohn, K.; Yoon, J.; Pfister, T. CutPaste: Self-supervised Learning for Anomaly Detection and Localization. In Proceedings of the 22nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9664–9674. [Google Scholar]
  12. Zhang, X.; Shi, S.; Sun, H.; Chen, D.; Wang, G.; Wu, K. ACVAE: A Novel Self-adversarial Variational Auto-encoder Combined with Contrast Learning for Time Series Anomaly Detection. Sci. Rep. 2024, 171, 383–395. [Google Scholar] [CrossRef]
  13. Zhang, H.; Liu, S.; Wang, C.; Lu, S.; Xiong, W. Color-patterned Fabric Defect Detection Algorithm Based on Triplet Attention Multi-scale U-shape Denoising Convolutional Auto-encoder. J. Supercomput. 2024, 80, 4451–4476. [Google Scholar] [CrossRef]
  14. Liu, Z.; Wang, Y.; Vaidya, S.; Ruehle, F.; Halverson, J.; Soljačić, M.; Hou, T.Y.; Tegmark, M. KAN: Kolmogorov-Arnold Networks. arXiv 2024, arXiv:2404.19756. [Google Scholar]
  15. Alexander, D.B.; Antonio, S.T.; Spolski, J.N.; Pourteau, S. Convolutional Kolmogorov-Arnold Networks. arXiv 2024, arXiv:2406.13155. [Google Scholar]
  16. Ferdaus, M.M.; Abdelguerfi, M.; Ioup, E.; Dobson, D.; Niles, K.N.; Pathak, K.; Sloan, S. KANICE: Kolmogorov-Arnold Networks with Interactive Convolutional Elements. arXiv 2024, arXiv:2410.17172. [Google Scholar] [CrossRef]
  17. Zhou, Q.; Pei, C.; Sun, F.; Han, J.; Gao, Z.; Pei, D.; Zhang, H.; Xie, G.; Li, J. KAN-AD: Time Series Anomaly Detection with Kolmogorov-Arnold Networks. arXiv 2025, arXiv:2411.00278. [Google Scholar]
  18. Fang, C.; Wu, K. KANDU-Net: A Dual-Channel U-Net with KAN for Medical Image Segmentation. arXiv 2024, arXiv:2409.20414. [Google Scholar]
  19. Kumar, M.; Jindal, R.S. Fusion of RGB and HSV Colour Space for Foggy Image Quality Enhancement. Multimed. Tools Appl. 2019, 78, 9791–9799. [Google Scholar] [CrossRef]
  20. Li, C.; Ni, H.; Ukida, H.; Zhang, J.; Wang, B.; Lv, S. Surface Defect Detection of Steel Balls Based on Surface Full Expansion and Image Difference. Electronics 2024, 13, 4484. [Google Scholar] [CrossRef]
  21. Neetika, G.; Kumar, M.R. An Elliptical Sampling Based Fast and Robust Feature Descriptor for Image Matching. Multimed. Tools Appl. 2024, 83, 63149–63168. [Google Scholar]
  22. Anil, N.; Chandrappa, D. Satellite Image Matching and Registration Using Affine Transformation and Hybrid Feature Descriptors. Int. J. Adv. Intell. Paradig. 2023, 24, 126–144. [Google Scholar] [CrossRef]
  23. Maranga, O.J.; Nnko, J.J.; Xiong, S. Learned Active Contours via Transformer-based Deep Convolutional Neural Network Using Canny Edge Detection Algorithm. Signal Image Video Process. 2025, 19, 222. [Google Scholar] [CrossRef]
  24. Salazar, A.; Vergara, L.; Vidal, E. A proxy learning curve for the Bayes classifier. Pattern Recognit. 2023, 136, 109240. [Google Scholar] [CrossRef]
  25. D’Agostino, D.; Ilievski, I.; Shoemaker, A.C. Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks. Neural Netw. 2024, 176, 106335. [Google Scholar] [CrossRef] [PubMed]
  26. Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; Vedaldi, A. Describing Textures in the Wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3606–3613. [Google Scholar]
  27. Martini, M.; Rosati, R.; Romeo, L.; Mancini, A. Data Augmentation Strategy for Generating Realistic Samples on Defect Segmentation Task. Procedia Comput. Sci. 2024, 232, 1597–1606. [Google Scholar] [CrossRef]
  28. Chen, Q.; Luo, H.; Gao, H.; Lv, C.; Zhang, Z. Progressive Boundary Guided Anomaly Synthesis for Industrial Anomaly Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 1193–1208. [Google Scholar]
  29. Luo, W.; Cao, Y.; Yao, H.; Zhang, X.; Lou, J.; Cheng, Y.; Shen, W.; Yu, W. Exploring Intrinsic Normal Prototypes within a Single Image for Universal Anomaly Detection. In Proceedings of the 25th IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 10–17 June 2025. [Google Scholar]
  30. Liu, Z.; Zhou, Y.; Xu, Y.; Wang, Z. SimpleNet: A Simple Network for Image Anomaly Detection and Localization. In Proceedings of the 23rd IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  31. Sheth, R.; Parekha, C. An Intelligent Approach to Classify and Detect Image Forgery Attack (Scaling and Cropping) Using Transfer Learning. Int. J. Inf. Comput. Secur. 2024, 24, 322–337. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of Var-MNDR methodology. (a) Preprocessing algorithm framework; (b) model architecture in the training phase; (c) model architecture in the testing phase.
Figure 2. Schematic of the processing of the varistor image preprocessing algorithm. (a) Schematic diagram of the preprocessing algorithm; (b) visualisation of the preprocessing algorithm.
Figure 3. Pseudo-anomaly generation strategy for mask-constrained subjects.
Figure 4. Segmentation sub-network overall architecture. (a) Sub-network architecture; (b) detailed description of the G-KAN block in the sub-network.
Figure 5. Varistor surface image dataset capture device.
Figure 6. Partial defect-free image and defective image of TVR10D391K type varistor.
Figure 7. Visual comparison of defect segmentation results across different methods on the VSADD dataset. From top to bottom: Original image, Ground truth image, PBA results, INP-Former results, RealNet results, SimpleNet results, and our method’s results.
Figure 8. Visual comparison of defect segmentation results across different methods on the ZT-VSADD dataset. From top to bottom: Original image, Ground truth image, PBA results, INP-Former results, RealNet results, SimpleNet results, and our method’s results.
Table 1. Comparison of quantitative results between the proposed method on the VSADD dataset and various unsupervised detection methods.
Method | I-AUROC% | P-AUROC% | AP% | FPS
PBAS | 94.28 | 87.42 | 94.88 | 3.88
INP-Former (CVPR2025) | 95.68 | 96.31 | 94.62 | 37.31
RealNet (CVPR2024) | 95.86 | 96.51 | 95.38 | 23.16
SimpleNet (CVPR2023) | 94.25 | 91.28 | 95.83 | 2.83
Ours | 96.12 | 99.33 | 96.22 | 76.67
Table 2. Comparison of quantitative results between the proposed method on the ZT-VSADD dataset and various unsupervised detection methods. (I-AUROC%/P-AUROC%/AP%).
Method | ZOV05D561K | ZOV07D220K | ZOV10D561K | TVR05D471K | TVR07D241K | TVR10D391K
PBAS | 92.76/94.48/94.17 | 93.29/88.13/94.06 | 94.31/91.28/95.60 | 96.01/93.65/98.47 | 94.59/94.21/97.22 | 94.28/87.42/94.88
INP-Former | 97.21/98.13/93.65 | 96.17/98.35/95.83 | 97.08/98.00/97.16 | 96.17/94.99/96.89 | 96.58/94.76/98.40 | 95.68/96.31/94.62
RealNet | 85.77/98.18/96.82 | 93.21/98.46/93.99 | 96.03/98.28/93.66 | 99.99/86.41/99.97 | 97.48/85.94/96.42 | 95.86/96.51/95.38
SimpleNet | 95.27/71.86/91.66 | 95.36/92.18/95.83 | 95.06/90.03/97.16 | 95.39/93.02/96.66 | 93.66/87.70/94.83 | 94.25/91.28/95.83
Ours | 98.05/98.72/97.78 | 97.58/98.51/97.90 | 98.14/97.73/97.67 | 99.85/99.82/99.85 | 98.48/99.89/98.48 | 96.12/99.33/96.22
Table 3. Experimental results of image preprocessing algorithm.
Types | Basic Preprocessing (I-AUROC%/P-AUROC%/AP%) | Ours (I-AUROC%/P-AUROC%/AP%)
ZOV05D561K | 98.20/96.68/97.34 | 98.05/98.72/97.78
ZOV07D220K | 97.32/94.36/97.12 | 97.58/98.51/97.90
ZOV10D561K | 97.93/97.22/97.21 | 98.14/97.73/97.67
TVR05D471K | 98.58/83.91/97.69 | 99.85/99.82/99.85
TVR07D241K | 98.45/87.50/97.39 | 98.48/99.89/98.48
TVR10D391K | 95.82/93.73/94.30 | 96.12/99.33/96.22
Table 4. Experimental results for Kolmogorov–Arnold Networks.
Types | No KANs (I-AUROC%/P-AUROC%/AP%) | Ours (I-AUROC%/P-AUROC%/AP%)
ZOV05D561K | 97.52/99.87/97.32 | 98.05/98.72/97.78
ZOV07D220K | 93.69/97.42/96.99 | 97.58/98.51/97.90
ZOV10D561K | 96.22/99.67/98.20 | 98.14/97.73/97.67
TVR05D471K | 96.20/75.36/93.75 | 99.85/99.82/99.85
TVR07D241K | 96.54/78.74/93.76 | 98.48/99.89/98.48
TVR10D391K | 96.00/98.80/95.48 | 96.12/99.33/96.22
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Tang, S.; Xu, X.; Li, H.; Zhou, T. Unsupervised Detection of Surface Defects in Varistors with Reconstructed Normal Distribution Under Mask Constraints. Appl. Sci. 2025, 15, 10479. https://doi.org/10.3390/app151910479

