Article

Transmission Line Equipment Infrared Diagnosis Using an Improved Pulse-Coupled Neural Network

1
Artificial Intelligence Application Department, China Electric Power Research Institute, Beijing 100192, China
2
State Grid Gansu Electric Power Company, Lanzhou 730070, China
*
Author to whom correspondence should be addressed.
Sustainability 2023, 15(1), 639; https://doi.org/10.3390/su15010639
Submission received: 17 November 2022 / Revised: 18 December 2022 / Accepted: 21 December 2022 / Published: 30 December 2022

Abstract

In order to detect the status of power equipment from infrared images of transmission lines, considering the spatial positioning relationships of transmission line equipment such as corridors, substation equipment, and facilities, this paper presents an improved PCNN model that incorporates an optimized parameter setting method. In this PCNN model, the original iteration mechanism is abandoned; instead, the threshold is determined by the maximum similarity thresholding rule. To ensure similarity when assigning neighboring neurons to cluster centers, a local clustering strategy is used to set the linking coefficient, thereby improving the efficiency of detecting power equipment in infrared transmission line images. Finally, experimental results on infrared transmission line images show that the proposed method can provide a basis for the diagnosis of power equipment, help prevent the casualties and property damage caused by thermal failures of power equipment, and effectively improve the safety risk identification and operation control capability of power grid engineering.

1. Introduction

Transmission line inspection is important for ensuring the normal operation of high-voltage transmission lines and is of great significance for the sustainable supply of power and the safe, rapid economic development of the power sector in China. In the inspection of power transmission lines, infrared imaging technology, as an important tool for the long-distance detection of equipment faults, has the advantages of non-contact detection, high safety, high accuracy, and simple operation. However, the detected fault information has often been recorded manually by workers on site, which increases inspection time and leads to missed detections. Accordingly, image processing technology, such as image segmentation, has become a significant tool for automatically detecting equipment faults in power transmission lines.
A pulse-coupled neural network (PCNN) is an iterative model inspired by the observation of synchronous pulse bursts in the visual cortex of small mammals such as cats [1]. The underlying idea of this model was to bridge temporal gaps and encourage the neurons with spatial proximity and brightness similarity to pulse together and was soon recognized as having attractive application prospects in the field of image processing, including image segmentation [2]. For years, PCNN has been widely applied to segmenting a wide range of real-world images, such as infrared images [3,4,5,6,7,8], and other natural images [9,10,11].
In general, the performance of the PCNN model in image segmentation depends heavily on the values of the network parameters. These parameters include the amplitudes and decay constants of each leaky integrator, two weight matrices, the linking coefficient, etc., which make the model flexible for adjusting neuron behavior in image segmentation. However, the mechanism for setting these parameters remains uncertain across a wide range of images, so in some cases parameter values are determined by trial and error [12,13,14].
During the last few years, much work has been devoted to parameter setting, including model simplifications. The simplified or modified model developed by Kuntimad and Ranganath [15] is regarded as a classic model, which can guarantee perfect segmentation of an image once the intensity ranges of adjacent regions are obtained. Following this model, Raya et al. [16] treated the linking coefficient and the primary firing threshold as global and local values, respectively, and obtained them directly from image statistics. To build a relationship between the dynamic behavior of neurons and image characteristics, Chen [17] established criteria for setting the parameters of the simplified model. In addition, the methods in Refs. [3,4] attempted a simple way to segment infrared images, but the derived conditions are too strict with respect to image characteristics [18]. Recently, efficient approaches proposed in the literature [19,20] utilize local information to preserve the synchronous pulse characteristic inherent in PCNN; however, the final result is closely tied to the clustering method.
Regarding the output of neurons, the PCNN model fundamentally obtains its results through the setting of the neural threshold. The conventional neural threshold in PCNN increases once a neuron fires and then decays until the corresponding neuron fires again, creating dynamic changes for each neuron. Wei [12] related the threshold decay time constant to the overall features of the target image according to the subjective perception of the human eye, which is logarithmically related to the actual light intensity. In particular, a strategy of linearly decreasing the threshold was adopted for ease of computer processing [8,21,22,23,24,25]. However, a periodically decaying neural threshold is not well suited to image segmentation because of its periodic repetition. Kong [6,7] therefore abandoned this mechanism entirely and instead used the average gray value of a histogram peak as the corresponding threshold. In Ref. [26], Karvonen used a fixed class-wise threshold for each class, obtained from the histogram by the Expectation-Maximization method. Later, Li [27] derived a method using the water region area in the histogram. In fact, the threshold of a target may be missed by histogram analysis. In the literature [3,4,20], image characteristics were used instead of histogram analysis, allowing the threshold to be updated adaptively. The behavior of these methods usually resembles an alternative to global region growing but differs from the seeded region method with PCNN [28,29,30]. However, intensity inhomogeneity often occurs in images, which makes PCNN with the above parameter setting methods unsuitable for image segmentation. Moreover, the model has many parameters, so it is not easy to devise a reasonable setting method that promotes segmentation performance.
In this paper, a novel strategy for image segmentation based on a modified version of PCNN is presented. The fundamental idea of this method is to incorporate a similarity thresholding method. A new relationship is then built between the parameters and the available information, such as the pulse output and image characteristics, to enable the model to extract the target. The main contributions of this paper concern two important methodological issues. The first is a maximum similarity thresholding rule used to determine the threshold of the modified PCNN model. The second concerns the regulation of the linking coefficient, which activates similar neurons as much as possible to improve segmentation performance.
The organization of the paper is as follows: in Section 2, the original PCNN and its modification are described for image segmentation. The maximum similarity thresholding rule is built and the linking coefficient is set by local clustering in Section 3. In Section 4, experimental results of the proposed PCNN on real-world transmission line infrared images are given, and the results are then compared with some widely used methods to demonstrate the performance of image segmentation; finally, conclusions are drawn in Section 5.

2. Pulse-Coupled Neural Network and Its Modification

2.1. The Original PCNN Model

PCNN is a two-dimensional neural network. Each neuron in the processing layer is directly mapped to a corresponding pixel in the image and generates a pulse output from a set of neighboring neurons. Generally, a model of a pulse-coupled neuron, illustrated in Figure 1, mainly consists of three parts: (1) the input part, which receives the external stimulus as well as the pulse outputs of neighboring neurons; (2) the coupling part, which increases the internal activity of the neuron by coupling the linking part and the feeding part; (3) the pulse generator, which produces a pulse by comparing the internal activity with the threshold.
In contrast to conventional artificial neural networks, PCNN can be regarded as a third-generation neural network that requires no training. Its fundamental idea is to obtain the result by iterating the following equations.
F_x(n) = e^{-\alpha_F} F_x(n-1) + I_x + V_F \sum_{y \in N_x} M_{x,y} Y_y(n-1)    (1)
L_x(n) = e^{-\alpha_L} L_x(n-1) + V_L \sum_{y \in N_x} W_{x,y} Y_y(n-1)    (2)
U_x(n) = F_x(n)\,[1 + \beta L_x(n)]    (3)
E_x(n) = e^{-\alpha_E} E_x(n-1) + V_E Y_x(n-1)    (4)
Y_x(n) = \begin{cases} 1 & U_x(n) > E_x(n) \\ 0 & \text{otherwise} \end{cases}    (5)
Here, F and L are the feeding and linking inputs, respectively; the neuron receives input from the external stimulus I (e.g., intensity) and from the pulse outputs Y of the neighboring neurons N_x. Basically, the neuron accumulates internal activity U until it exceeds the inner threshold E. Once the neuron fires, i.e., generates a pulse (Y_x = 1), the threshold changes, as do the linking and feeding inputs of the neighboring neurons at the next iteration. Over its iterations, PCNN produces a sequence of binary images that can be utilized for image processing tasks such as segmentation and feature extraction.
It is worth noting that the original PCNN model involves many parameters that can be altered to adjust the behavior of neurons. Some of them have been shown to have a significant effect on image segmentation [2,17], especially the threshold constant α_E and the linking coefficient β. In addition, the weights W/M play an important role in connecting neighboring neurons and transmitting pulses between them. Among the remaining parameters, the three amplitude constants V_F, V_L, and V_E scale the effect of the pulse output Y, while the decay constants α_F and α_L generate the dynamics of neuronal activity.
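To make the iterative dynamics of Equations (1)–(5) more concrete, the following is a minimal NumPy sketch of one possible implementation; the 3×3 kernel and all parameter values are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_iterations(I, n_iter=20, beta=0.2,
                    alpha_F=0.1, alpha_L=1.0, alpha_E=1.0,
                    V_F=0.5, V_L=0.2, V_E=20.0):
    """Sketch of Eqs. (1)-(5); all parameter values are illustrative."""
    # 3x3 feeding/linking weights (illustrative choice)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    M = W.copy()
    I = I.astype(float)
    F = np.zeros_like(I)
    L = np.zeros_like(I)
    E = np.ones_like(I)          # initial threshold
    Y = np.zeros_like(I)
    pulses = []
    for _ in range(n_iter):
        # Eq. (1): leaky feeding input driven by the stimulus and neighbor pulses
        F = np.exp(-alpha_F) * F + I + V_F * convolve(Y, M, mode='constant')
        # Eq. (2): leaky linking input driven by neighbor pulses
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, W, mode='constant')
        # Eq. (3): modulatory coupling of the feeding and linking parts
        U = F * (1.0 + beta * L)
        # Eq. (4): dynamic threshold decays, then jumps where the neuron fired
        E = np.exp(-alpha_E) * E + V_E * Y
        # Eq. (5): pulse output
        Y = (U > E).astype(float)
        pulses.append(Y.copy())
    return pulses

# usage on a toy image with intensities in [0, 1]
binary_sequence = pcnn_iterations(np.random.rand(64, 64))
```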

2.2. Modification of PCNN Model

In this part, a modified version of PCNN is described, and the meaning of the modified parts for the subsequent parameter settings is given.
In the modified model, the major changes lie in the feeding input, the linking input, and the neural threshold, as seen in Figure 2. It should be stressed that some significant characteristics inherent in the original PCNN, such as the synchronous pulse, are maintained for image segmentation.
Let each neuron x have an identical model, where x ∈ Ω is mapped to the image domain. For a given neuron x, the equations of a single iteration are listed as follows:
F_x(n) = \sum_{y \in N_x} M_{x,y} I_y    (6)
L_x(n) = \sum_{y \in N_x} W_{x,y} Y_y(n-1)    (7)
U_x(n) = F_x(n)\,[1 + \beta L_x(n)]    (8)
E_x(n) = V_E \, H_E(F_x - V_E)    (9)
Y_x(n) = \begin{cases} 1 & U_x(n) > E_x(n) \\ 0 & \text{otherwise} \end{cases}    (10)
where all the notations have the same meaning as in Equations (1)–(5), except those in Equation (9). Here, H_E denotes the Heaviside function, defined as
H_E(z) = \begin{cases} 0 & z \le 0 \\ 1 & z > 0 \end{cases}    (11)
These changes abandon some parts of the biological mechanism inherent in PCNN, but they allow far greater control of the model behavior, making the modification worthwhile for improving the potential segmentation performance.
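As a rough illustration of Equations (6)–(8) and (10), the sketch below performs one iteration of the modified neuron; the threshold map E is assumed to be supplied externally (e.g., by the maximum similarity thresholding rule of Section 3.1 together with Equation (9)), and the 3×3 kernels are illustrative assumptions rather than the exact weights used in this paper.

```python
import numpy as np
from scipy.ndimage import convolve

def modified_pcnn_step(I, Y_prev, beta, E):
    """One iteration of the modified model, Eqs. (6)-(8) and (10) (sketch)."""
    M = np.array([[0.5, 1.0, 0.5],
                  [1.0, 1.0, 1.0],
                  [0.5, 1.0, 0.5]])    # feeding weights (illustrative; includes the pixel itself)
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])    # linking weights (illustrative; neighbors only)
    F = convolve(I.astype(float), M, mode='constant')       # Eq. (6): feeding input from the stimulus only
    L = convolve(Y_prev.astype(float), W, mode='constant')  # Eq. (7): linking input from previous pulses
    U = F * (1.0 + beta * L)                                 # Eq. (8): modulatory internal activity
    Y = (U > E).astype(float)                                # Eq. (10): pulse output
    return Y, F, L, U
```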

3. Optimized Parameter Setting Method for PCNN

3.1. The Maximum Thresholding Rule

The maximum similarity thresholding (MST) rule is a thresholding method that measures, in a transformed space, the similarity between the original image and the segmentation result obtained at different thresholds; it is expressed as follows
t^{*} = \arg\max_{t} \{ S(T(\chi), H(\gamma(t))) \}    (12)
where T and H are transformation functions; χ denotes the original image; γ(t) is the segmentation result under the threshold t; and S(χ, γ(t)) is the similarity measurement function, defined as follows:
S(\chi, \gamma(t)) = \frac{\sum_{i=1}^{N} (x_i - \mu_\chi)(\omega_i - \mu_\gamma)}{\sqrt{\sum_{i=1}^{N} (x_i - \mu_\chi)^2}\,\sqrt{\sum_{i=1}^{N} (\omega_i - \mu_\gamma)^2}}    (13)
where x_i and ω_i correspond to the i-th pixel values of image χ and image γ(t), respectively; μ_χ and μ_γ represent the means of image χ and image γ(t); and N represents the number of pixels. Notably, the larger the value of S, the more similar the images χ and γ(t) are.
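For reference, Equation (13) is a normalized correlation between the two images; a minimal sketch, assuming both inputs are arrays of the same size, is given below.

```python
import numpy as np

def similarity(chi, gamma):
    """Eq. (13): correlation-style similarity between image chi and segmentation gamma."""
    x = np.asarray(chi, dtype=float).ravel()
    w = np.asarray(gamma, dtype=float).ravel()
    xc, wc = x - x.mean(), w - w.mean()
    denom = np.sqrt((xc ** 2).sum()) * np.sqrt((wc ** 2).sum())
    return 0.0 if denom == 0 else float((xc * wc).sum() / denom)
```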
Additionally, according to Equation (12), the segmentation result γ is usually a binary image generated by the PCNN model. When the similarity is calculated directly with the image itself, without the H transformation, the threshold value can be written as
t^{*} = \arg\max_{t} \{ S(T(\chi), \gamma(t)) \}    (14)
To find the optimal threshold, the evaluation can be made using the images χ and γ. By introducing boundary information into the transformation functions H and T in Equation (12), the final segmentation threshold is associated with the region boundary, thus laying a foundation for the complete extraction of the region. Without loss of generality, the transformation function T can be set as follows.
T(\chi) = \nabla\left[ G(x, y; \sigma) * \chi \right]    (15)
where x and y denote the position in the image; * denotes convolution; ∇ is the gradient operator; σ denotes the scale value; and G is the Gaussian function
G(x, y; \sigma) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2}+y^{2})/2\sigma^{2}}    (16)
In order to find the best value of t in Equation (14), the transformation function H usually extracts the boundary between the binary target and the background produced after segmentation. Here, a morphological operator is adopted for rapid processing, i.e.,
H(A) = A - (A \ominus B)    (17)
where A is a thresholding result; B is the four-neighborhood structuring element; and ⊖ denotes the erosion operator.
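Putting Equations (14)–(17) together, a hedged sketch of the MST search is given below: T(χ) is taken as the gradient magnitude of a Gaussian-smoothed image, H(A) as the boundary obtained by subtracting a four-neighbor erosion, and the optimal threshold is found by exhaustive search over the gray levels. The scale σ, the use of the gradient magnitude, and the assumption of an 8-bit gray-level image are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import (gaussian_gradient_magnitude, binary_erosion,
                           generate_binary_structure)

def transform_T(chi, sigma=1.0):
    """Eqs. (15)-(16): gradient of a Gaussian-smoothed image (magnitude, as an assumption)."""
    return gaussian_gradient_magnitude(chi.astype(float), sigma=sigma)

def transform_H(A):
    """Eq. (17): boundary of a binary result A via four-neighbor erosion."""
    B = generate_binary_structure(2, 1)          # four-neighborhood structuring element
    A = A.astype(bool)
    return A & ~binary_erosion(A, structure=B)

def similarity(x, w):
    """Eq. (13): correlation-style similarity."""
    x = x.astype(float).ravel()
    w = w.astype(float).ravel()
    xc, wc = x - x.mean(), w - w.mean()
    d = np.sqrt((xc ** 2).sum()) * np.sqrt((wc ** 2).sum())
    return 0.0 if d == 0 else float((xc * wc).sum() / d)

def mst_threshold(chi, sigma=1.0):
    """Eq. (12): exhaustive search for the maximum-similarity threshold (assumes 8-bit gray levels)."""
    T_chi = transform_T(chi, sigma)
    best_t, best_s = None, -np.inf
    for t in range(int(chi.min()) + 1, int(chi.max()) + 1):
        gamma = chi > t                          # thresholded result gamma(t)
        s = similarity(T_chi, transform_H(gamma))
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```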

3.2. Linking Coefficient β

The linking coefficient β inherent in PCNN is used to force neighboring neurons with similar brightness to pulse together. Inspired by clustering methods, the linking coefficient in this study is chosen by solving the following problem
\min_{\beta} \; f(\beta) = \sum_{x \in X} \mu(S_x)\,(F_x - E_x)^{2}, \quad \text{s.t.} \; 0 < \beta < \xi    (18)
where μ(·) denotes the similarity measure; the value of β does not appear explicitly in the cost function but enters through S_x, defined as follows
S_x = \begin{cases} F_x (1 + \beta L_x(n)) & U_x(n) > E_x(n) \\ F_x & \text{otherwise} \end{cases}, \quad x \in X    (19)
where X is a set of eight-neighboring neurons around the boundary of the pulse output region.
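As an illustration of Equations (18) and (19), the sketch below evaluates f(β) for a candidate β over the boundary-neighborhood set X, using the reconstruction of Equation (18) given above; the construction of X by dilating the pulse region and the externally supplied membership function mu (e.g., Equation (20) below) are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def cost_f(beta, F, L, E, Y, mu):
    """Eqs. (18)-(19): weighted cost of a candidate linking coefficient beta (sketch).

    F, L, E, and Y are arrays of the image shape; mu maps intensities to memberships.
    """
    # X: eight-neighbors around the boundary of the current pulse region (assumed construction)
    Y = Y.astype(bool)
    X = binary_dilation(Y, structure=np.ones((3, 3), bool)) & ~Y
    U = F * (1.0 + beta * L)                       # internal activity for this beta
    S = np.where(U > E, F * (1.0 + beta * L), F)   # Eq. (19)
    return float(np.sum(mu(S[X]) * (F[X] - E[X]) ** 2))   # Eq. (18), as reconstructed above
```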
In this work, to partition pixels into background and object, a membership function based on the distance to the obtained center of the object (background) is built as follows
\mu_o(g) = \begin{cases} 0 & g \le a \\ 1 / \left( 1 + (g - b)^{2} / (g - a)^{2} \right) & a < g \le b \\ 1 & g > b \end{cases}    (20)
where g is a variable in the intensity domain, a is set to m2(n), and b is set to S_x at the n-th iteration. The intensity range [a, b] is treated as a fuzzy interval. Through Equation (20), the set of pixels in X can be transformed from the intensity domain into the fuzzy domain, assigning a large membership value to pixels whose intensity is close to the center.
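Equation (20) can be written compactly as below; a and b stand for the class center m2(n) and the value S_x at the current iteration and are passed in by the caller (illustrative sketch).

```python
import numpy as np

def membership_object(g, a, b):
    """Eq. (20): fuzzy membership of intensity g to the object over the interval [a, b]."""
    g = np.asarray(g, dtype=float)
    mu = np.zeros_like(g)
    inside = (g > a) & (g <= b)
    mu[inside] = 1.0 / (1.0 + (g[inside] - b) ** 2 / (g[inside] - a) ** 2)
    mu[g > b] = 1.0
    return mu
```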
To obtain the value of β, the golden-section method is used to find the optimal solution, with a sufficiently large upper bound ξ = 2 set at each PCNN iteration.
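The golden-section search mentioned above is standard one-dimensional minimization; a generic sketch over the interval (0, ξ) follows, where cost stands for the f(β) of Equation (18) supplied by the caller.

```python
import math

def golden_section_min(cost, lo=1e-6, hi=2.0, tol=1e-3):
    """Minimize a unimodal 1-D function cost(beta) on [lo, hi] by golden-section search (sketch)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0     # golden-ratio factor, ~0.618
    a, b = lo, hi
    c = b - phi * (b - a)
    d = a + phi * (b - a)
    while abs(b - a) > tol:
        if cost(c) < cost(d):
            b = d                          # minimum lies in [a, d]
        else:
            a = c                          # minimum lies in [c, b]
        c = b - phi * (b - a)
        d = a + phi * (b - a)
    return 0.5 * (a + b)

# usage: beta = golden_section_min(lambda beta: f_of_beta(beta))   # f_of_beta from Eq. (18)
```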
In summary, the parameter setting of the modified PCNN model is associated with the characteristic of the synchronous pulse. The neighboring neurons surrounding the fired neurons are linked together by setting the linking coefficient β and generate pulses according to the maximum similarity thresholding rule. A complete description is given in Algorithm 1.
Algorithm 1: our PCNN model for image segmentation
1 Input:
  -- test images;
  -- the PCNN parameters M/W from the literature [2];
  -- the initial PCNN neural threshold E, whose value is the highest intensity of the image, and ξ = 2.
2 PCNN iteration:
  repeat
    Compute the parameter β by Equation (18);
    Calculate F, L, and U through iteration, as seen in Equations (6)–(8);
    Update the neural threshold E by Equations (9) and (12);
    Update Y by Equation (10).
  until the pulse region Y no longer changes
3 Output:
  -- the marked region R as the final segmentation result.
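Read together with Algorithm 1, a minimal driver loop might look like the following skeleton; the stopping test simply compares consecutive pulse regions, and the threshold and β routines stand for the sketches discussed in Section 3 (all names here are illustrative, not the authors' code).

```python
import numpy as np

def segment_with_modified_pcnn(I, update_beta, update_threshold, pcnn_step, max_iter=50):
    """Sketch of Algorithm 1: iterate the modified PCNN until the pulse region stabilizes."""
    Y = (I >= I.max()).astype(float)      # neurons with the highest intensity fire first
    for _ in range(max_iter):
        beta = update_beta(I, Y)          # Eq. (18): linking coefficient by local clustering
        E = update_threshold(I, Y)        # Eqs. (9)/(12): maximum similarity thresholding
        Y_new = pcnn_step(I, Y, beta, E)  # Eqs. (6)-(8), (10): one coupled iteration returning the new pulse map
        if np.array_equal(Y_new, Y):      # stop when the pulse region no longer changes
            break
        Y = Y_new
    return Y
```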

4. Experimental Results

In this section, experiments on real-world images (see Figure 3) are performed to assess the effectiveness of the proposed method. The results of the proposed method are compared with the following segmentation methods: Otsu's method (OTSU) [31], maximum similarity thresholding (MST) [32], fuzzy c-means (FCM) [33], and the Meanshift clustering method [34]. In the experiments, unless otherwise specified and for a fair comparison, d = 1, δβ = 0.01, δμmax = 0.15, and βmax = 1 were set for Wei's method. In our model, the pulse output Y(0) was initialized so that the neurons with the highest feeding input fire at the first level. All the algorithms were implemented in Matlab 7.10 (2010a, MathWorks, Natick, MA, USA) and run on a computer with an Intel(R) Core(TM) i7 CPU at 2.11 GHz, 16 GB of RAM, and a 64-bit Windows operating system.
Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 illustrate the segmentation results of OTSU, MST, FCM, the Meanshift clustering algorithm, and the proposed method, respectively. It can be seen that the threshold obtained by the OTSU method can hardly separate the background from the target. The MST algorithm adds gradient information from the background class and the target class to determine the optimal threshold, as shown in Figure 9. Generally, the edges of the target are clear enough to separate the target from the background. However, several local maxima of the similarity occur during the threshold traversal, and the optimal threshold does not lie at the highest peak of the similarity, as shown in Figure 10. The thresholds are listed in Table 1. It can be seen that the thresholds obtained by the OTSU and MST methods are relatively low, so the segmentation results cannot extract the target effectively.
Compared with the thresholding methods, the FCM method classifies pixels based on the principle of maximum membership degree. However, the FCM method may classify pixel intensities without considering the target distribution, since the target often appears with high intensity in the image. The final class means are given in Table 2. From the segmentation results shown in Figure 8, the mean values of the background and the target (i.e., v1 and v2) obtained by the proposed method are close to the actual values, although some pixels are misclassified. By comparison, the mean values obtained by the FCM method are not as good as those of the proposed method, and some of them are biased away from the actual centers.
The Meanshift algorithm clusters at maxima of the probability density and is therefore well suited to pixel clustering. As can be seen in Figure 7, the whole image is eventually divided into multiple regions. However, this clustering method is also closely related to the grayscale distribution of the regional pixels. For example, in the segmentation results of the second image, part of the high-brightness area of the transmission line is classified into other regions. Moreover, for the fourth image, multiple cluster areas appear around the failure area because of the inhomogeneity of the target.
In the proposed method, the segmentation performance is associated with the maximum similarity thresholding during the PCNN iteration. The results are shown in Figure 8. From the similarity evaluation of the segmentation results in Table 3, it can be seen that the evaluation value of the proposed method is lower than the global similarity, but the obtained results are better than those of the MST algorithm. Additionally, combined with the membership assignment method for adjusting the linking coefficient, the region extraction is significantly improved.
Table 4 compares the time complexity of the above methods. The OTSU method is the least time-consuming, and the MST method, as threshold segmentation, also has low time consumption. By contrast, FCM and Meanshift, as clustering algorithms, require iterative clustering of image pixels, so their time complexity is higher than that of the thresholding methods. The proposed method is partly limited by the similarity threshold calculation, neighborhood membership assignment, and iterative clustering, so its time complexity increases compared with MST, but it is lower than that of clustering methods such as FCM and the Meanshift algorithm.
In the near future, the algorithm will be applied to a safety production risk control platform. Combined with the operation control module based on 3D information, it will be convenient to manage and control power equipment faults. Moreover, early warnings of security risks can be issued, thus improving the safety risk identification and operation control ability of power grid engineering.

5. Conclusions

In this paper, an effective strategy based on a modified PCNN is proposed for the diagnosis of power equipment on transmission lines. The parameters of the PCNN model interact with each other via the characteristic of the synchronous pulse and the maximum similarity thresholding rule, allowing the model to segment images in a self-organized way. By incorporating local clustering into the setting of the linking coefficient, our model can segment images with high performance. Finally, to assess the effectiveness of the proposed model, experiments on real-world images were conducted; the results yielded by the proposed method are more competitive than those of several existing segmentation methods and demonstrate the advantage of our segmentation strategy. However, the following critical issues should be considered further: (1) the maximum threshold drifts when intensity inhomogeneity appears in the infrared target, which degrades the PCNN segmentation performance; (2) during the calculation of the linking coefficient, a small obtained value often leads to more iterations. In the near future, effort will be devoted to adjusting the threshold to account for intensity inhomogeneity and to finding a better rule for selecting the linking coefficient. Furthermore, combining the method with the three-dimensional information of the operation control module can promote the ability of safety risk identification and operation control of power grid engineering in transmission line inspection.

Author Contributions

Y.T. provided the idea of work, J.T. implemented the idea by MATLAB, X.Z. organized the framework of this manuscript, C.C. set up the experiment platform, Z.C. collected the experiment data, and Z.H. conducted the calculation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project "Research on Security Risk Identification and Operation Control Technology of Power Grid Engineering Based on 3D Spatial Information Fusion" (grant number 5700-202233182A-1-1-ZN) and supported by the Science and Technology Project of the Headquarters of the State Grid Corporation of China.

Data Availability Statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Eckhorn, R.; Reitboeck, H.J.; Arndt, M.; Dicke, P.W. Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex. Neural Comput. 1990, 2, 293–307.
2. Wang, Z.B.; Ma, Y.D.; Cheng, F.Y.; Yang, L.Z. Review of pulse coupled neural networks. Image Vis. Comput. 2010, 28, 5–13.
3. Gao, C.; Zhou, D.G.; Guo, Y.C. Automatic iterative algorithm for image segmentation using a modified pulse-coupled neural network. Neurocomputing 2013, 119, 332–338.
4. Zhou, D.; Gao, C.; Guo, Y.C. Simplified pulse coupled neural network with adaptive multilevel threshold for infrared human image segmentation. J. Comput.-Aided Des. Comput. Graph. 2013, 25, 208–214.
5. Zhou, D.; Chi, M. Pulse-coupled neural network and its optimization for segmentation of electrical faults with infrared thermography. Appl. Soft Comput. 2019, 7, 252–260.
6. Kong, X.W.; Huang, J.; Shi, H. Infrared image multi-threshold segmentation algorithm based on improved pulse coupled neural networks. J. Infrared Millim. Waves 2001, 20, 365–379.
7. Kong, X.W.; Huang, J.; Shi, H. Improved pulse coupled neural network for target segmentation in infrared images. In Proceedings of the SPIE—The International Society for Optical Engineering, Wuhan, China, 10–12 October 2001.
8. Chen, X.J.; Chai, X.D. Infrared image segmentation based on a simplified PCNN. J. Anhui Univ. Nat. Sci. Ed. 2010, 34, 74–77.
9. Ge, L.J.; Liu, H.X.; Li, Y.Z.; Zhang, J. A virtual data collection model of distributed PVs considering spatio-temporal coupling and affine optimization reference. IEEE Trans. Power Syst. 2022, early access.
10. Ge, L.J.; Liu, J.H.; Yan, J.; Rafiq, M.U. Improved harris hawks optimization for configuration of PV intelligent edge terminals. IEEE Trans. Sustain. Comput. 2022, 7, 631–643.
11. Shi, M.H.; Jiang, S.S.; Wang, H.R.; Xu, B.G. A simplified pulse-coupled neural network for adaptive segmentation of fabric defects. Mach. Vis. Appl. 2009, 20, 131–138.
12. Wei, S.; Hong, Q.; Hou, M.S. Automatic image segmentation based on PCNN with adaptive threshold time constant. Neurocomputing 2011, 74, 1485–1491.
13. Fu, J.C.; Chen, C.C.; Chai, J.W.; Wong, S.T.C.; Li, I.C. Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging. Comput. Med. Imaging Graph. 2010, 34, 308–320.
14. Cheng, D.S.; Liu, X.F.; Tang, X.L.; Liu, J.F.; Huang, J.H. Image segmentation method based on an improved PCNN. Chin. High Technol. Lett. 2007, 17, 1228–1233.
15. Kuntimad, G.; Ranganath, H.S. Perfect image segmentation using pulse coupled neural networks. IEEE Trans. Neural Netw. 1999, 10, 591–598.
16. Rava, T.H.; Bettaiah, V.; Ranganath, H.S. Adaptive pulse coupled neural network parameters for image segmentation. World Acad. Sci. Eng. Technol. 2011, 73, 1046–1056.
17. Chen, Y.L.; Park, S.K.; Ma, Y.D.; Ala, R. A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans. Neural Netw. 2011, 22, 880–892.
18. Fang, Y.; Qi, F.H.; Pei, B.Z. PCNN implementation and applications in image processing. J. Infrared Millim. Waves 2005, 24, 291–295.
19. Gao, C.; Zhou, D.G.; Guo, Y.C. An iterative thresholding segmentation model using a modified pulse coupled neural network. Neural Process. Lett. 2014, 30, 81–95.
20. Zhou, D.G.; Gao, C.; Guo, Y.C. A coarse-to-fine strategy of iterative segmentation using simplified pulse-coupled neural network. Soft Comput. 2014, 18, 557–570.
21. Bi, Y.W.; Qiu, T.S.; Li, X.B.; Guo, Y. Automatic image segmentation based on a simplified pulse coupled neural network. Lect. Notes Comput. Sci. 2004, 3174, 405–410.
22. Zhu, S.W.; Hao, C.Y. An approach for fabric defect image segmentation based on the improved conventional PCNN model. Acta Electron. Sin. 2012, 40, 611–617.
23. Qi, Y.F.; Huo, Y.L.; Zhang, J.S. Automatic image segmentation method based on simplified PCNN and minimum scatter within clusters. J. Optoelectron. Laser 2008, 19, 1258–1260, 1264.
24. Peng, Z.M.; Jiang, B.; Xiao, J.; Meng, F.B. A novel method of image segmentation based on parallelized firing PCNN. Acta Autom. Sin. 2008, 34, 1169–1173.
25. Gu, X.D.; Guo, S.D.; Yu, D.H. A new approach for automated image segmentation based on unit-linking PCNN. In Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2002.
26. Karvonen, J.A. Baltic sea ice SAR segmentation and classification using modified pulse-coupled neural networks. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1566–1574.
27. Li, M.; Cai, W.; Li, X.Y. An adaptive image segmentation method based on a modified pulse coupled neural network. Lect. Notes Comput. Sci. 2006, 4221, 471–474.
28. Stewart, R.D.; Fermin, I.; Opper, M. Region growing with pulse-coupled neural networks: An alternative to seeded region growing. IEEE Trans. Neural Netw. 2002, 13, 1557–1562.
29. Lu, Y.F.; Miao, J.; Duan, L.J.; Qiao, Y.H.; Jia, R. A new approach to image segmentation based on simplified region growing PCNN. Appl. Math. Comput. 2008, 205, 807–814.
30. Yan, C.M.; Guo, B.L.; Ma, Y.D.; Zhang, X. New adaptive algorithm for image segmentation using the dual-level PCNN model. J. Optoelectron. Laser 2011, 22, 1102–1106.
31. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
32. Zou, Y.; Dong, F.; Lei, B. Maximum similarity thresholding. Digit. Signal Process. 2014, 28, 120–135.
33. Ahmed, M.N.; Yamany, S.M.; Mohamed, N.; Farag, A.A.; Moriarty, T. A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 2002, 21, 193–199.
34. Qiyin, W.; Jiandong, X.; Xinhui, R. An adaptive segmentation method of substation equipment infrared image. Infrared Technol. 2016, 38, 770–773.
Figure 1. Schematic representation of a general pulse-coupled neuron.
Figure 2. Schematic representation of a modified pulse-coupled neuron.
Figure 3. Real-world infrared images: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 4. Segmentation results of OTSU method: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 5. Segmentation results of MST method: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 6. Segmentation results of FCM method: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 7. Segmentation results of Mean-shift method: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 8. Segmentation results of the proposed method: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 9. The gradient result of infrared images: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Figure 10. Similarity of MST method: (a) image 1; (b) image 2; (c) image 3; (d) image 4.
Table 1. Thresholding of MST and OTSU.

        Image 1   Image 2   Image 3   Image 4
OTSU    21        59        31        57
MST     15        45        22        41
Table 2. Mean value of final results.

Mean Value            Image 1    Image 2    Image 3    Image 4
FCM              v1     1.0603     21.05      1.0173     5.7834
                 v2    41.3490     96.8447   53.8848    31.1222
Proposed method  v1    14.1560     66.2512   30.4860    14.9498
                 v2   132.1046    132.1663  145.4363   129.3911
Table 3. Evaluation of similarity.

            Image 1   Image 2   Image 3   Image 4
MST         0.5120    0.3872    0.3433    0.4815
Proposed    0.3430    0.2419    0.2445    0.3515
Table 4. Evaluation of time complexity.

             Image 1   Image 2   Image 3   Image 4
OTSU         0.1402    0.0022    0.1076    0.0042
MST          3.7802    0.2007    3.4117    0.9979
FCM          5.6357    0.9010   13.6103    7.7979
Meanshift    5.5414    0.2870    7.5225    1.1907
Proposed     3.2133    0.1252    3.2280    0.3673
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
