Guided Filter-Based Edge Detection Algorithm for ICT Images of Solid Rocket Motor Propellant

Abstract: As nondestructive testing based on industrial computerized tomography (ICT) is widely used for defect detection in solid rocket motor (SRM) propellant, the demand for corresponding image processing algorithms is increasing. To extract better defect information from SRM propellant, we studied edge detection for its ICT images. This paper proposes a guided filter-based edge detection algorithm for noisy ICT images of SRM propellant. The algorithm innovatively uses guided filters to converge the detection results of type I edges, which have good edge continuity, toward type II edges, which have clear positioning. The resulting type III edges combine good edge continuity with clear positioning. The experimental results show that the proposed algorithm achieves edge detection effectively.


Introduction
Solid rocket motors (SRMs) serve as power devices for some aircraft and various long-range rockets. The structural integrity of SRM propellant has always been the focus of attention. Cracks, slags, and bubbles are easily generated during propellant production, while cracks and bubbles are easily generated during storage and transportation [1]. Since all these defects seriously affect the durability and reliability of SRMs, determining their type, location, and size in the propellant is important for structural integrity analysis. Industrial computerized tomography (ICT) has been applied in the nondestructive testing of SRM propellant as it provides clear cross-section images [2].
As the edges in ICT images of SRM propellant contain a wealth of defect information, high-accuracy edge detection constitutes a prerequisite for defect analysis. The propellant images produced by ICT inevitably contain noise, whose sources can be classified into three categories: (1) the quantum noise generated by the X-ray beam; (2) the inherent physical noise of ICT systems; (3) the noise generated by reconstruction algorithms [3]. Achieving good edge detection under intensive noise interference is an urgent problem for SRM safety evaluation.
Edge detection methods have matured over the years and can be broadly divided into traditional and deep learning-based models. Traditional edge detection methods are mainly based on operations in the spatial and frequency domains of the images, and include the Sobel detector [4], the Canny detector [5], and the wavelet transform-based detector [6]. Each of these offers advantages in noise reduction, single-pixel response, or edge continuity, but no single one satisfies all three requirements at once. Edge detection methods based on deep learning include holistically-nested edge detection (HED) [7], CSCNN [8], DeepEdge [9], and DeepContour [10]. Since convolutional neural networks (CNNs) can automatically extract image features after training [11,12], edge detection methods based on deep learning have become a research hotspot in recent years. However, CNN-based edge detection methods often run slowly, taking from several seconds to a few hours per image [13], even with the help of modern GPUs. The large data volume of SRM propellant ICT images also poses a challenge for the training of CNNs [14]. Deep learning-based edge detection methods often employ multi-scale and multi-layer feature learning to extract more comprehensive edge information and combine the outputs from multiple scales through training on the last fusion layer [15]. As a result, a single-pixel response at the image edges becomes difficult to achieve. Hence, the existing edge detection algorithms are not well adapted to SRM propellant ICT images.
The literature [16] has proven the feasibility of enhancing edge detection results using image fusion. However, simple image fusion cannot achieve this enhancement because of the heavy noise in propellant ICT images. To solve this problem, we developed an image fusion method based on guided filters for edge detection in propellant ICT images. Unlike existing methods, the edge detection results from the spatial and frequency domains can be fused by guided filters, combining their advantages. The proposed method was compared with traditional edge detection methods to prove its superiority, and its feasibility was verified through the detection of various defects in the propellant.
The rest of this paper is organized as follows. The problem description is presented in Section 2. Section 3 presents the proposed guided filter framework. Section 4 presents the experimental results. Section 5 concludes this paper.

Problem Description
Edges in an image are areas of pixel gray-value mutation [17], where the derivatives increase. Thus, traditional edge detection algorithms are mainly implemented by computing the first and second derivatives of the image [18]. Since the image is a discrete signal, the derivative of the pixel gray-value function is often represented by adjacent-pixel differences:

∂f/∂x ≈ f(x + 1, y) − f(x, y), ∂f/∂y ≈ f(x, y + 1) − f(x, y) (1)

where f(x, y) is a function representing the digital image. Templates for the various edge detection operators have been derived from this principle, and the image gradient is computed by convolving the image with the template of an edge detection operator. As derivative calculations are highly susceptible to noise, filters must be applied to reduce the noise. This creates a contradiction: the noise must be reduced while the edge characteristics are maintained, and the contradiction is more prominent in noisy propellant ICT images.

The ideal edges in ICT images of SRM propellant should be continuous, occupy as few pixels as possible, and be free from background noise interference. The noise in propellant ICT images exists only in the image texture; therefore, the edge information itself is not disturbed even though the images are defined as noisy. Pursuing edge continuity contradicts the single-pixel response of edges, which is essentially a contradiction in segmenting noise from edges within the image texture. Less noise suppression in the texture yields good edge continuity but more spurious edge pixels, while complete noise suppression yields poor edge continuity but fewer spurious edge pixels, as shown in Figure 1a-d.
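As a minimal illustration (a NumPy sketch, not the paper's implementation), the adjacent-pixel difference of Equation (1) and the application of an operator template by convolution can be written as follows; the Sobel template and the toy step-edge image are chosen here only for demonstration:

```python
import numpy as np

def gradient_by_differences(f):
    """Approximate the first derivatives of a gray-scale image
    by adjacent-pixel differences, as in Equation (1)."""
    gx = np.zeros_like(f, dtype=float)
    gy = np.zeros_like(f, dtype=float)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # df/dx ~ f(x+1, y) - f(x, y)
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # df/dy ~ f(x, y+1) - f(x, y)
    return gx, gy

def convolve2d_valid(f, k):
    """Minimal 'valid' 2-D convolution used to apply an operator template."""
    k = np.flipud(np.fliplr(k))         # convolution flips the kernel
    h, w = k.shape
    out = np.zeros((f.shape[0] - h + 1, f.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(f[i:i + h, j:j + w] * k)
    return out

# Sobel template for the horizontal gradient
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# A vertical step edge: gray value jumps from 0 to 1 between columns 2 and 3
img = np.zeros((5, 6))
img[:, 3:] = 1.0
gx, gy = gradient_by_differences(img)
sx = convolve2d_valid(img, SOBEL_X)    # strong response only at the step
```

The difference image gx responds with magnitude 1 exactly at the step, and the Sobel output responds with magnitude 4 there; adding noise to img spreads spurious responses across the texture, which is the susceptibility discussed above.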
The above issue can be solved to some extent by two-dimensional wavelet decomposition [19]. As edges and noise both occupy the high-frequency part of the image, they are segmented at high frequencies mainly through changes in the Lipschitz index during image decomposition. The Lipschitz index is often used to measure the singularity produced by a mutation point in the signal, and the singularity in images is mainly reflected in the edges and noise. The Lipschitz index can be related to the magnitude of the wavelet coefficients during image decomposition as follows:

|W_{2^j} f(x)| ≤ K · (2^j)^α (2)

where f(x) has a vanishing moment of order n, α is the Lipschitz index, j is the decomposition scale, and K is a constant.
According to Equation (2), the mode maxima of the wavelet coefficients increase continuously with the increase of decomposition scale j if α is above 0. Otherwise, the mode maxima of wavelet coefficients gradually decrease with the increasing decomposition scale j. In general, the Lipschitz index of the useful signal α is greater than 0, indicating that the noise in the high-frequency part of the image can be removed by increasing the decomposition scale j, and the edge extraction is achieved while preserving the remaining high-frequency parts.
Thus the segmentation of edges and noise in high frequencies is enabled in practical processing based on thresholding operations to the mode maxima of the wavelet coefficients [20]. As a result, the edges and noise are perfectly segmented, but the continuity of the extracted edges is poor. The edge detection results based on wavelet mode maxima are shown in Figure 1e.
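A one-dimensional NumPy sketch (an illustration of the principle, not the paper's 2-D implementation) shows the scale behavior behind Equation (2): for an ideal step edge the undecimated Haar detail response persists across scales, while the response of an isolated noise spike decays, so thresholding the moduli at a coarser scale keeps only the edge. The signal, spike position, and threshold are assumptions for demonstration:

```python
import numpy as np

def haar_detail(s, j):
    """Undecimated Haar detail at dyadic scale 2**j: difference of the
    means of two adjacent blocks of 2**j samples."""
    w = 2 ** j
    n = len(s) - 2 * w + 1
    return np.array([s[i + w:i + 2 * w].mean() - s[i:i + w].mean()
                     for i in range(n)])

# Toy signal: an ideal step edge at n = 8 plus one noise spike at n = 3
s = np.zeros(16)
s[8:] = 1.0
s[3] += 0.5

d1 = haar_detail(s, 1)   # fine scale (j = 1)
d2 = haar_detail(s, 2)   # coarse scale (j = 2)

# The edge response stays at 1.0 across scales, while the spike's peak
# response halves (0.25 -> 0.125); a fixed threshold at the coarse scale
# therefore keeps only the edge (the modulus-maxima thresholding idea).
edge_mask = np.abs(d2) > 0.3
```

The surviving mask is narrow and edge-only, which is exactly why the extracted edges are clean but poorly continuous: any edge pixel whose modulus falls below the threshold is dropped along with the noise.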
According to the above discussion, one single traditional edge detection method is insufficient to achieve ideal edge detection. However, some edge detection requirements are well satisfied, such as good edge continuity and the single-pixel response of the edge.
Detected edges with good continuity that occupy more pixels and retain noise in the image texture are defined as type I, while edges with single-pixel responses that form a narrow but poorly continuous line of pixels are defined as type II.

The Proposed Algorithm Based on the Guided Filter
In this paper, we propose to converge type I edges toward type II edges using a guided filter, thereby achieving low-cost and efficient edge detection for SRM propellant. Among the numerous edge detection algorithms, morphological methods, which erode, dilate, open, and close images with structuring elements to obtain edges, yield the best edge continuity. Type I edges were therefore obtained using the modified morphological method described in the literature [21], as shown in Figure 1f. The edges detected by the wavelet modulus maxima method qualify as type II edges, so we selected this method to acquire type II edges.
A guided filter is essentially a local linear model that takes a guide image I and an original image p as inputs and outputs the filtered image q [22]. The guided filter-based algorithm flow is shown in Figure 2. Its basic assumption can be described as

q_i = a_k I_i + b_k, ∀ i ∈ w_k (3)

where I_i and q_i are the values at pixel i in the guide and filtered images, and a_k and b_k are the linear transformation coefficients in a square window w_k with center k and radius r. a_k and b_k are calculated with the optimization goal of keeping the output q as close as possible to the input p within each window, which gives

a_k = ((1/n_w) Σ_{i∈w_k} I_i p_i − μ_k p̄_k) / (σ_k² + ε) (4)

b_k = p̄_k − a_k μ_k (5)

where μ_k and σ_k² are the mean and variance of the guide image I in window w_k, n_w is the number of pixels in the window, p̄_k = (1/n_w) Σ_{i∈w_k} p_i is the mean of the original input image p in w_k, and ε is a regularization parameter. After evaluating a_k and b_k, the transformed q_i in window w_k can be calculated with Equation (3). Since pixel i is covered by multiple windows, averaging is required to obtain the final q_i:

q_i = (1/n_w) Σ_{k : i∈w_k} (a_k I_i + b_k) = ā_i I_i + b̄_i (6)

Because q is fitted to p while remaining a local linear transform of I, the optimal convergence of type I edges toward type II edges can be achieved using the guided filter. As a result, the final edges have both continuity and a single-pixel response. The edges after convergence are defined as type III.
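A compact NumPy sketch of the generic guided filter (in the sense of He et al. [22], not the exact code used in this paper) follows; the edge-padded box filter and the choice of r and ε are assumptions for illustration, and in this paper's setting one edge map would be supplied as the guide I and the other as the input p:

```python
import numpy as np

def box_mean(img, r):
    """Mean of img over a (2r+1) x (2r+1) window (edge-padded borders),
    computed with a summed-area table."""
    h, w = img.shape
    p = np.pad(img, r, mode='edge')
    c = np.zeros((h + 2 * r + 1, w + 2 * r + 1))
    c[1:, 1:] = np.cumsum(np.cumsum(p, axis=0), axis=1)
    k = 2 * r + 1
    s = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return s / k ** 2

def guided_filter(I, p, r=2, eps=1e-4):
    """Guided filter: q is a per-window linear transform of the guide I
    fitted to the input p, then averaged over overlapping windows."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2           # sigma_k^2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                         # Equation (4)
    b = mean_p - a * mean_I                            # Equation (5)
    return box_mean(a, r) * I + box_mean(b, r)         # Equation (6)
```

Two sanity properties fall out of the equations: a flat input passes through unchanged (var_I = 0 gives a = 0, b = p̄_k), and with p = I and small ε the filter approximately reproduces I, preserving its edges.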
We graded the gray-scale values of the edges in the images to better illustrate the convergence of type I edges to type II edges. A thresholding method is adopted to process type II edges: the gray-scale value of a point is set to 1 when it exceeds a threshold and to 0 otherwise. Thus, only the gray scale of type I edges needs to be graded. The gray-scale contour plot of type I edges is shown in Figure 3. As shown in Figure 3, the changes in gray level are small, ranging from 0.2 to 0.4. The gray levels of type I edges can therefore be divided into 3 ranges, which serve as the criteria for grading the gray levels of the pixel spots in Figure 4. The procedure by which the guided filter converges type I edges to type II edges is shown in Figure 4. After processing with the guided filter, the pixel spot width of type III edges decreased significantly, and background noise interference was reduced. The edge discontinuity caused by the thresholding operation on type II edges is also addressed. This completes the proposed guided filter framework; the experimental results are presented next.
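The two pre-processing steps above can be sketched in NumPy as follows. The binarization threshold and the two interior boundaries splitting [0.2, 0.4] into 3 ranges are hypothetical values for illustration; the paper's actual boundaries are read from Figure 3:

```python
import numpy as np

def binarize_type2(edge, t=0.28):
    """Thresholding for type II edges: 1 above the threshold, 0 otherwise.
    The threshold t here is an assumed value."""
    return (edge > t).astype(float)

def grade_type1(edge, bounds=(0.267, 0.333)):
    """Grade type I edge gray levels (roughly 0.2-0.4) into 3 ranges
    0, 1, 2 using assumed interior boundaries."""
    return np.digitize(edge, bounds)

# Toy 2x2 edge patch with gray levels inside the observed 0.2-0.4 band
e = np.array([[0.25, 0.30],
              [0.38, 0.22]])
```

Here grade_type1(e) assigns each pixel to one of the 3 ranges, and binarize_type2(e) produces the 0/1 map used for type II edges.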

Experimental Results
This section presents results on real experimental datasets. To evaluate the proposed guided filter-based algorithm, we enlarged the defect sections of the propellant ICT images. The simulation software is MATLAB 2016a, running under the Windows 10 operating system with a 64-bit processor.
ICT images of propellant with typical defects are shown in Figure 5.

Type I, II, and III edges of the typical defects are presented in Figure 6, where (a), (b), and (c) show type I, II, and III edges of Crack II; (d), (e), and (f) show type I, II, and III edges of Bubble I; and (g), (h), and (i) show type I, II, and III edges of Slag I. For each type of defect, the quality of the type III edges is higher than that of the type I and II edges, which aligns with the desired goals of better continuity and better noise reduction.
To assess the proposed algorithm more objectively, the PSNR [23] and SSIM [24] of each image after edge detection were calculated as edge detection quality evaluation indexes. The PSNR and SSIM values of three different images, each with a single defect, are listed in Tables 1 and 2; the three images are shown in Figure 7. According to Table 1, Edge I has the largest PSNR value, and Edge III has a greater PSNR value than Edge II. The reason is that PSNR is calculated from the error at corresponding pixel points, and Edge I still contains noise, which makes it more similar to the original image. Thus, the higher PSNR value of Edge I does not indicate higher edge detection quality. For Edge II and Edge III, the noise is filtered out of the detection results, so a higher PSNR indicates edges closer to those in the original image; the results suggest that Edge III is better than Edge II. SSIM measures the similarity of two images in terms of brightness, contrast, and structure, and in edge detection the main factor affecting SSIM is structure. According to Table 2, Edge III has the largest SSIM value, i.e., the best structural similarity. To further verify the effectiveness of the proposed algorithm, 10 images were randomly selected from a database of 50 images, as shown in Figure 8, together with the Edge III detection results for these ten images. The PSNR and SSIM values calculated for the three types of edges of these ten images are shown in Tables 3 and 4.
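For reference, the two evaluation indexes can be sketched in NumPy as follows; the SSIM here is a simplified single-window (global) form, whereas standard implementations such as that cited in [24] use a sliding window:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and an image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Single-window (global) SSIM combining luminance, contrast,
    and structure terms with the usual stabilizing constants."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

As a quick check, two images differing by a constant 0.1 (on a 0-1 scale) give a PSNR of 20 dB, and any image compared with itself gives an SSIM of 1.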
The diagonal upward arrows in the tables indicate that a value increases compared with the previous type of edge, while the diagonal downward arrows indicate that it decreases. The variation of the PSNR values in Table 3 differs from that in Table 1 because each image in Table 1 is a quarter of the full image, where the interference of noise is greater than in the whole image; hence the PSNR value of Edge I in Table 1 is larger. As can be seen in Tables 3 and 4, the PSNR and SSIM values for Edge II are larger than those for Edge I, and the values for Edge III are larger than those for Edge II. Therefore, the proposed algorithm is able to obtain higher-quality edges.
Edge III is obtained by converging Edge I toward Edge II through the guided filter, so the computational effort is bound to increase. The operation time of the algorithm can be used to characterize its computation. The operation times of each edge type for the above ten images are given in Table 5, and Figure 9 shows the average time for each edge type as a percentage of the total time. As can be seen in Figure 9, there is little difference in computation time between Edge I and Edge II. Compared with Edge I and Edge II, obtaining Edge III increases the computation by about 2.45 times, for a total time of about 4.53 s. Considering that manual analysis of ICT images takes much longer in practical applications, this increase in computation is within an acceptable range, and Edge III is of higher quality and better suited to the needs of practical applications.

Conclusions
This paper demonstrated that the guided filter is a very competitive image fusion method for edge detection. As a new edge detection method, the guided filter-based algorithm has advantages over existing edge detection algorithms. Three experiments were conducted, and according to both subjective evaluation and objective indexes, the edge information detected by the proposed method was clearer and the edge continuity was better. Therefore, the proposed method can effectively meet the edge detection requirements for SRM propellant at the theory, algorithm, and experiment levels. The guided filter-based algorithm has great application value for edge information extraction from images with intensive noise.
Author Contributions: Writing-original draft preparation and software, J.D.; conceptualization and methodology, Z.X.; resources, writing-review and editing, T.L. All authors have read and agreed to the published version of the manuscript.
