As noted in Section 2.1, the parameter of the AC equation primarily controls the width of the interface. Figure 1 compares the solution u of the equation with the original sequence for different values of this parameter and examines its effect on the model. Specifically, Figure 1a,b display the numerical results of comparing u with the original sequence for two different parameter values. The results show that the choice of this parameter has a significant effect on the solution of the AC equation: a smaller value captures a steeper interface, while a larger value yields a smoother solution but may lose some fine structural features. Therefore, choosing an appropriate value is important for the accuracy of the model.
The number of iterations governs the time-stepping process: u is updated incrementally by looping through multiple iterations when solving the partial differential equation. Each iteration corresponds to one time step, and through these iterations, we can observe how u evolves over time. Figure 2 compares the solution u of the equation with the original sequence for different numbers of iterations. Specifically, Figure 2a–d show the results for 4, 8, 16, and 32 iterations, respectively. As the number of iterations is reduced from 32 to 4, the solution stays closer to the original sequence, which indicates that the choice of the number of iterations also has a significant effect on the solution of the AC equation.
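The interplay between the interface parameter and the iteration count can be illustrated with a minimal time-stepping sketch. The paper's exact AC formulation is given in Section 2.1; here a common 1D Allen–Cahn form u_t = ε²u_xx + u − u³ with periodic boundaries and explicit Euler steps is assumed purely for illustration (the values of eps and dt, the default iteration count, and the helper name `allen_cahn_smooth` are all illustrative, not the paper's actual setup):

```python
# Hedged sketch: explicit Euler time-stepping of a 1D Allen-Cahn
# equation u_t = eps^2 * u_xx + u - u^3 applied to a sequence.
# eps, dt, and the iteration count are illustrative choices.
def allen_cahn_smooth(signal, eps=0.1, dt=1e-3, iterations=8):
    u = list(signal)
    n = len(u)
    for _ in range(iterations):  # each iteration advances one time step
        # discrete Laplacian with periodic boundary conditions
        lap = [u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n] for i in range(n)]
        # reaction term u - u^3 keeps values near the stable states +-1
        u = [u[i] + dt * (eps**2 * lap[i] + u[i] - u[i]**3) for i in range(n)]
    return u
```

With few iterations the output stays close to the input sequence; more iterations give the diffusion term more time to act, which matches the behavior described for Figure 2.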
For the sliding window introduced in Section 2.2, an illustration is used here to explain its operating mechanism in detail, as shown in Figure 3. We process a 16 × 16-pixel image with a 7 × 7 blue sliding window, as shown in Figure 3a. To ensure that each window lies completely inside the image, the window center starts 3 pixels from the edge of the image (the half-width of the 7 × 7 window), i.e., at position (3, 3), and stops 3 pixels from the opposite edge, at position (12, 12). The 7 × 7 blue sliding window in Figure 3a moves right and down in 1-pixel increments, covering the entire processable area. Since 3 pixels are trimmed from each side to keep the window entirely inside the image, the processed region becomes 10 × 10 pixels. The output image therefore has 10 × 10 pixels, as shown in Figure 3b, where each pixel value is the result computed from the 7 × 7 window at the corresponding position in the original image.
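Assuming the 7 × 7 window implied by the 3-pixel half-width described above, the traversal can be sketched as follows; `stat` stands in for whatever per-window statistic the method actually computes (the mean is used here purely for illustration, and `slide` is a hypothetical helper name):

```python
# Hedged sketch of the sliding window: the center runs from
# (half, half) to (H-1-half, W-1-half) in 1-pixel steps, so a
# 16x16 input with half = 3 (a 7x7 window) yields a 10x10 output.
def slide(image, half=3, stat=lambda vals: sum(vals) / len(vals)):
    h, w = len(image), len(image[0])
    out = []
    for r in range(half, h - half):          # e.g. rows 3..12 for h = 16
        row = []
        for c in range(half, w - half):      # e.g. cols 3..12 for w = 16
            vals = [image[i][j]
                    for i in range(r - half, r + half + 1)
                    for j in range(c - half, c + half + 1)]
            row.append(stat(vals))           # one output pixel per window
        out.append(row)
    return out
```

Each output pixel corresponds to one window position, which is why the output shrinks by twice the half-width in each dimension.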
The choice of the fractal order q is also essential when using the proposed method for image segmentation. A range of q values is explored, as shown in Figure 4. Figure 4a is the original image, and Figure 4b–f show the segmentation results under different values of q. By comparison, the method shows significant differences in segmentation effect for different q values. Specifically, Figure 4b–e, with q values of −5, −2, −1, and 1, respectively, show that the edge detection is not clear enough and the boundaries of some regions are blurred. Figure 4f, with q = 5, shows the clearest edge detection results, with well-defined boundaries of the individual shapes and no obvious noise interference, while preserving the detailed information of the image. It can therefore be concluded that the method achieves the best segmentation when q = 5 and captures the boundary information of the target region most accurately. In the following experiments, unless otherwise noted, the window size is fixed at 7 × 7 and q at 5.
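In MF-DFA, q typically denotes the moment order of the fluctuation function. Assuming that is the role of q here, a minimal sketch of the q-th-order fluctuation function over a 1D series follows; the segment length, the linear detrending order, the omission of the q = 0 case, and the helper name `fq` are all illustrative assumptions rather than the paper's exact procedure:

```python
# Hedged sketch of the q-th-order MF-DFA fluctuation function F_q(s)
# for a 1D series (q != 0 assumed in this sketch).
def fq(series, s, q):
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:                       # cumulative (integrated) profile
        acc += x - mean
        profile.append(acc)
    n_seg = len(profile) // s              # trailing remainder ignored here
    t = list(range(s))
    tm = (s - 1) / 2
    denom = sum((ti - tm) ** 2 for ti in t)
    variances = []
    for k in range(n_seg):
        seg = profile[k * s:(k + 1) * s]
        ym = sum(seg) / s                  # least-squares linear detrend
        beta = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, seg)) / denom
        alpha = ym - beta * tm
        res = [yi - (alpha + beta * ti) for ti, yi in zip(t, seg)]
        variances.append(sum(r * r for r in res) / s)
    # power mean of order q over the per-segment fluctuations sqrt(variance):
    # large positive q weights segments with large fluctuations, negative q
    # weights segments with small fluctuations.
    return (sum(v ** (q / 2) for v in variances) / n_seg) ** (1 / q)
```

A large positive q such as q = 5 emphasizes segments with large fluctuations, which is consistent with the sharper edge responses reported for Figure 4f.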
3.1. Feasibility Experiments of MF-AC-DFA
This subsection demonstrates the feasibility of the proposed method by altering the geometry’s shape, the image’s background color, and its resolution.
First, three geometric shapes (a circle, a triangle, and a square) are drawn on the same background at the same resolution, as shown in Figure 5a–c. The best segmentation results for the three shapes are shown in Figure 5d–f, which demonstrate complete segmentation of each shape. Table 1 lists the specific Dice coefficient values: 0.9853 for the circle, 0.9524 for the triangle, and 0.9696 for the square. These values, all near 1, indicate segmentation results close to the ground truth with only minor variations. The algorithm is stable across the three shapes but exhibits varying sensitivity to each shape.
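The coefficient reported in these tables behaves like the Dice similarity coefficient (the text later refers to a "dice coefficient"); a minimal sketch over flattened binary masks, with `dice` as a hypothetical helper name:

```python
# Hedged sketch of the Dice coefficient: 2|A n B| / (|A| + |B|) between a
# predicted binary mask and the ground-truth mask, both flattened to 0/1
# sequences. A value near 1 means near-perfect agreement.
def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))   # |A n B|
    total = sum(pred) + sum(truth)                    # |A| + |B|
    return 2 * inter / total if total else 1.0        # empty masks agree
```

This is why values such as 0.9853 indicate that the segmented region overlaps the ground truth almost completely.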
Next, the proposed algorithm's performance under different background colors is evaluated using three images with red, green, and blue backgrounds, as shown in Figure 6a–c. The best segmentation results are shown in Figure 6d–f. Visually, the segmentation results are very close to the original graphics, indicating that the algorithm can effectively separate the black circular targets from backgrounds of various colors. Table 2 provides a more specific quantitative evaluation: the Dice coefficient is 0.9815 for the red background, 0.9803 for the green background, and likewise 0.9803 for the blue background. These values are all very close to 1, which implies that the segmentation results are highly consistent with the ground truth and shows that the algorithm is accurate and stable for these three background colors.
Further, six background images with varying colors and different English letters are selected for testing. Figure 7a–f show the different background colors: red, orange, yellow, green, blue, and purple, all at the same resolution. The optimal segmentation results, achieved by adjusting the constraints, are presented in Figure 7g–l. Visually, the English letters are segmented effectively regardless of the background color. Quantitatively, according to Table 3, the Dice coefficients are 0.9677 for the red background, 0.9630 for orange, 0.9593 for yellow, 0.9365 for green, and 0.9671 for blue, with the highest value of 0.9728 reached for the purple background. These values show that the algorithm can effectively and accurately segment the target from differently colored backgrounds. Except for the green background containing the letter G, all backgrounds yield a Dice coefficient of about 0.96, which further confirms that the proposed algorithm remains effective across background colors, although it is more sensitive to the shape of the target.
Finally, we consider whether the proposed algorithm remains feasible on images of different resolutions. The red, green, and blue background images from the previous experiment are used; see Figure 8a,e,i. Three resolutions are then chosen for the experiment: Figure 8b,f,j show the images at the lowest resolution, Figure 8c,g,k at the middle resolution, and Figure 8d,h,l at the highest resolution. Visually, the match between the segmentation result and the original graphic improves gradually as the resolution increases, and the segmentation effect is most ideal at the highest resolution. Table 4 gives the Dice coefficients for the different background colors at each resolution. For the red background, the Dice coefficient is 0.8633 at the lowest resolution, rises to 0.9443 at the middle resolution, and reaches its highest value of 0.9677 at the highest resolution. Similarly, the Dice coefficients for the green background are 0.8358, 0.8930, and 0.9365, and those for the blue background are 0.9137, 0.9341, and 0.9671, respectively. These values indicate that segmentation accuracy increases significantly with resolution, and this trend holds for all three background colors.
Overall, the proposed segmentation algorithm is not only robust to changes in background color and target shape but also adapts well to changes in image resolution. Although the segmentation quality is slightly inferior at lower resolutions, the algorithm achieves more accurate segmentation as the resolution increases. The proposed segmentation method is therefore broadly feasible.
3.2. Superiority Experiments of MF-AC-DFA
In this subsection, the proposed MF-AC-DFA method is compared with the gradient method for image segmentation through a series of experiments.
Figure 9 shows the comparison results of the two segmentation algorithms in geometric and real scenes. Specifically,
Figure 9a shows a geometric house image and
Figure 9d shows a real house image. To evaluate the performance of the different methods, this experiment applied the gradient method and the MF-AC-DFA method for segmentation, respectively. The computational results are displayed in
Figure 9b,c,e,f. For geometric house images, no matter which method is used, good contour results are obtained, as shown in
Figure 9b,c. In the segmentation experiments of real house images, by comparing
Figure 9e,f, it can be seen that the gradient method does not perform as well as MF-AC-DFA. Because the real scene contains more complex details and noise, the gradient method is easily disturbed during segmentation, resulting in unclear boundaries and missing information in some regions. The MF-AC-DFA method, in contrast, captures the details of the house effectively and is more accurate.
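For reference, a "gradient method" of this kind is typically a gradient-magnitude edge detector followed by a threshold. A minimal sketch using the standard Sobel kernels follows; the threshold value and the helper name `gradient_edges` are illustrative assumptions, not the paper's exact baseline:

```python
# Hedged sketch of a gradient-based edge detector: Sobel gradient
# magnitude with a fixed threshold. Border pixels are left as 0.
SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel

def gradient_edges(img, thresh=1.0):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(SX[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SY[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            # mark the pixel as an edge if the gradient magnitude is large
            out[r][c] = 1 if (gx * gx + gy * gy) ** 0.5 >= thresh else 0
    return out
```

Because such a detector responds only to local intensity changes, it is easily disturbed by noise and fine texture, which is consistent with the weaker results reported for the real-scene images.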
Further, a leaf image with more details is chosen next to evaluate the performance of the two segmentation methods.
Figure 10a shows an original image of a leaf with complex texture and irregular edges, which puts high demands on the segmentation algorithm.
Figure 10b,c shows the results of the segmentation of this leaf image using the gradient method and MF-AC-DFA method, respectively. As can be seen from
Figure 10b, the gradient method has some limitations in processing the leaf image: although it roughly outlines the blade, it performs poorly on the internal details of the leaf. In contrast, the MF-AC-DFA method shows significant advantages in segmenting the leaf image. As shown in Figure 10c, the method not only successfully extracts the overall contour of the blade but also resolves the internal details well: the texture and vein structure on the surface of the blade are preserved, and the segmentation result is finer and more realistic. In summary, the two segmentation methods differ little on simple segmentation targets. However, for images with more detail, the leaf experiments show that the MF-AC-DFA method has a clear advantage, outperforming the traditional gradient method in detail retention and vein recognition. This indicates that MF-AC-DFA is more accurate than the gradient method and is suitable for many types of image segmentation tasks, with particular application potential in natural scenes.
3.3. Applicability Experiments of MF-AC-DFA
The sagging effect of a wire is the arc formed by the natural sagging of the wire between two fixed points. Excessive sag may result in insufficient distance between the wire and the ground, buildings, or other obstacles, which may cause short circuits, electric shock accidents, and even injuries or property damage [
25]. The variation in arc sag on electric wires is a complex phenomenon that is influenced by several factors. It has been shown that an increase in temperature leads to thermal expansion of the conductor, which increases the arc sag [
26]. In addition, the plastic elongation of the material caused by the aging of the conductor also leads to a gradual increase in the arc sag [
27]. Image segmentation technology can separate the wires from different backgrounds, which facilitates calculating the arc sag and allows it to be combined with temperature, aging degree, and other data for comprehensive analysis, so as to effectively assess the safety of the wire. Therefore, it is necessary to segment wire images.
In this subsection, the focus is on evaluating the effectiveness of the proposed algorithm for wire image segmentation under different backgrounds. To fully validate its performance, experiments are conducted in two scenarios, simple background and complex background, respectively. The target images used for segmentation in the following experiments were taken from a paper by Zanella et al. [
28].
First, two images of wires with simple backgrounds are selected, as shown in
Figure 11a,b. The final image segmentation results are shown in
Figure 11c,d. As can be seen from the results in
Figure 11c, the experiment successfully separates the three parts of the wire, i.e., the silicone tubing, insulating tape, and pins, from the background. The segmentation result illustrated in
Figure 11d identifies the three components of the wire as well as the screw in the background; overall, our method yields satisfactory results. These two segmentation results on wires with simple backgrounds show that the proposed MF-AC-DFA algorithm is effective.
It is important to note that the background of the wire image is artificially controllable. Nevertheless, it is necessary to consider the segmentation of wires in very complex backgrounds.
Figure 12 illustrates the overall segmentation results. First, we apply the MF-AC-DFA method for initial segmentation.
Figure 12c,g show that although MF-AC-DFA extracts the wire contour completely, noise remains in the background, which affects the purity of the segmentation results. To further optimize the segmentation, morphological operations are introduced as a postprocessing step. Applying MF-AC-DFA first and then the morphological operations yields very good segmentation results; see
Figure 12d,h. To ensure the scientific validity of the experiments, the direct use of morphological operations alone is also tested. However, as shown in
Figure 12b,f, the method fails to achieve the desired results, and its performance is not as good as using MF-AC-DFA alone. This suggests that it is difficult to efficiently separate wires from the background by relying only on morphological operations in complex backgrounds, whereas MF-AC-DFA can more accurately capture the wire feature information.
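The morphological postprocessing can be illustrated with a binary opening, i.e., an erosion followed by a dilation, which removes isolated noise pixels while preserving solid regions. The 3 × 3 structuring element, the border handling (clipping the neighborhood at the image edge), and the helper names are illustrative assumptions, not necessarily the operations used in the paper:

```python
# Hedged sketch of morphological opening on a binary mask.
def neighborhood_op(mask, op):
    """Apply op (min = erosion, max = dilation) over each 3x3 neighborhood,
    clipping the neighborhood at the image border."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [mask[i][j]
                    for i in range(max(r - 1, 0), min(r + 2, h))
                    for j in range(max(c - 1, 0), min(c + 2, w))]
            out[r][c] = op(vals)
    return out

def opening(mask):
    eroded = neighborhood_op(mask, min)   # erosion removes isolated pixels
    return neighborhood_op(eroded, max)   # dilation restores solid regions
```

Isolated noise pixels vanish under the erosion and are never restored, whereas regions at least as large as the structuring element survive the opening, which matches the cleaner masks in Figure 12d,h.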
In conclusion, the image segmentation algorithm used in this experiment shows good segmentation results under simple background conditions for both single- and two-wire images by effectively extracting the edge information of the wires. When facing very complex backgrounds, although the complete wire contour can be segmented, the background noise still exists, and the performance of the algorithm will be affected to some extent. However, after the proposed algorithm is combined with morphological operations, excellent results are obtained.
By conducting image segmentation experiments on transmission towers with different shooting perspectives and under different weather, the overall structure of the tower can be separated from these backgrounds, which is the basis for recognizing the tower and its surroundings and an important prerequisite for condition monitoring. The aging and corrosion of transmission towers often first manifest themselves in changes in appearance, such as deformation of the tower body and surface corrosion. These changes are directly reflected in the tower's profile. By segmenting transmission tower images, it is possible not only to accurately identify transmission towers and their surroundings but also to monitor their condition. As a result, possible aging and corrosion problems can be predicted and a maintenance plan developed accordingly. This helps extend the service life of transmission towers and reduce maintenance costs.
The following section tests the segmentation effect of the proposed algorithm on transmission tower images under different shooting viewpoints and meteorological conditions. First, two typical viewpoints, elevation and top view, are selected for analysis. In addition, weather factors such as rain, snow, and haze degrade image quality [
29,
30,
31], thus affecting the accuracy and stability of the segmentation results. To comprehensively evaluate the performance of the algorithm, experiments were therefore conducted under various conditions, such as less cloudy versus cloudy, lit versus unlit, and foggy versus fog-free, to verify the robustness and effectiveness of the proposed method in complex real-world scenes. The transmission tower images used for segmentation are from the Transmission Towers and Power Lines Aerial-image Dataset (TTPLA), an open-source aerial-image dataset focusing on the detection and segmentation of transmission towers and power lines.
As shown in
Figure 13, image segmentation experiments are performed on transmission tower images from both the elevation and top viewpoints.
Figure 13a shows an elevation viewpoint image and
Figure 13b shows a top viewpoint image.
Figure 13c,d show the segmentation results for the elevation and top-view images, respectively, obtained with different algorithm parameters. Despite the noise in the segmented top-view image and the imperfect segmentation of the lower half of the transmission tower, the algorithm successfully segments the upper half and captures the main structure. This shows that the algorithm adapts to different shooting angles and can accurately recognize the main structure under varying conditions.
Next, the performance of the proposed algorithm is evaluated under less and more cloudy backgrounds. As shown in
Figure 14, image segmentation experiments are conducted on transmission tower images under less cloudy and more cloudy weather, respectively. Specifically,
Figure 14a shows a less cloudy transmission tower image and
Figure 14b shows a more cloudy transmission tower image.
Figure 14c,d show the segmentation results for the less cloudy and more cloudy conditions, obtained with different algorithm parameters. The results show that the algorithm can effectively recognize and separate the main structure of the transmission tower under both backgrounds. Although the cloudy background increases the complexity of the image, the algorithm is still able to segment the main structure of the tower, indicating strong robustness and adaptability.
Further, the segmentation results of the algorithm under different lighting conditions are explored. As shown in
Figure 15, image segmentation experiments were conducted on transmission tower images under unlit and lit conditions.
Figure 15a shows an image of a transmission tower with an unlit background and
Figure 15b shows an image with a lit background. Applying the segmentation algorithm with different parameters yields the final results in Figure 15c,d. The results show that the algorithm effectively identifies and separates the main structural parts of the transmission tower under both unlit and lit conditions. However, compared to
Figure 15d, the details of the wires in the lower part of the transmission tower in
Figure 15c are not clear enough, possibly because highlights caused by the illumination affect the segmentation results. Although the lighting increases the complexity of the image, the algorithm still clearly segments the main structure of the transmission tower, indicating that it is robust and adaptable.
Finally, the performance of our proposed algorithm is evaluated in fog-free and foggy backgrounds. As shown in
Figure 16, we conducted image segmentation experiments on transmission tower images under fog-free and foggy weather, respectively. Specifically,
Figure 16a shows an image of a transmission tower in a fog-free background and
Figure 16b shows an image in a foggy background. By applying our segmentation algorithm,
Figure 16c,d show the final segmentation results. The results show that the method can effectively recognize and separate the main structure of the transmission tower in both fog-free and foggy backgrounds. However, fog increases the complexity of the image and causes some detailed information to be lost; for example, the thin-line part in
Figure 16d is not as clear as that in
Figure 16c, and the part of the wire on the right side of the original image is lost in
Figure 16d. However, our algorithm is still able to accurately segment the main structure of the transmission tower, which shows its robustness.
Through the above experiments, the proposed algorithm is validated under a variety of shooting viewpoints and environmental conditions. The results show that the algorithm can effectively segment the main body of the transmission tower in both elevation and top views and can still clearly present the upper half of the structure even when the top view introduces noise and poorer segmentation of the lower half of the tower. The algorithm maintains good segmentation performance in both less cloudy and more cloudy backgrounds, and even under lit conditions, despite some flaws in detail, the overall body of the tower can still be accurately extracted. In addition, the algorithm also shows strong robustness in foggy and fog-free environments and can cope with the challenges of low visibility. In summary, the algorithm shows good adaptability and practicability in complex scenes and provides a reference for the practical application of transmission tower image segmentation.