DCMC-UNet: A Novel Segmentation Model for Carbon Traces in Oil-Immersed Transformers Improved with Dynamic Feature Fusion and Adaptive Illumination Enhancement
Abstract
1. Introduction
2. Transformer Internal Inspection Robot
2.1. Transformer Internal Inspection Robot Structure
2.2. Transformer Internal Inspection Robot Operation Terminal
3. Construction of Transformer Discharge Carbon Trace Defect Dataset and Image Preprocessing
3.1. Transformer Discharge Carbon Trace Acquisition Experimental Platform
- The needle–plate model (structure shown in Figure 3):
- Experimental equipment:
3.2. Luminance Contrast Adaptive Enhancement Algorithm
3.3. Verification of the Enhancement Effect on Transformer Internal Carbon Trace Defect Images
4. Improved U-Net-Based Discharge Carbon Trace Defect Segmentation Algorithm with Dynamic Feature Fusion
4.1. Enhanced Encoder–Decoder Architecture with Dynamic Feature Capture Mechanism
4.2. Enhanced Skip Connections and Neck Network with Multi-Level Context Fusion
4.3. Loss Function
5. Validation of Model Performance Improvement and Result Analysis
5.1. Environment and Configuration
5.2. Performance Evaluation Metrics
5.3. Analysis of the Impact of Image Enhancement on Defect Segmentation Performance
5.4. Ablation Experiment
- Baseline Model (U-Net): The original structure without any enhancements.
- +DDE: Incorporates the deformable encoder (a minimal sketch of such a block follows this list).
- +DFCM: Integrates both the deformable encoder and edge-aware decoder.
- +DFCM+CLFC: Further replaces skip connections with the cross-level attention fusion layer.
- DCMC-UNet (improved model): Adds the MAFA module to the neck, completing all proposed improvements.
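For readers reproducing the +DDE step, the sketch below shows one way to build a deformable encoder block in PyTorch using torchvision's `DeformConv2d`. It illustrates the general technique only: the offset head, channel widths, and normalization are illustrative assumptions, not the paper's exact DDE design.

```python
# Minimal sketch of a deformable encoder block (illustrative, not the paper's DDE).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableEncoderBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # A plain conv predicts per-location sampling offsets:
        # 2 values (dx, dy) per tap of the 3x3 kernel -> 18 channels.
        self.offset_head = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offset = self.offset_head(x)  # (N, 18, H, W)
        return self.act(self.bn(self.deform_conv(x, offset)))

if __name__ == "__main__":
    block = DeformableEncoderBlock(64, 128)
    y = block(torch.randn(1, 64, 128, 128))
    print(y.shape)  # torch.Size([1, 128, 128, 128])
```

The learned offsets let the 3×3 kernel sample off the regular grid, which is what makes deformable convolution attractive for the irregular contours of discharge carbon traces.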
- Compared to the baseline U-Net, the +DDE model significantly improves all metrics, validating the enhanced feature extraction capability of the deformable encoder.
- The +DFCM model (adding the EAD) further boosts performance across all indicators.
- Introducing the CLFC layer (+DFCM+CLFC) leads to additional gains.
- The complete DCMC-UNet (with MAFA) achieves the best results on all metrics, confirming the effectiveness of the proposed architecture.
5.5. Real-Time Performance Verification
5.6. Comparative Experiments with Different Models
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Table: Image-quality comparison between CLAHE and the proposed luminance contrast adaptive enhancement.

| Algorithm | PSNR (dB) | MSE | SSIM | VIF |
|---|---|---|---|---|
| CLAHE | 27.57 | 113.63 | 0.74 | 0.93 |
| Proposed enhancement | 28.45 | 92.95 | 0.86 | 0.94 |
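The full-reference quality metrics in the table can be computed with scikit-image, as in the minimal sketch below. VIF is omitted because scikit-image does not provide it; the file paths are placeholders and 8-bit RGB inputs are assumed. This is not the paper's evaluation code.

```python
# Minimal sketch: PSNR, MSE, and SSIM between a reference image and an
# enhanced result (assumes 8-bit RGB inputs; file paths are placeholders).
import numpy as np
from skimage import io
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

reference = io.imread("reference.png")  # placeholder path, uint8 RGB
enhanced = io.imread("enhanced.png")    # placeholder path, uint8 RGB

mse = mean_squared_error(reference.astype(np.float64),
                         enhanced.astype(np.float64))
psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced,
                             channel_axis=-1, data_range=255)

print(f"MSE:  {mse:.2f}")
print(f"PSNR: {psnr:.2f} dB")
print(f"SSIM: {ssim:.3f}")
```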
Table: Segmentation performance under different atrous dilation-rate combinations.

| Dilation rates | Accuracy | Precision | mIoU | Dice |
|---|---|---|---|---|
| Base | 89.02% | 74.45% | 64.51% | 76.01% |
| [1, 2, 3, 4] | 88.03% | 76.34% | 48.41% | 62.70% |
| [1, 2, 9, 13] | 90.40% | 72.95% | 63.25% | 75.14% |
| [1, 3, 6, 12] | 89.28% | 75.32% | 64.54% | 75.69% |
| [1, 6, 12, 18] | 89.62% | 77.22% | 66.02% | 76.84% |
| [1, 6, 12, 24] | 90.65% | 76.94% | 63.66% | 76.25% |
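The best-performing combination, [1, 6, 12, 18], matches the rates typical of ASPP-style multi-rate atrous fusion. As a point of reference, a minimal sketch of such a module follows; the paper's actual MAFA design is not reproduced here, and the branch structure, channel widths, and 1×1 fusion convolution are assumptions.

```python
# Minimal ASPP-style sketch of multi-rate atrous fusion (illustrative only).
import torch
import torch.nn as nn

class MultiRateAtrousFusion(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 atrous branch per dilation rate; padding == dilation
        # keeps the spatial size unchanged so branches can be concatenated.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    m = MultiRateAtrousFusion(256, 256)
    print(m(torch.randn(1, 256, 32, 32)).shape)  # torch.Size([1, 256, 32, 32])
```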
Table: Ablation results on the carbon-trace dataset.

| Model | Accuracy | Precision | mIoU | Dice |
|---|---|---|---|---|
| U-Net | 90.15% | 74.71% | 64.60% | 76.07% |
| +DDE | 93.37% | 82.05% | 70.28% | 80.61% |
| +DFCM | 94.35% | 83.63% | 74.83% | 83.63% |
| +DFCM+CLFC | 95.10% | 82.95% | 76.98% | 85.37% |
| DCMC-UNet | 95.92% | 85.68% | 78.64% | 86.94% |
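All four reported metrics follow standard confusion-matrix definitions. The sketch below computes them for binary carbon-trace masks, treating foreground and background as the two classes for mIoU; it mirrors common practice rather than code released with the paper.

```python
# Minimal sketch of the four segmentation metrics from binary masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)     # carbon-trace pixels correctly found
    tn = np.sum(~pred & ~gt)   # background pixels correctly rejected
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    iou_fg = tp / (tp + fp + fn + eps)   # foreground IoU
    iou_bg = tn / (tn + fp + fn + eps)   # background IoU
    miou = (iou_fg + iou_bg) / 2         # mean over the two classes
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return accuracy, precision, miou, dice

acc, p, miou, dice = segmentation_metrics(
    np.random.rand(256, 256) > 0.5, np.random.rand(256, 256) > 0.5
)
print(f"Acc {acc:.3f}  P {p:.3f}  mIoU {miou:.3f}  Dice {dice:.3f}")
```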
Table: Real-time performance comparison.

| Model | Inference time (s) | FPS | GFLOPs | Params (M) |
|---|---|---|---|---|
| U-Net | 0.020 | 49.86 | 31.40 | 17.26 |
| DCMC-UNet | 0.013 | 76.31 | 34.68 | 20.15 |
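Latency and FPS figures of this kind are usually obtained by timing repeated forward passes after a warm-up, with explicit CUDA synchronization so queued kernels are included in the measurement. The sketch below shows one common routine; the input resolution is a placeholder assumption, not the paper's stated setting.

```python
# Minimal sketch of a latency/FPS benchmark for any nn.Module.
import time
import torch

@torch.no_grad()
def benchmark(model, input_shape=(1, 3, 512, 512), iters=100, warmup=10):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    for _ in range(warmup):           # warm-up passes excluded from timing
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()      # ensure queued GPU work has finished
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters
    return elapsed, 1.0 / elapsed     # seconds per image, frames per second
```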
Table: Comparison with other segmentation models.

| Model | Accuracy (%) | Precision (%) | mIoU (%) | Dice (%) |
|---|---|---|---|---|
| DeepLabV3+ | 86.99 | 82.93 | 58.90 | 71.79 |
| PSPNet | 89.21 | 83.93 | 67.29 | 78.35 |
| U2-Net | 89.58 | 71.86 | 65.18 | 76.11 |
| U-Net++ | 91.35 | 77.94 | 65.58 | 77.13 |
| UNet 3+ | 89.77 | 67.69 | 65.00 | 76.12 |
| TransUNet | 88.58 | 63.97 | 61.19 | 72.93 |
| SegFormer | 89.54 | 73.38 | 54.57 | 68.36 |
| DCMC-UNet | 95.92 | 85.68 | 78.64 | 86.94 |