Non-Contrast Brain CT Images Segmentation Enhancement: Lightweight Pre-Processing Model for Ultra-Early Ischemic Lesion Recognition and Segmentation
Abstract
1. Introduction
- We propose a lightweight combined neural network approach for preprocessing non-contrast computed tomography (NCCT) brain images to enhance the segmentation quality of the ischemic core and penumbra regions.
- We also introduce a lightweight, trainable image filtering module based on a learnable linear combination of convolutions with pretrained kernels of multiple sizes, integrated into a biomedical image preprocessing pipeline to improve downstream segmentation performance.
2. Problem Statement
3. Materials and Methods
3.1. Image Preprocessing Module
- The filtered image is produced as a learnable linear combination of convolutions, $\tilde{I} = \sum_{i=1}^{n} \alpha_i \,(K_i * I)$, where:
- $K_i$ are the pretrained kernels from the UNet’s intermediate layers;
- $\alpha_i$ are trainable coefficients with $\sum_{i=1}^{n} \alpha_i = 1$ (normalized).
- Three representative kernel sizes ($5 \times 5$, $7 \times 7$, and $11 \times 11$) were selected based on inter-slice spacing in CT volumes;
- The final combination weights were optimized jointly with the segmentation network (a minimal sketch follows this list).
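A minimal PyTorch sketch of this block, assuming softmax normalization of the coefficients; the random kernel values below are placeholders for the pretrained U-Net kernels used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableKernelCombination(nn.Module):
    """Learnable, normalized linear combination of fixed convolution kernels.

    The kernels are frozen (pretrained elsewhere); only the mixing
    coefficients are trained, jointly with the downstream segmenter.
    """

    def __init__(self, kernels):
        # kernels: list of square 2D tensors, e.g. shapes (5,5), (7,7), (11,11)
        super().__init__()
        self.kernels = nn.ParameterList(
            [nn.Parameter(k.reshape(1, 1, *k.shape), requires_grad=False)
             for k in kernels]
        )
        # one trainable logit per kernel; softmax keeps the weights normalized
        self.logits = nn.Parameter(torch.zeros(len(kernels)))

    def forward(self, x):  # x: (B, 1, H, W) NCCT slice
        alphas = torch.softmax(self.logits, dim=0)  # sum(alphas) == 1
        out = torch.zeros_like(x)
        for a, k in zip(alphas, self.kernels):
            pad = k.shape[-1] // 2  # 'same' padding preserves spatial size
            out = out + a * F.conv2d(x, k, padding=pad)
        return out

# Usage with the three representative sizes from the paper; random kernels
# stand in for the pretrained ones.
block = LearnableKernelCombination([torch.randn(s, s) for s in (5, 7, 11)])
y = block(torch.randn(2, 1, 256, 256))  # -> (2, 1, 256, 256)
```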
3.2. Image Segmentation Module
3.3. Loss Function
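Given the Dice-oriented segmentation losses cited in the reference list (Zhao et al.; Sudre et al.), a minimal soft Dice loss in PyTorch; this is an illustration of the standard form, not necessarily the authors' exact formulation:

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for binary segmentation.

    logits: (B, 1, H, W) raw network outputs; target: (B, 1, H, W) in {0, 1}.
    """
    probs = torch.sigmoid(logits)
    num = 2.0 * (probs * target).sum(dim=(1, 2, 3)) + eps
    den = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) + eps
    return 1.0 - (num / den).mean()
```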
3.4. Complete Pipeline Architecture Specification: Preprocessor + Segmenter
3.4.1. Lightweight Preprocessing Module
3.4.2. Downstream Segmenter: SwinUNet
3.4.3. End-to-End Pipeline Performance
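For orientation, a minimal sketch of how the preprocessor and segmenter compose end-to-end; `segmenter` is a stand-in for SwinUNet (any backbone from the comparison tables would fit the same interface):

```python
import torch.nn as nn

class EnhancedSegmentationPipeline(nn.Module):
    """Preprocessor + segmenter, trained jointly end-to-end."""

    def __init__(self, preprocessor: nn.Module, segmenter: nn.Module):
        super().__init__()
        self.preprocessor = preprocessor  # lightweight filtering module
        self.segmenter = segmenter        # e.g., SwinUNet

    def forward(self, x):
        # the enhanced image feeds straight into the segmenter, so the
        # segmentation loss gradients also update the preprocessor weights
        return self.segmenter(self.preprocessor(x))
```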
3.4.4. Key Efficiency and Clinical Insights
4. Experiments, Results and Discussion
4.1. Datasets Description
4.2. Evaluation Metrics
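The result tables below report Dice and mIoU separately for penumbra (P) and core (C), both slice-wise (2D) and volume-wise (3D). A minimal sketch of the two overlap metrics on binary masks, using their standard definitions (an illustration, not the authors' evaluation code):

```python
import torch

def dice_and_iou(pred: torch.Tensor, target: torch.Tensor,
                 eps: float = 1e-6):
    """Dice coefficient and IoU for one binary mask pair.

    pred, target: boolean tensors of equal shape (a 2D slice or a 3D volume).
    """
    inter = (pred & target).sum().float()
    p, t = pred.sum().float(), target.sum().float()
    dice = (2.0 * inter + eps) / (p + t + eps)
    iou = (inter + eps) / (p + t - inter + eps)
    return dice.item(), iou.item()
```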
4.3. Implementation Details
4.4. Evaluation Results and Analysis
4.4.1. Customized Block Ablation Study
4.4.2. Complete Segmentation Pipeline Performance
4.5. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- The Top 10 Causes of Death. 2024. Available online: https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death (accessed on 21 June 2025).
- Cardiovascular Diseases. 2024. Available online: https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds) (accessed on 21 June 2025).
- Grover, V.P.; Tognarelli, J.M.; Crossey, M.M.; Cox, I.J.; Taylor-Robinson, S.D.; McPhail, M.J. Magnetic Resonance Imaging: Principles and Techniques: Lessons for Clinicians. J. Clin. Exp. Hepatol. 2015, 5, 246–255. [Google Scholar] [CrossRef]
- Hassan, R.; Sharis, S.; Mukari, S.A.; Hashim, H.; Sobri, M. Non-contrast Computed Tomography in Acute Ischaemic Stroke: A Pictorial Review. Med. J. Malays. 2013, 68, 93–100. [Google Scholar]
- Tu, X.; Li, X.; Zhu, H.; Kuang, X.; Si, X.; Zou, S.; Hao, S.; Huang, Y.; Xiao, J. Unilateral cerebral ischemia induces morphological changes in the layer V projection neurons of the contralateral hemisphere. Neurosci. Res. 2022, 182, 41–51. [Google Scholar] [CrossRef]
- Li, D.; Dharmawan, D.A.; Ng, B.P.; Rahardja, S. Residual U-Net for Retinal Vessel Segmentation. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 1425–1429. [Google Scholar] [CrossRef]
- Beheshti, N.; Johnsson, L. Squeeze U-Net: A Memory and Energy Efficient Image Segmentation Network. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1495–1504. [Google Scholar] [CrossRef]
- Siddique, N.; Sidike, P.; Reyes, A.; Alom, M.Z.; Devabhaktuni, V. Fractal, recurrent, and dense U-Net architectures with EfficientNet encoder for medical image segmentation. J. Med. Imaging 2022, 9, 064004. [Google Scholar] [CrossRef]
- Mu, N.; Lyu, Z.; Rezaeitaleshmahalleh, M.; Tang, J.; Jiang, J. An attention residual u-net with differential preprocessing and geometric postprocessing: Learning how to segment vasculature including intracranial aneurysms. Med. Image Anal. 2023, 84, 102697. [Google Scholar] [CrossRef]
- Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y.; et al. TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med. Image Anal. 2024, 97, 103280. [Google Scholar] [CrossRef]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. arXiv 2021, arXiv:2105.05537. [Google Scholar] [CrossRef]
- Valanarasu, J.M.J.; Oza, P.; Hacihaliloglu, I.; Patel, V.M. Medical Transformer: Gated Axial-Attention for Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Proceedings of the 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part I; Springer: Cham, Switzerland, 2021; pp. 36–46. [Google Scholar]
- Zheng, S.; Lu, J.; Zhao, H.; Zhu, X.; Luo, Z.; Wang, Y.; Fu, Y.; Feng, J.; Xiang, T.; Torr, P.H.; et al. Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6877–6886. [Google Scholar] [CrossRef]
- Hatamizadeh, A.; Yang, D.; Roth, H.R.; Xu, D. UNETR: Transformers for 3D Medical Image Segmentation. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 1748–1758. [Google Scholar]
- Ranftl, R.; Bochkovskiy, A.; Koltun, V. Vision Transformers for Dense Prediction. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 12159–12168. [Google Scholar] [CrossRef]
- Zhang, Y.; Liu, H.; Hu, Q. TransFuse: Fusing Transformers and CNNs for Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021, Proceedings of the 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part I; de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C., Eds.; Springer: Cham, Switzerland, 2021; pp. 14–24. [Google Scholar]
- Cheng, B.; Schwing, A.G.; Kirillov, A. Per-Pixel Classification is Not All You Need for Semantic Segmentation. Adv. Neural Inf. Process. Syst. 2021, 34, 17864–17875. [Google Scholar]
- Strudel, R.; Garcia, R.; Laptev, I.; Schmid, C. Segmenter: Transformer for Semantic Segmentation. arXiv 2021, arXiv:2105.05633. [Google Scholar] [CrossRef]
- Chen, B.; Liu, Y.; Zhang, Z.; Lu, G.; Kong, A.W.K. TransAttUnet: Multi-Level Attention-Guided U-Net with Transformer for Medical Image Segmentation. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 55–68. [Google Scholar] [CrossRef]
- Donnan, G.; Baron, J.C.; Davis, S.; Sharp, F. The ischemic penumbra: Overview, definition, and criteria. In The Ischemic Penumbra; CRC Press: Boca Raton, FL, USA, 2007; pp. 7–20. [Google Scholar]
- Nael, K.; Tadayon, E.; Wheelwright, D.; Metry, A.; Fifi, J.; Tuhrim, S.; De Leacy, R.; Doshi, A.; Chang, H.; Mocco, J. Defining Ischemic Core in Acute Ischemic Stroke Using CT Perfusion: A Multiparametric Bayesian-Based Model. Am. J. Neuroradiol. 2019, 40, 1491–1497. [Google Scholar] [CrossRef]
- Yoo, R.E.; Choi, S.H. Deep learning-based image enhancement techniques for fast MRI in neuroimaging. Magn. Reson. Med. Sci. 2024, 23, 341–351. [Google Scholar] [CrossRef] [PubMed]
- Ye, Z.; Luo, S.; Wang, L. Deep Learning Based Cystoscopy Image Enhancement. J. Endourol. 2024, 38, 962–968. [Google Scholar] [CrossRef] [PubMed]
- Barrett, J.; Keat, N. Artifacts in CT: Recognition and Avoidance. Radiographics 2004, 24, 1679–1691. [Google Scholar] [CrossRef]
- Al-Shakhrah, I.; Al-Obaidi, T. Common artifacts in computerized tomography: A review. Appl. Radiol. 2003, 32, 25–30. [Google Scholar]
- Zohair, A.A.; Shamil, A.A.; Sulong, G. Latest methods of image enhancement and restoration for computed tomography: A concise review. Appl. Med. Inform. 2015, 36, 1–12. [Google Scholar]
- Park, J.; Lee, J.Y.; Yoo, D.; Kweon, I.S. Distort-and-recover: Color enhancement using deep reinforcement learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5928–5936. [Google Scholar]
- Chen, Y.S.; Wang, Y.C.; Kao, M.H.; Chuang, Y.Y. Deep photo enhancer: Unpaired learning for image enhancement from photographs with gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6306–6314. [Google Scholar]
- Hu, Y.; He, H.; Xu, C.; Wang, B.; Lin, S. Exposure: A white-box photo post-processing framework. ACM Trans. Graph. (TOG) 2018, 37, 1–17. [Google Scholar] [CrossRef]
- Shan, C.; Zhang, Z.; Chen, Z. A coarse-to-fine framework for learned color enhancement with non-local attention. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 949–953. [Google Scholar]
- Yang, L.; Yao, S.; Chen, P.; Shen, M.; Fu, S.; Xing, J.; Xue, Y.; Chen, X.; Wen, X.; Zhao, Y.; et al. Unpaired fundus image enhancement based on constrained generative adversarial networks. J. Biophotonics 2024, 17, e202400168. [Google Scholar] [CrossRef]
- Tatanov, O.; Samarin, A. LFIEM: Lightweight Filter-based Image Enhancement Model. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 873–878. [Google Scholar]
- Samarin, A.; Nazarenko, A.; Savelev, A.; Toropov, A.; Dzestelova, A.; Mikhailova, E.; Motyko, A.; Malykh, V. A Model Based on Universal Filters for Image Color Correction. Pattern Recognit. Image Anal. 2024, 34, 844–854. [Google Scholar] [CrossRef]
- Samarin, A.; Nazarenko, A.; Toropov, A.; Kotenko, E.; Dzestelova, A.; Mikhailova, E.; Malykh, V.; Savelev, A.; Motyko, A. Universal Filter-Based Lightweight Image Enhancement Model with Unpaired Learning Mode. In Proceedings of the 2024 36th Conference of Open Innovations Association (FRUCT), Lappeenranta, Finland, 30 October–1 November 2024; pp. 711–720. [Google Scholar]
- Kosugi, S.; Yamasaki, T. Unpaired Image Enhancement Featuring Reinforcement-Learning-Controlled Image Editing Software. Proc. AAAI Conf. Artif. Intell. 2020, 34, 11296–11303. [Google Scholar] [CrossRef]
- Wang, Y.; Xu, T.; Fan, Z.; Xue, T.; Gu, J. AdaptiveISP: Learning an Adaptive Image Signal Processor for Object Detection. In Advances in Neural Information Processing Systems 37 (NeurIPS 2024); Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., Zhang, C., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2024; Volume 37, pp. 112598–112623. [Google Scholar]
- Umerenkov, D.; Kudin, S.; Peksheva, M.; Pavlov, D. CPAISD: Core-penumbra acute ischemic stroke dataset. arXiv 2024, arXiv:2404.02518. [Google Scholar] [CrossRef]
- Mohamed, A.; Rabea, M.; Sameh, A.; Kamal, E. Brain Tumor Radiogenomic Classification. arXiv 2024, arXiv:2401.09471. [Google Scholar] [CrossRef]
- Chilamkurthy, S.; Ghosh, R.; Tanamala, S.; Biviji, M.; Campeau, N.G.; Venugopal, V.K.; Mahajan, V.; Rao, P.; Warier, P. Development and Validation of Deep Learning Algorithms for Detection of Critical Findings in Head CT Scans. arXiv 2018, arXiv:1803.05854. [Google Scholar] [CrossRef]
- Wang, Y.; Xu, T.; Fan, Z.; Xue, T.; Gu, J. AdaptiveISP: Learning an Adaptive Image Signal Processor for Object Detection. Adv. Neural Inf. Process. Syst. 2025, 37, 112598–112623. [Google Scholar]
- He, J.; Liu, Y.; Qiao, Y.; Dong, C. Conditional sequential modulation for efficient global image retouching. In Computer Vision—ECCV 2020, Proceedings of the 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XIII 16; Springer: Cham, Switzerland, 2020; pp. 679–695. [Google Scholar]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.I.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef] [PubMed]
- Narayan, S. The generalized sigmoid activation function: Competitive supervised learning. Inf. Sci. 1997, 99, 69–82. [Google Scholar] [CrossRef]
- Adnan, F.H.; Mahmud, M.F.O.; Abdullah, W.F.H. Hyperbolic tangent activation function integrated circuit implementation for perceptrons. In Proceedings of the 2012 IEEE Student Conference on Research and Development (SCOReD), Pulau Pinang, Malaysia, 5–6 December 2012; pp. 84–87. [Google Scholar] [CrossRef]
- Apicella, A.; Donnarumma, F.; Isgrò, F.; Prevete, R. A survey on modern trainable activation functions. Neural Netw. 2020, 138, 14–32. [Google Scholar] [CrossRef]
- UmaMaheswaran, S.; Ahmad, F.; Hegde, R.; Alwakeel, A.M.; Rameem Zahra, S. Enhanced non-contrast computed tomography images for early acute stroke detection using machine learning approach. Expert Syst. Appl. 2024, 240, 122559. [Google Scholar] [CrossRef]
- Hayat, M.; Gupta, M.; Suanpang, P.; Nanthaamornphong, A. Super-Resolution Methods for Endoscopic Imaging: A Review. In Proceedings of the 2024 12th International Conference on Internet of Everything, Microwave, Embedded, Communication and Networks (IEMECON), Jaipur, India, 24–26 October 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Ding, F.; Shi, Y.; Zhu, G.; Shi, Y. Real-time estimation for the parameters of Gaussian filtering via deep learning. J. Real-Time Image Process. 2020, 17, 17–27. [Google Scholar] [CrossRef]
- Chernov, A. On using Gaussian functions with varied parameters for approximation of functions of one variable on a finite segment. Vestn. Udmurt. Univ. Mat. Mekhanika Komp’yuternye Nauk. 2017, 27, 267–282. [Google Scholar] [CrossRef]
- Lee, D.; In, J.; Lee, S. Standard deviation and standard error of the mean. Korean J. Anesthesiol. 2015, 68, 220–223. [Google Scholar] [CrossRef]
- Villar, S.; Torcida, S.; Acosta, G. Median Filtering: A New Insight. J. Math. Imaging Vis. 2017, 58, 130–146. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar] [CrossRef]
- Britto, A.; Chinnasamy, M. Deep learning with fine tuning strategy for Image segmentation and classification. J. Xidian Univ. 2023, 14, 1107–1113. [Google Scholar] [CrossRef]
- Xie, W.; Willems, N.; Patil, S.; Li, Y.; Kumar, M. SAM Fewshot Finetuning for Anatomical Segmentation in Medical Images. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2024; pp. 3241–3249. [Google Scholar] [CrossRef]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
- Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Proceedings of the 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 20 September 2018; Proceedings 4; Springer: Cham, Switzerland, 2018; pp. 3–11. [Google Scholar]
- Li, H.; Xiong, P.; An, J.; Wang, L. Pyramid attention network for semantic segmentation. arXiv 2018, arXiv:1805.10180. [Google Scholar] [CrossRef]
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090. [Google Scholar]
- Zhao, R.; Qian, B.; Zhang, X.; Li, Y.; Wei, R.; Liu, Y.; Pan, Y. Rethinking Dice Loss for Medical Image Segmentation. In Proceedings of the 2020 IEEE International Conference on Data Mining (ICDM), Sorrento, Italy, 17–20 November 2020; pp. 851–860. [Google Scholar] [CrossRef]
- Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Jorge Cardoso, M. Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Proceedings of the Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, 14 September 2017; Proceedings; Cardoso, M.J., Arbel, T., Carneiro, G., Syeda-Mahmood, T., Tavares, J.M.R., Moradi, M., Bradley, A., Greenspan, H., Papa, J.P., Madabhushi, A., et al., Eds.; Springer: Cham, Switzerland, 2017; pp. 240–248. [Google Scholar]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Kitware Medical DICOM Anonymizer. 2024. Available online: https://github.com/KitwareMedical/dicom-anonymizer (accessed on 21 June 2025).
- Mumuni, A.; Mumuni, F. Data augmentation: A comprehensive survey of modern approaches. Array 2022, 16, 100258. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Sheng, H.; Cai, S.; Zhao, N.; Deng, B.; Huang, J.; Hua, X.S.; Zhao, M.J.; Lee, G. Rethinking IoU-based Optimization for Single-stage 3D Object Detection. In Computer Vision—ECCV 2022, Proceedings of the 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Proceedings, Part IX; Springer: Cham, Switzerland, 2022; pp. 544–561. [Google Scholar] [CrossRef]
- Wang, H.; Cong, Y.; Litany, O.; Gao, Y.; Guibas, L. 3DIoUMatch: Leveraging IoU Prediction for Semi-Supervised 3D Object Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14610–14619. [Google Scholar] [CrossRef]
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression. arXiv 2019, arXiv:1902.09630. [Google Scholar] [CrossRef]
- Stodt, J.; Reich, C.; Clarke, N. Unified Intersection Over Union for Explainable Artificial Intelligence. In Intelligent Systems and Applications, Proceedings of the 2023 Intelligent Systems Conference (IntelliSys) Volume 2, Amsterdam, The Netherlands, 7–8 September 2024; Springer: Cham, Switzerland, 2024; pp. 758–770. [Google Scholar] [CrossRef]
- Taha, A.A.; Hanbury, A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med. Imaging 2015, 15, 29. [Google Scholar] [CrossRef]
- Müller, D.; Soto-Rey, I.; Kramer, F. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res. Notes 2022, 15, 210. [Google Scholar] [CrossRef]
- Bertels, J.; Eelbode, T.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimizing the Dice Score and Jaccard Index for Medical Image Segmentation: Theory & Practice. arXiv 2019, arXiv:1911.01685. [Google Scholar] [CrossRef]
- Rainio, O.; Klén, R. Modified Dice Coefficients for Evaluation of Tumor Segmentation from PET Images: A Proof-of-Concept Study. J. Imaging Inform. Med. 2025, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Parmar, J.; Satheesh, S.; Patwary, M.; Shoeybi, M.; Catanzaro, B. Reuse, Don’t Retrain: A Recipe for Continued Pretraining of Language Models. arXiv 2024, arXiv:2407.07263. [Google Scholar] [CrossRef]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar] [CrossRef]
- Tian, Q.; Wang, Z.; Cui, X. Improved Unet brain tumor image segmentation based on GSConv module and ECA attention mechanism. Appl. Comput. Eng. 2024, 88, 214–223. [Google Scholar] [CrossRef]
- Larson, P.B.; Zapletal, J. Discontinuous homomorphisms, selectors and automorphisms of the complex field. arXiv 2020, arXiv:1803.02740. [Google Scholar] [CrossRef]
- Lauande, M.G.M.; Braz Junior, G.; de Almeida, J.D.S.; Silva, A.C.; Gil da Costa, R.M.; Teles, A.M.; da Silva, L.L.; Brito, H.O.; Vidal, F.C.B.; do Vale, J.G.A.; et al. Building a DenseNet-Based Neural Network with Transformer and MBConv Blocks for Penile Cancer Classification. Appl. Sci. 2024, 14, 10536. [Google Scholar] [CrossRef]
- Rai, K.; Hojatpanah, F.; Ajaei, F.; Grolinger, K. Deep Learning for High-Impedance Fault Detection: Convolutional Autoencoders. Energies 2021, 14, 3623. [Google Scholar] [CrossRef]
- Samarin, A.; Toropov, A.; Egorova, O. Self-Attention Based Approach to Iris Segmentation. In Proceedings of the 2025 International Russian Smart Industry Conference (SmartIndustryCon), Sochi, Russia, 24–28 March 2025; pp. 200–205. [Google Scholar] [CrossRef]
- Samarin, A.; Savelev, A.; Toropov, A.; Dzestelova, A.; Malykh, V.; Mikhailova, E.; Motyko, A. Prior Segmentation and Attention Based Approach to Neoplasms Recognition by Single-Channel Monochrome Computer Tomography Snapshots. In Pattern Recognition, Computer Vision, and Image Processing, Proceedings of the ICPR 2022 International Workshops and Challenges, Montreal, QC, Canada, 21–25 August 2022; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2022; pp. 561–570. [Google Scholar] [CrossRef]
- Rais, K.; Amroune, M.; Benmachiche, A.; Haouam, M.Y. Exploring Variational Autoencoders for Medical Image Generation: A Comprehensive Study. arXiv 2024, arXiv:2411.07348. [Google Scholar] [CrossRef]
- Jha, D.; Riegler, M.A.; Johansen, D.; Halvorsen, P.; Johansen, H.D. DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation. arXiv 2020, arXiv:2006.04868. [Google Scholar] [CrossRef]
- Jose, J.M.; Sindagi, V.; Hacihaliloglu, I.; Patel, V.M. KiU-Net: Towards Accurate Segmentation of Biomedical Images using Over-complete Representations. arXiv 2020, arXiv:2006.04878. [Google Scholar] [CrossRef]
- Gui, H.; Wang, R.; Yin, K.; Jin, L.; Kula, M.; Xu, T.; Hong, L.; Chi, E.H. Hiformer: Heterogeneous Feature Interactions Learning with Transformers for Recommender Systems. arXiv 2023, arXiv:2311.05884. [Google Scholar] [CrossRef]
Rank | Backbone | Depth | Skip Kernels | Params (M) | Dice (P) | Dice (C) | mIoU (P) | mIoU (C) |
---|---|---|---|---|---|---|---|---
1 | U-Net [79] | 5 | 5 × 5 + 7 × 7 + 11 × 11 | 34.2 | 0.618 | 0.602 | 0.501 | 0.487 |
2 | U-Net | 5 | 3 × 3 + 5 × 5 + 9 × 9 | 34.0 | 0.615 | 0.598 | 0.497 | 0.483 |
3 | ResAE-U-Net [80] | 5 | 5 × 5 + 7 × 7 + 11 × 11 | 36.7 | 0.613 | 0.596 | 0.495 | 0.481 |
4 | U-Net | 4 | 3 × 3 + 5 × 5 + 7 × 7 | 22.9 | 0.611 | 0.594 | 0.493 | 0.479 |
5 | U-Net + SE | 5 | 5 × 5 + 9 × 9 | 35.3 | 0.609 | 0.592 | 0.491 | 0.477 |
6 | DenseAE-U-Net [81] | 5 | 3 × 3 + 7 × 7 + 11 × 11 | 40.0 | 0.607 | 0.590 | 0.489 | 0.475 |
7 | U-Net | 5 | 3 × 3 + 5 × 5 | 33.8 | 0.605 | 0.588 | 0.487 | 0.473 |
8 | U-Net | 4 | 5 × 5 + 7 × 7 | 22.8 | 0.603 | 0.586 | 0.485 | 0.471 |
9 | CAE (sym) [82] | 5 | 5 × 5 + 7 × 7 + 11 × 11 | 28.5 | 0.601 | 0.584 | 0.483 | 0.469 |
10 | U-Net | 3 | 3 × 3 + 5 × 5 + 7 × 7 | 12.6 | 0.599 | 0.582 | 0.481 | 0.467 |
11 | VAE | 5 | 7 × 7 + 11 × 11 | 29.3 | 0.597 | 0.580 | 0.479 | 0.465 |
12 | U-Net + Attn [79,83,84] | 5 | 3 × 3 + 5 × 5 + 9 × 9 | 36.1 | 0.595 | 0.578 | 0.477 | 0.463 |
13 | ResAE (sym) | 5 | 3 × 3 + 7 × 7 | 31.7 | 0.593 | 0.576 | 0.475 | 0.461 |
14 | U-Net | 5 | 7 × 7 only | 33.7 | 0.591 | 0.574 | 0.473 | 0.459 |
15 | DenseAE (sym) | 5 | 5 × 5 + 9 × 9 | 33.8 | 0.589 | 0.572 | 0.471 | 0.457 |
16 | U-Net | 4 | 3 × 3 + 11 × 11 | 22.7 | 0.587 | 0.570 | 0.469 | 0.455 |
17 | CAE (asym) | 5 | 5 × 5 + 7 × 7 | 26.9 | 0.585 | 0.568 | 0.467 | 0.453 |
18 | U-Net + GN | 4 | 3 × 3 + 5 × 5 + 9 × 9 | 22.9 | 0.583 | 0.566 | 0.465 | 0.451 |
19 | VAE + L1 | 5 | 5 × 5 + 7 × 7 | 29.3 | 0.581 | 0.564 | 0.463 | 0.449 |
20 | ResAE + SE | 5 | 3 × 3 + 5 × 5 + 7 × 7 | 32.2 | 0.579 | 0.562 | 0.461 | 0.447 |
21 | U-Net | 5 | 11 × 11 only | 34.1 | 0.577 | 0.560 | 0.459 | 0.445 |
22 | CAE + SpecNorm | 5 | 3 × 3 + 5 × 5 | 28.5 | 0.575 | 0.558 | 0.457 | 0.443 |
23 | U-Net + Drop | 5 | 5 × 5 + 9 × 9 | 34.4 | 0.573 | 0.556 | 0.455 | 0.441 |
24 | DenseAE + Drop | 5 | 3 × 3 + 7 × 7 | 33.8 | 0.571 | 0.554 | 0.453 | 0.439 |
25 | ResAE (shallow) | 4 | 5 × 5 + 7 × 7 | 21.5 | 0.569 | 0.552 | 0.451 | 0.437 |
26 | CAE (wide) | 4 | 3 × 3 + 5 × 5 + 7 × 7 | 26.1 | 0.567 | 0.550 | 0.449 | 0.435 |
27 | VAE + β = 0.5 [85] | 5 | 3 × 3 + 5 × 5 | 29.3 | 0.565 | 0.548 | 0.447 | 0.433
28 | U-Net + L2 | 5 | 7 × 7 + 9 × 9 | 34.4 | 0.563 | 0.546 | 0.445 | 0.431 |
29 | CAE + ELU | 5 | 5 × 5 only | 28.5 | 0.561 | 0.544 | 0.443 | 0.429 |
30 | ResAE + ELU | 5 | 3 × 3 + 5 × 5 | 31.7 | 0.559 | 0.542 | 0.441 | 0.427 |
Rank | Type | Model | Dice 2D (P/C) | mIoU 2D (P/C) | Dice 3D (P/C) | mIoU 3D (P/C) |
---|---|---|---|---|---|---
1 | CP | UNet++ | 0.624/0.608 | 0.507/0.493 | 0.608/0.592 | 0.491/0.477 |
2 | CP | SwinUNet | 0.623/0.607 | 0.506/0.492 | 0.607/0.591 | 0.490/0.476 |
3 | CP | DoubleUNet | 0.619/0.603 | 0.502/0.488 | 0.605/0.589 | 0.488/0.474 |
4 | CP | TransUNet | 0.618/0.602 | 0.501/0.487 | 0.604/0.588 | 0.487/0.473 |
5 | PP | nnUNet | 0.620/0.604 | 0.503/0.489 | 0.603/0.587 | 0.486/0.472 |
6 | CP | AttnUNet | 0.616/0.600 | 0.499/0.485 | 0.602/0.586 | 0.485/0.471 |
7 | CP | ResUNet++ | 0.615/0.599 | 0.498/0.484 | 0.601/0.585 | 0.484/0.470 |
8 | CP | MultiResUNet | 0.614/0.598 | 0.497/0.483 | 0.600/0.584 | 0.483/0.469 |
9 | PP | UNet3+ | 0.615/0.599 | 0.498/0.484 | 0.599/0.583 | 0.482/0.468 |
10 | CP | KiU-Net | 0.613/0.597 | 0.496/0.482 | 0.598/0.582 | 0.481/0.467 |
11 | CP | FocalUNet | 0.612/0.596 | 0.495/0.481 | 0.597/0.581 | 0.480/0.466 |
12 | PP | CE-Net | 0.612/0.596 | 0.495/0.481 | 0.596/0.580 | 0.479/0.465 |
13 | CP | DenseUNet | 0.611/0.595 | 0.494/0.480 | 0.596/0.580 | 0.479/0.465 |
14 | CP | Inf-Net | 0.610/0.594 | 0.493/0.479 | 0.595/0.579 | 0.478/0.464 |
15 | CP | PraNet | 0.609/0.593 | 0.492/0.478 | 0.594/0.578 | 0.477/0.463 |
16 | Base | UCTransNet | 0.605/0.589 | 0.488/0.474 | 0.590/0.574 | 0.473/0.459 |
17 | PP | MedT | 0.606/0.590 | 0.489/0.475 | 0.591/0.575 | 0.474/0.460 |
18 | Base | MISSU | 0.603/0.587 | 0.486/0.472 | 0.588/0.572 | 0.471/0.457 |
19 | PP | CrossFormer | 0.604/0.588 | 0.487/0.473 | 0.589/0.573 | 0.472/0.458 |
20 | Base | BTSNet | 0.601/0.585 | 0.484/0.470 | 0.586/0.570 | 0.469/0.455 |
21 | CP | CS2-Net | 0.608/0.592 | 0.491/0.477 | 0.593/0.577 | 0.476/0.462 |
22 | CP | HiFormer | 0.607/0.591 | 0.490/0.476 | 0.592/0.576 | 0.475/0.461 |
23 | PP | FAT-Net | 0.605/0.589 | 0.488/0.474 | 0.590/0.574 | 0.473/0.459 |
24 | Base | DC-UNet | 0.602/0.586 | 0.485/0.471 | 0.587/0.571 | 0.470/0.456 |
25 | CP | ColonSegNet | 0.604/0.588 | 0.487/0.473 | 0.589/0.573 | 0.472/0.458 |
26 | PP | TransClaw U-Net | 0.603/0.587 | 0.486/0.472 | 0.588/0.572 | 0.471/0.457 |
27 | Base | MHSA-UNet | 0.600/0.584 | 0.483/0.469 | 0.585/0.569 | 0.468/0.454 |
28 | CP | D2A-UNet | 0.601/0.585 | 0.484/0.470 | 0.586/0.570 | 0.469/0.455 |
29 | PP | SCUNet | 0.599/0.583 | 0.482/0.468 | 0.584/0.568 | 0.467/0.453 |
30 | Base | FSS-UNet | 0.597/0.581 | 0.480/0.466 | 0.582/0.566 | 0.465/0.451 |
31 | CP | 3D-UNet | 0.598/0.582 | 0.481/0.467 | 0.583/0.567 | 0.466/0.452 |
32 | PP | UNeXt | 0.596/0.580 | 0.479/0.465 | 0.581/0.565 | 0.464/0.450 |
33 | Base | PVT-UNet | 0.594/0.578 | 0.477/0.463 | 0.579/0.563 | 0.462/0.448 |
34 | CP | CoBiNet | 0.595/0.579 | 0.478/0.464 | 0.580/0.564 | 0.463/0.449 |
35 | PP | Edge-UNet | 0.593/0.577 | 0.476/0.462 | 0.578/0.562 | 0.461/0.447 |
36 | Base | HFA-UNet | 0.591/0.575 | 0.474/0.460 | 0.576/0.560 | 0.459/0.445 |
37 | CP | TDB-UNet | 0.592/0.576 | 0.475/0.461 | 0.577/0.561 | 0.460/0.446 |
38 | PP | MCGU-Net | 0.590/0.574 | 0.473/0.459 | 0.575/0.559 | 0.458/0.444 |
39 | Base | SCPM-Net | 0.588/0.572 | 0.471/0.457 | 0.573/0.557 | 0.456/0.442 |
40 | CP | DPR-UNet | 0.589/0.573 | 0.472/0.458 | 0.574/0.558 | 0.457/0.443 |
41 | PP | CFM-UNet | 0.587/0.571 | 0.470/0.456 | 0.572/0.556 | 0.455/0.441 |
42 | Base | LGANet | 0.585/0.569 | 0.468/0.454 | 0.570/0.554 | 0.453/0.439 |
43 | CP | VM-UNet | 0.586/0.570 | 0.469/0.455 | 0.571/0.555 | 0.454/0.440 |
44 | PP | DCR-UNet | 0.584/0.568 | 0.467/0.453 | 0.569/0.553 | 0.452/0.438 |
45 | Base | GSU-Net | 0.582/0.566 | 0.465/0.451 | 0.567/0.551 | 0.450/0.436 |
46 | CP | FRT-UNet | 0.583/0.567 | 0.466/0.452 | 0.568/0.552 | 0.451/0.437 |
47 | PP | ADA-UNet | 0.581/0.565 | 0.464/0.450 | 0.566/0.550 | 0.449/0.435 |
48 | Base | PCAC-UNet | 0.579/0.563 | 0.462/0.448 | 0.564/0.548 | 0.447/0.433 |
49 | CP | Sparse-UNet | 0.580/0.564 | 0.463/0.449 | 0.565/0.549 | 0.448/0.434 |
50 | PP | DSR-UNet | 0.578/0.562 | 0.461/0.447 | 0.563/0.547 | 0.446/0.432 |
Rank | Type | Model | Dice 2D (P/C) | mIoU 2D (P/C) | Dice 3D (P/C) | mIoU 3D (P/C) |
---|---|---|---|---|---|---
1 | CP | SwinUNet | 0.712/0.698 | 0.602/0.588 | 0.695/0.681 | 0.585/0.571 |
2 | CP | DoubleUNet | 0.710/0.696 | 0.600/0.586 | 0.693/0.679 | 0.583/0.569 |
3 | PP | nnUNet | 0.709/0.695 | 0.599/0.585 | 0.692/0.678 | 0.582/0.568 |
4 | CP | UNet++ | 0.708/0.694 | 0.598/0.584 | 0.691/0.677 | 0.581/0.567 |
5 | CP | TransUNet | 0.707/0.693 | 0.597/0.583 | 0.690/0.676 | 0.580/0.566 |
6 | PP | UNet3+ | 0.706/0.692 | 0.596/0.582 | 0.689/0.675 | 0.579/0.565 |
7 | CP | AttnUNet | 0.705/0.691 | 0.595/0.581 | 0.688/0.674 | 0.578/0.564 |
8 | CP | ResUNet++ | 0.704/0.690 | 0.594/0.580 | 0.687/0.673 | 0.577/0.563 |
9 | PP | CE-Net | 0.703/0.689 | 0.593/0.579 | 0.686/0.672 | 0.576/0.562 |
10 | CP | MultiResUNet | 0.702/0.688 | 0.592/0.578 | 0.685/0.671 | 0.575/0.561 |
11 | CP | KiU-Net | 0.701/0.687 | 0.591/0.577 | 0.684/0.670 | 0.574/0.560 |
12 | PP | MedT | 0.700/0.686 | 0.590/0.576 | 0.683/0.669 | 0.573/0.559 |
13 | CP | FocalUNet | 0.699/0.685 | 0.589/0.575 | 0.682/0.668 | 0.572/0.558 |
14 | CP | DenseUNet | 0.698/0.684 | 0.588/0.574 | 0.681/0.667 | 0.571/0.557 |
15 | PP | CrossFormer | 0.697/0.683 | 0.587/0.573 | 0.680/0.666 | 0.570/0.556 |
16 | CP | Inf-Net | 0.696/0.682 | 0.586/0.572 | 0.679/0.665 | 0.569/0.555 |
17 | Base | UCTransNet | 0.692/0.678 | 0.582/0.568 | 0.675/0.661 | 0.565/0.551 |
18 | CP | PraNet | 0.695/0.681 | 0.585/0.571 | 0.678/0.664 | 0.568/0.554 |
19 | PP | FAT-Net | 0.694/0.680 | 0.584/0.570 | 0.677/0.663 | 0.567/0.553 |
20 | Base | BTSNet | 0.690/0.676 | 0.580/0.566 | 0.673/0.659 | 0.563/0.549 |
21 | CP | CS2-Net | 0.693/0.679 | 0.583/0.569 | 0.676/0.662 | 0.566/0.552 |
22 | Base | MISSU | 0.689/0.675 | 0.579/0.565 | 0.672/0.658 | 0.562/0.548 |
23 | CP | HiFormer | 0.692/0.678 | 0.582/0.568 | 0.675/0.661 | 0.565/0.551 |
24 | PP | TransClaw U-Net | 0.691/0.677 | 0.581/0.567 | 0.674/0.660 | 0.564/0.550 |
25 | Base | DC-UNet | 0.688/0.674 | 0.578/0.564 | 0.671/0.657 | 0.561/0.547 |
26 | CP | ColonSegNet | 0.690/0.676 | 0.580/0.566 | 0.673/0.659 | 0.563/0.549 |
27 | PP | SCUNet | 0.687/0.673 | 0.577/0.563 | 0.670/0.656 | 0.560/0.546 |
28 | Base | MHSA-UNet | 0.686/0.672 | 0.576/0.562 | 0.669/0.655 | 0.559/0.545 |
29 | CP | D2A-UNet | 0.689/0.675 | 0.579/0.565 | 0.672/0.658 | 0.562/0.548 |
30 | Base | FSS-UNet | 0.685/0.671 | 0.575/0.561 | 0.668/0.654 | 0.558/0.544 |
31 | PP | UNeXt | 0.684/0.670 | 0.574/0.560 | 0.667/0.653 | 0.557/0.543 |
32 | CP | 3D-UNet | 0.688/0.674 | 0.578/0.564 | 0.671/0.657 | 0.561/0.547 |
33 | Base | PVT-UNet | 0.683/0.669 | 0.573/0.559 | 0.666/0.652 | 0.556/0.542 |
34 | PP | Edge-UNet | 0.682/0.668 | 0.572/0.558 | 0.665/0.651 | 0.555/0.541 |
35 | CP | CoBiNet | 0.687/0.673 | 0.577/0.563 | 0.670/0.656 | 0.560/0.546 |
36 | Base | HFA-UNet | 0.681/0.667 | 0.571/0.557 | 0.664/0.650 | 0.554/0.540 |
37 | PP | MCGU-Net | 0.680/0.666 | 0.570/0.556 | 0.663/0.649 | 0.553/0.539 |
38 | CP | TDB-UNet | 0.686/0.672 | 0.576/0.562 | 0.669/0.655 | 0.559/0.545 |
39 | Base | SCPM-Net | 0.679/0.665 | 0.569/0.555 | 0.662/0.648 | 0.552/0.538 |
40 | PP | CFM-UNet | 0.678/0.664 | 0.568/0.554 | 0.661/0.647 | 0.551/0.537 |
41 | CP | DPR-UNet | 0.685/0.671 | 0.575/0.561 | 0.668/0.654 | 0.558/0.544 |
42 | Base | LGANet | 0.677/0.663 | 0.567/0.553 | 0.660/0.646 | 0.550/0.536 |
43 | PP | DCR-UNet | 0.676/0.662 | 0.566/0.552 | 0.659/0.645 | 0.549/0.535 |
44 | CP | VM-UNet | 0.684/0.670 | 0.574/0.560 | 0.667/0.653 | 0.557/0.543 |
45 | Base | GSU-Net | 0.675/0.661 | 0.565/0.551 | 0.658/0.644 | 0.548/0.534 |
46 | PP | ADA-UNet | 0.674/0.660 | 0.564/0.550 | 0.657/0.643 | 0.547/0.533 |
47 | CP | FRT-UNet | 0.683/0.669 | 0.573/0.559 | 0.666/0.652 | 0.556/0.542 |
48 | Base | PCAC-UNet | 0.673/0.659 | 0.563/0.549 | 0.656/0.642 | 0.546/0.532 |
49 | PP | DSR-UNet | 0.672/0.658 | 0.562/0.548 | 0.655/0.641 | 0.545/0.531 |
50 | CP | Sparse-UNet | 0.682/0.668 | 0.572/0.558 | 0.665/0.651 | 0.555/0.541 |
Preprocessing Combination | Dice 2D (P) | Dice 2D (C) | Dice 3D (P) | Dice 3D (C) | mIoU 2D (P) | mIoU 2D (C) | mIoU 3D (P) | mIoU 3D (C) |
---|---|---|---|---|---|---|---|---
CB 5 × 5 + 7 × 7 + 11 × 11 + brightness + sharpen + contrast | 0.684 | 0.670 | 0.667 | 0.653 | 0.574 | 0.560 | 0.557 | 0.543 |
CB 5 × 5 + 7 × 7 + 11 × 11 + brightness + contrast + laplacian | 0.681 | 0.667 | 0.664 | 0.650 | 0.571 | 0.557 | 0.554 | 0.540 |
CB 5 × 5 + 7 × 7 + 11 × 11 + sharpen + contrast + exposure | 0.682 | 0.668 | 0.665 | 0.651 | 0.572 | 0.558 | 0.555 | 0.541 |
CB 5 × 5 + 7 × 7 + 11 × 11 + brightness + sharpen + trainable kernel | 0.683 | 0.669 | 0.666 | 0.652 | 0.573 | 0.559 | 0.556 | 0.542 |
CB 5 × 5 + 7 × 7 + 11 × 11 + linear + sharpen + contrast | 0.680 | 0.666 | 0.663 | 0.649 | 0.570 | 0.556 | 0.553 | 0.539 |
brightness + contrast + sharpen + laplacian | 0.675 | 0.661 | 0.658 | 0.644 | 0.565 | 0.551 | 0.548 | 0.534 |
blur + sharpen + linear + exposure | 0.672 | 0.658 | 0.655 | 0.641 | 0.562 | 0.548 | 0.545 | 0.531 |
laplacian + sharpen + linear + trainable kernel | 0.673 | 0.659 | 0.656 | 0.642 | 0.563 | 0.549 | 0.546 | 0.532 |
gaussian blur + median filter + sharpen | 0.670 | 0.656 | 0.653 | 0.639 | 0.560 | 0.546 | 0.543 | 0.529 |
bilateral filter + contrast enhancement | 0.668 | 0.654 | 0.651 | 0.637 | 0.558 | 0.544 | 0.541 | 0.527 |
universal kernel + linear | 0.669 | 0.655 | 0.652 | 0.638 | 0.559 | 0.545 | 0.542 | 0.528 |
brightness + linear | 0.667 | 0.653 | 0.650 | 0.636 | 0.557 | 0.543 | 0.540 | 0.526 |
Preprocessing Combination | Dice 2D (P) | Dice 2D (C) | Dice 3D (P) | Dice 3D (C) | mIoU 2D (P) | mIoU 2D (C) | mIoU 3D (P) | mIoU 3D (C) |
---|---|---|---|---|---|---|---|---
CB 5 × 5 + 7 × 7 + 11 × 11 + brightness + sharpen + contrast | 0.682 | 0.668 | 0.665 | 0.651 | 0.572 | 0.558 | 0.555 | 0.541 |
CB 5 × 5 + 7 × 7 + 11 × 11 + brightness + contrast + laplacian | 0.679 | 0.665 | 0.662 | 0.648 | 0.569 | 0.555 | 0.552 | 0.538 |
CB 5 × 5 + 7 × 7 + 11 × 11 + sharpen + contrast + exposure | 0.680 | 0.666 | 0.663 | 0.649 | 0.570 | 0.556 | 0.553 | 0.539 |
CB 5 × 5 + 7 × 7 + 11 × 11 + brightness + sharpen + trainable kernel | 0.681 | 0.667 | 0.664 | 0.650 | 0.571 | 0.557 | 0.554 | 0.540 |
CB 5 × 5 + 7 × 7 + 11 × 11 + linear + sharpen + contrast | 0.678 | 0.664 | 0.661 | 0.647 | 0.568 | 0.554 | 0.551 | 0.537 |
brightness + contrast + sharpen + laplacian | 0.673 | 0.659 | 0.656 | 0.642 | 0.563 | 0.549 | 0.546 | 0.532 |
blur + sharpen + linear + exposure | 0.670 | 0.656 | 0.653 | 0.639 | 0.560 | 0.546 | 0.543 | 0.529 |
laplacian + sharpen + linear + trainable kernel | 0.671 | 0.657 | 0.654 | 0.640 | 0.561 | 0.547 | 0.544 | 0.530 |
gaussian blur + median filter + sharpen | 0.668 | 0.654 | 0.651 | 0.637 | 0.558 | 0.544 | 0.541 | 0.527 |
brightness + contrast enhancement | 0.666 | 0.652 | 0.649 | 0.635 | 0.556 | 0.542 | 0.539 | 0.525 |
contrast + linear | 0.667 | 0.653 | 0.650 | 0.636 | 0.557 | 0.543 | 0.540 | 0.526 |
universal kernel + linear | 0.665 | 0.651 | 0.648 | 0.634 | 0.555 | 0.541 | 0.538 | 0.524 |
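In both tables, the strongest combinations chain the customized block (CB) with photometric filters. A sketch of differentiable brightness, sharpen, and contrast filters that could be composed with the CB of Section 3.1; the parameterizations below are illustrative assumptions, not the paper's exact operators:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrightnessFilter(nn.Module):
    """Additive brightness shift with one trainable offset (illustrative)."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.beta

class ContrastFilter(nn.Module):
    """Scales deviation from the per-image mean intensity (illustrative)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, x):
        mean = x.mean(dim=(2, 3), keepdim=True)
        return mean + self.gamma * (x - mean)

class SharpenFilter(nn.Module):
    """Unsharp masking with a trainable amount (illustrative)."""
    def __init__(self):
        super().__init__()
        self.amount = nn.Parameter(torch.zeros(1))
        self.register_buffer("blur", torch.full((1, 1, 3, 3), 1.0 / 9.0))

    def forward(self, x):
        blurred = F.conv2d(x, self.blur, padding=1)
        return x + self.amount * (x - blurred)

# To mirror the best row above, prepend the customized block from Section 3.1:
# nn.Sequential(LearnableKernelCombination(...), BrightnessFilter(),
#               SharpenFilter(), ContrastFilter())
filters = nn.Sequential(BrightnessFilter(), SharpenFilter(), ContrastFilter())
out = filters(torch.randn(2, 1, 256, 256))  # (B, 1, H, W) NCCT slices
```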