LcmUNet: A Lightweight Network Combining CNN and MLP for Real-Time Medical Image Segmentation
Abstract
1. Introduction
- Large number of parameters and high computational cost: Medical images typically demand substantial computing and storage resources because of their high resolution, multiple channels, and complex structural features. Deep learning-based segmentation networks additionally require extensive training and optimization, leading to high computational costs and large parameter counts.
- Insufficient accuracy of lightweight segmentation models: Lightweight segmentation models are essential for low-power device applications. However, many current lightweight models fall short of state-of-the-art models in segmentation accuracy, and are especially prone to errors or omissions in small or hard-to-distinguish regions.
- Insufficient extraction of global and local information makes it challenging to distinguish the boundaries between the segmentation region and the background: Organs and lesions in medical images often have complex shapes and structures, so segmentation requires fully extracting and coordinating global and local information. Because deep learning models typically attend only to small receptive fields, it is difficult to separate the segmentation region from the background at its boundaries, leading to inaccurate results.
- We propose a novel lightweight neural network, called LcmUNet, that significantly improves the accuracy of medical image segmentation tasks while maintaining a high inference speed.
- To address the issue of large parameter size in traditional models, we design the LDA module, which utilizes depth-wise separable convolution, asymmetric convolution, and an attention mechanism to balance the inference speed and segmentation accuracy. Introducing the LDA module in the convolution stage reduces the number of network parameters while enhancing the feature extraction capabilities.
- Additionally, we propose an MLP module called LMLP, which fuses context information and operates in different directions to further enhance information expression and improve segmentation accuracy.
- Finally, we test and demonstrate the performance of LcmUNet on three medical image segmentation datasets: ISIC2018, BUSI, and Kvasir-SEG. With only 1.49M parameters, LcmUNet achieves IoU scores of 85.19%, 63.99%, and 81.89% on the three datasets, respectively, on an NVIDIA 3060 GPU, further highlighting its suitability for medical image segmentation.
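The parameter savings behind the LDA module come largely from replacing standard convolutions with depth-wise separable ones. As a rough illustration of the savings for a single 3×3 layer (not the authors' exact layer configuration, whose details appear in Section 3.1), the parameter counts can be compared directly:

```python
def conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    # standard k x k convolution, bias omitted
    return k * k * c_in * c_out

def ds_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    # depth-wise k x k (one filter per input channel) + point-wise 1x1 projection
    return k * k * c_in + c_in * c_out

# e.g. a 32 -> 128 channel stage, as in the LcmUNet encoder
standard = conv_params(32, 128)      # 36864
separable = ds_conv_params(32, 128)  # 4384, roughly 8x fewer parameters
```

Asymmetric convolution (factoring a k×k kernel into k×1 followed by 1×k) cuts the depth-wise term further, which is consistent with the sub-2M parameter budget reported for the full network.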
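The LMLP module is described as fusing context information along different directions. A minimal sketch of such direction-wise (axial) token mixing is shown below; the weight shapes and the `axial_mix` helper are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def axial_mix(x: np.ndarray, w_h: np.ndarray, w_w: np.ndarray) -> np.ndarray:
    """Mix tokens along the height axis, then along the width axis.

    x: feature map of shape (C, H, W); w_h: (H, H) mixing weights; w_w: (W, W).
    """
    xh = np.einsum("chw,hk->ckw", x, w_h)     # linear mixing over the H axis
    return np.einsum("ckw,wj->ckj", xh, w_w)  # then over the W axis

x = np.random.rand(16, 8, 8)
out = axial_mix(x, np.eye(8), np.eye(8))  # identity weights leave x unchanged
```

Mixing each spatial axis with a learned dense layer captures long-range context at O(H² + W²) cost per channel instead of the O(H²W²) cost of full spatial mixing, which is what makes MLP-based blocks attractive for lightweight segmentation.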
2. Related Works
2.1. UNet
2.2. Lightweight Models
2.3. MLP
3. LcmUNet
3.1. LDA Module
3.2. LMLP Module
3.3. LcmUNet Architecture
4. Experiment
4.1. Implementation Details
4.2. Comparative Experiments
4.3. Model Complexity Analysis
4.4. Model Performance Comparison Analysis
4.5. Ablation Experiments
4.5.1. Overall Validation
4.5.2. Number of Channels
4.5.3. Number of Feature Fusions
5. Discussion
5.1. A Novel Deep Learning Framework for Real-Time Medical Image Segmentation
5.2. Limitations
5.3. Future Work
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, X.B.; Song, L.P.; Liu, S.; Zhang, Y.D. A Review of Deep-Learning-Based Medical Image Segmentation Methods. Sustainability 2021, 13, 1224.
- Zhao, C.; Vij, A.; Malhotra, S.; Tang, J.S.; Tang, H.P.; Pienta, D.; Xu, Z.H.; Zhou, W.H. Automatic extraction and stenosis evaluation of coronary arteries in invasive coronary angiograms. Comput. Biol. Med. 2021, 136, 104667.
- Tian, Z.Q.; Liu, L.Z.; Zhang, Z.F.; Fei, B.W. Superpixel-Based Segmentation for 3D Prostate MR Images. IEEE Trans. Med. Imaging 2016, 35, 791–801.
- Nguyen, D.C.T.; Benameur, S.; Mignotte, M.; Lavoie, F. Superpixel and multi-atlas based fusion entropic model for the segmentation of X-ray images. Med. Image Anal. 2018, 48, 58–74.
- Huang, Y.L.; Chen, D.R. Watershed segmentation for breast tumor in 2-D sonography. Ultrasound Med. Biol. 2004, 30, 625–632.
- Masoumi, H.; Behrad, A.; Pourmina, M.A.; Roosta, A. Automatic liver segmentation in MRI images using an iterative watershed algorithm and artificial neural network. Biomed. Signal Process. Control 2012, 7, 429–437.
- Ciecholewski, M.; Spodnik, J.H. Semi-Automatic Corpus Callosum Segmentation and 3D Visualization Using Active Contour Methods. Symmetry 2018, 10, 589.
- Zhao, Y.T.; Rada, L.; Chen, K.; Harding, S.P.; Zheng, Y.L. Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images. IEEE Trans. Med. Imaging 2015, 34, 1797–1807.
- Tang, Z.X.; Duan, J.T.; Sun, Y.M.; Zeng, Y.A.; Zhang, Y.L.; Yao, X.F. A combined deformable model and medical transformer algorithm for medical image segmentation. Med. Biol. Eng. Comput. 2023, 61, 129–137.
- Benazzouz, M.; Benomar, M.L.; Moualek, Y. Modified U-Net for cytological medical image segmentation. Int. J. Imaging Syst. Technol. 2022, 32, 1761–1773.
- Qiu, X.J. A New Multilevel Feature Fusion Network for Medical Image Segmentation. Sens. Imaging 2021, 22, 1–20.
- Xia, H.Y.; Ma, M.J.; Li, H.S.; Song, S.X. MC-Net: Multi-scale context-attention network for medical CT image segmentation. Appl. Intell. 2022, 52, 1508–1519.
- Ma, M.J.; Xia, H.Y.; Tan, Y.M.; Li, H.S.; Song, S.X. HT-Net: Hierarchical context-attention transformer network for medical CT image segmentation. Appl. Intell. 2022, 52, 10692–10705.
- He, J.; Zhu, Q.; Zhang, K.; Yu, P.; Tang, J. An evolvable adversarial network with gradient penalty for COVID-19 infection segmentation. Appl. Soft Comput. 2021, 113, 107947.
- Chen, Q.Y.; Zhao, Y.; Liu, Y.; Sun, Y.Q.; Yang, C.S.; Li, P.C.; Zhang, L.M.; Gao, C.Q. MSLPNet: Multi-scale location perception network for dental panoramic X-ray image segmentation. Neural Comput. Appl. 2021, 33, 10277–10291.
- Shi, W.B.; Xu, T.S.; Yang, H.; Xi, Y.M.; Du, Y.K.; Li, J.H.; Li, J.X. Attention Gate Based Dual-Pathway Network for Vertebra Segmentation of X-Ray Spine Images. IEEE J. Biomed. Health Inform. 2022, 26, 3976–3987.
- Fang, L.L.; Wang, X.; Lian, Z.Y.; Yao, Y.B.; Zhang, Y.C. Supervoxel-based brain tumor segmentation with multimodal MRI images. Signal Image Video Process. 2022, 16, 1215–1223.
- Fu, Z.Y.; Zhang, J.; Luo, R.Y.; Sun, Y.T.; Deng, D.D.; Xia, L. TF-Unet: An automatic cardiac MRI image segmentation method. Math. Biosci. Eng. 2022, 19, 5207–5222.
- Huang, Z.H.; Zhang, X.C.; Song, Y.H.; Cai, G.R. FECC-Net: A Novel Feature Enhancement and Context Capture Network Based on Brain MRI Images for Lesion Segmentation. Brain Sci. 2022, 12, 765.
- Liu, X.M.; Zhang, D.; Yao, J.P.; Tang, J.S. Transformer and convolutional based dual branch network for retinal vessel segmentation in OCTA images. Biomed. Signal Process. Control 2023, 83, 104604.
- Lopez-Varela, E.; de Moura, J.; Novo, J.; Fernandez-Vigo, J.I.; Moreno-Morillo, F.J.; Ortega, M. Fully automatic segmentation and monitoring of choriocapillaris flow voids in OCTA images. Comput. Med. Imaging Graph. 2023, 104, 102172.
- Lin, Y.P.; Huang, J.H.; Xu, W.J.; Cui, C.C.; Xu, W.Z.; Li, Z.J. Method for carotid artery 3-D ultrasound image segmentation based on cswin transformer. Ultrasound Med. Biol. 2023, 49, 645–656.
- Zhou, Q.; Wang, Q.W.; Bao, Y.C.; Kong, L.J.; Jin, X.; Ou, W.H. LAEDNet: A Lightweight Attention Encoder-Decoder Network for ultrasound medical image segmentation. Comput. Electr. Eng. 2022, 99, 107777.
- Qian, L.; Huang, H.; Xia, X.; Li, Y.; Zhou, X. Automatic segmentation method using FCN with multi-scale dilated convolution for medical ultrasound image. Vis. Comput. 2022, 1–17.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.J. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
- Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted res-unet for high-quality retina vessel segmentation. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; pp. 327–331.
- Ni, Z.-L.; Bian, G.-B.; Zhou, X.-H.; Hou, Z.-G.; Xie, X.-L.; Wang, C.; Zhou, Y.-J.; Li, R.-Q.; Li, Z. RAUNet: Residual Attention U-Net for Semantic Segmentation of Cataract Surgical Instruments. In Proceedings of the International Conference on Neural Information Processing, Sydney, NSW, Australia, 12–15 December 2019; pp. 139–149.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 1856–1867.
- Gadosey, P.K.; Li, Y.; Agyekum, E.A.; Zhang, T.; Liu, Z.; Yamak, P.T.; Essaf, F. SD-UNet: Stripping down U-Net for Segmentation of Biomedical Images on Platforms with Low Computational Budgets. Diagnostics 2020, 10, 110.
- Lou, A.; Guan, S.; Loew, M. DC-UNet: Rethinking the U-Net architecture with dual channel efficient CNN for medical image segmentation. In Medical Imaging 2021: Image Processing; SPIE: Washington, DC, USA, 2021; pp. 758–768.
- Valanarasu, J.M.J.; Patel, V.M. Unext: Mlp-based rapid medical image segmentation network. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th International Conference, Singapore, 18–22 September 2022; pp. 23–33.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856.
- Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589.
- Chen, C.; Guo, Z.; Zeng, H.; Xiong, P.; Dong, J. RepGhost: A Hardware-Efficient Ghost Module via Re-Parameterization. arXiv 2022, arXiv:2211.06088.
- Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2820–2828.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 1, 5999–6009.
- Ding, X.; Xia, C.; Zhang, X.; Chu, X.; Han, J.; Ding, G. Repmlp: Re-parameterizing convolutions into fully-connected layers for image recognition. arXiv 2021, arXiv:2105.01883.
- Touvron, H.; Bojanowski, P.; Caron, M.; Cord, M.; El-Nouby, A.; Grave, E.; Izacard, G.; Joulin, A.; Synnaeve, G.; Verbeek, J.; et al. Resmlp: Feedforward networks for image classification with data-efficient training. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5314–5321.
- Guo, M.-H.; Liu, Z.-N.; Mu, T.-J.; Hu, S.-M. Beyond self-attention: External attention using two linear layers for visual tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5436–5447.
- Chen, S.; Xie, E.; Ge, C.; Chen, R.; Liang, D.; Luo, P. Cyclemlp: A mlp-like architecture for dense prediction. arXiv 2021, arXiv:2107.10224.
- Li, J.; Hassani, A.; Walton, S.; Shi, H. Convmlp: Hierarchical convolutional mlps for vision. arXiv 2021, arXiv:2109.04454.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
| Networks | Advantages | Disadvantages |
|---|---|---|
| UNet | Data augmentation with elastic deformation; large number of feature channels in upsampling | Excessive downsampling causes greater loss of spatial information |
| AttUNet | Introduces an attention mechanism that lets the network adaptively focus on important image regions | Requires stacking many layers to capture long-range information; relatively inefficient |
| ResUNet | Adds a weighted attention mechanism, enabling the model to better distinguish target from background pixels | Large parameter count and high computational cost |
| RAUNet | Effectively fuses multi-level features and improves feature representation; introduces a hybrid loss function to address class imbalance | Applicability may be limited by task specificity |
| UNet++ | Enhances context awareness with multi-scale features; retains detailed information by cascading multi-scale features | Expresses insufficient information across scales; consumes too much memory and is difficult to train |
| SDUNet | Optimized for platforms with limited computational resources, reducing computational complexity and memory consumption | Some performance may be sacrificed to fit low-budget platforms |
| DCUNet | Introduces an efficient dual-channel CNN while exploiting multi-scale and contextual information | Higher computational complexity, requiring more resources and training time |
| UNext | Faster inference with fewer parameters and lower computational complexity | Weak feature-extraction ability |
| Network | Advantages | Disadvantages |
|---|---|---|
| MobileNet | Introduces depth-wise separable convolution for lightweight network design | A plain network structure and overuse of activation functions can deactivate neurons |
| MobileNetV2 | Introduces the inverted residual module | The small kernels in depth-wise separable convolution easily produce zeros after activation |
| ShuffleNet | Proposes channel shuffling, rearranging input channels to reduce computation and parameter count | Mismatched input/output feature counts, excessive group convolution, network fragmentation, and too many element-wise operations |
| ShuffleNet V2 | Introduces channel splitting and equal input/output channel counts in the basic unit | Running speed and memory footprint still need improvement |
| Xception | Introduces larger convolution kernels and multi-scale feature extraction | Requires task-specific tuning |
| GhostNet | Generates more features from fewer parameters | Limited ability to represent features on complex tasks |
| RepGhost | Hardware-efficient implementation through re-parameterization | Limited applicability on other hardware platforms |
| Networks | Advantages | Disadvantages |
|---|---|---|
| RepMLP | Re-parameterizes convolutions into fully connected layers for efficient matrix operations | More sensitive to input image size |
| ResMLP | Adopts a data-efficient training strategy, translation invariance, and residual connections | Relatively demanding training-data requirements |
| EAMLP | Uses two linear layers instead of self-attention to improve computational efficiency | External attention limits the model's ability to exchange global information between locations |
| CycleMLP | Uses a basic MLP structure with a cyclic mechanism; performs dense prediction with limited computational resources | Limited ability to extract high-level features |
| ConvMLP | Hierarchical architecture enabling multi-level feature extraction and representation learning | Relatively weak modeling of spatial locality and translation invariance |
| Stage | Layer | Type | Channel (In) | Channel (Out) |
|---|---|---|---|---|
| Encoder | 1 | LDA-A | 3 | 16 |
| | 2 | LDA-B | 16 | 32 |
| | 3 | LDA-B | 32 | 128 |
| | 4 | LMLP | 128 | 160 |
| | 5 | LMLP | 160 | 256 |
| Decoder | 6 | LMLP | 256 | 160 |
| | 7 | Bilinear Up (×2) | 160 | 160 |
| | 8 | LMLP | 160 | 128 |
| | 9 | Bilinear Up (×2) | 128 | 128 |
| | 10 | LDA-B | 128 | 32 |
| | 11 | Bilinear Up (×2) | 32 | 32 |
| | 12 | LDA-B | 32 | 16 |
| | 13 | Bilinear Up (×2) | 16 | 16 |
| | 14 | LDA-A | 16 | 3 |
| | 15 | Bilinear Up (×2) | 3 | 3 |
| | 16 | SoftMax | 3 | 1 |
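As a quick sanity check on the architecture table above (ignoring skip connections, whose channel handling is not reflected in these counts), the listed channel schedule chains consistently from layer to layer, and the five ×2 bilinear upsamplings imply the encoder output is 32× smaller than the input:

```python
# (layer, type, c_in, c_out) transcribed from the architecture table
schedule = [
    (1, "LDA-A", 3, 16), (2, "LDA-B", 16, 32), (3, "LDA-B", 32, 128),
    (4, "LMLP", 128, 160), (5, "LMLP", 160, 256), (6, "LMLP", 256, 160),
    (7, "Up", 160, 160), (8, "LMLP", 160, 128), (9, "Up", 128, 128),
    (10, "LDA-B", 128, 32), (11, "Up", 32, 32), (12, "LDA-B", 32, 16),
    (13, "Up", 16, 16), (14, "LDA-A", 16, 3), (15, "Up", 3, 3),
    (16, "SoftMax", 3, 1),
]

# each layer's input channels must equal the previous layer's output channels
chained = all(a[3] == b[2] for a, b in zip(schedule, schedule[1:]))

# five x2 upsamplings restore a 32x-downsampled encoder feature map
upsample_factor = 2 ** sum(1 for s in schedule if s[1] == "Up")
```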
| Network | Inference Time (ms) | Params (M) | GFLOPs | IoU (%) | F1 (%) |
|---|---|---|---|---|---|
| UNet | 54.96 | 31.04 | 54.74 | 80.17 | 88.71 |
| DeepLabv3+ | 12.27 | 5.81 | 6.61 | 82.07 | 89.70 |
| FCN-8s | 25.09 | 14.76 | 25.50 | 78.26 | 87.14 |
| SegNet | 50.38 | 34.97 | 45.13 | 79.93 | 87.55 |
| AttUNet | 79.03 | 34.88 | 66.63 | 80.92 | 89.20 |
| UNext | 3.99 | 1.47 | 0.57 | 82.00 | 89.87 |
| ResUNet | 113.63 | 14.76 | 97.16 | 77.86 | 87.27 |
| LcmUNet (Ours) | 9.27 | 1.49 | 0.49 | 85.19 | 91.81 |
All values in %; IoU, F1, Recall (Re), and Precision (Pr) are reported per dataset.

| Network | ISIC2018 IoU | ISIC2018 F1 | ISIC2018 Re | ISIC2018 Pr | Kvasir-SEG IoU | Kvasir-SEG F1 | Kvasir-SEG Re | Kvasir-SEG Pr | BUSI IoU | BUSI F1 | BUSI Re | BUSI Pr |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| UNet | 80.17 | 88.71 | 86.61 | 91.50 | 76.20 | 86.13 | 85.82 | 87.55 | 52.12 | 67.80 | 65.21 | 76.36 |
| DeepLabv3+ | 82.07 | 89.70 | 89.01 | 91.61 | 79.15 | 88.04 | 88.40 | 88.57 | 58.72 | 73.16 | 72.00 | 78.44 |
| FCN-8s | 78.26 | 87.14 | 85.58 | 90.25 | 59.96 | 74.50 | 75.29 | 76.54 | 53.31 | 68.45 | 66.00 | 75.40 |
| SegNet | 79.93 | 87.55 | 90.56 | 88.34 | 79.33 | 88.20 | 86.55 | 90.62 | 58.85 | 73.37 | 72.21 | 75.99 |
| AttUNet | 80.92 | 89.20 | 88.48 | 90.45 | 78.48 | 88.55 | 87.01 | 89.13 | 56.30 | 71.34 | 67.84 | 77.14 |
| UNext | 82.00 | 89.87 | 88.98 | 91.29 | 77.57 | 86.99 | 87.67 | 88.05 | 60.74 | 75.35 | 77.05 | 75.46 |
| ResUNet | 77.86 | 87.27 | 86.30 | 88.34 | 67.10 | 79.93 | 78.55 | 83.02 | 44.23 | 60.79 | 57.15 | 73.25 |
| LcmUNet (Ours) | 85.19 | 91.81 | 92.07 | 92.99 | 81.89 | 89.92 | 88.93 | 91.79 | 63.99 | 77.37 | 79.96 | 76.69 |
| Network | IoU (%) | F1 (%) | Inference Time (ms) | Params (M) | GFLOPs |
|---|---|---|---|---|---|
| UNet | 80.17 | 88.71 | 54.96 | 31.04 | 54.74 |
| UNet + LDA-A | 81.69 | 89.63 | 8.20 | 0.51 | 0.36 |
| UNet + LDA-A + LDA-B | 83.30 | 90.59 | 9.98 | 0.62 | 0.37 |
| UNet + LDA-A + LMLP | 84.69 | 91.52 | 9.14 | 1.46 | 0.45 |
| LcmUNet | 85.19 | 91.81 | 9.27 | 1.49 | 0.49 |
| Number of Channels | IoU (%) | Re (%) | Pr (%) | F1 (%) |
|---|---|---|---|---|
| 0 | 85.19 | 92.07 | 92.99 | 91.81 |
| 0.25 | 83.61 | 91.20 | 91.07 | 90.81 |
| 0.5 | 84.75 | 91.25 | 91.76 | 91.29 |
| Settings | IoU (%) | Re (%) | Pr (%) | F1 (%) |
|---|---|---|---|---|
| 0/0 | 81.55 | 90.57 | 90.57 | 89.50 |
| 0/1 | 82.27 | 91.34 | 90.16 | 89.90 |
| 1/0 | 83.00 | 91.19 | 90.42 | 91.19 |
| 1/1 | 85.19 | 92.07 | 92.99 | 91.81 |
Zhang, S.; Niu, Y. LcmUNet: A Lightweight Network Combining CNN and MLP for Real-Time Medical Image Segmentation. Bioengineering 2023, 10, 712. https://doi.org/10.3390/bioengineering10060712