# Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models

## Abstract

## 1. Introduction

## 2. Data and Methods

#### 2.1. Data

#### “OPTIMIZED” Dataset

#### 2.2. Data Normalization and Augmentation

#### 2.3. UNet Architecture

#### 2.4. UNet-Derived Models

#### 2.4.1. R2-UNet Architecture

#### 2.4.2. Attention-UNet Architecture

#### 2.4.3. R2-Attention UNet Architecture

#### 2.5. Training and Cross-Validation Scheme

In this way, the stratification ensures that each fold contains a proportional number of cases with diseased regions of different sizes. We trained each model on 3 folds, validated it on 1 fold, and then tested it on the remaining fold.
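The fold construction just described can be sketched in plain Python. The round-robin assignment over patients sorted by disease volume is one simple way to approximate stratification by lesion size; the function names and the exact strategy are illustrative, not taken from the paper:

```python
from collections import defaultdict

def stratified_folds(volumes, n_folds=5):
    """Assign each patient index to one of n_folds so that patients with
    similar disease volumes are spread evenly across the folds."""
    # Sort patients by disease volume, then deal them out round-robin:
    # consecutive (similar-volume) patients land in different folds.
    order = sorted(range(len(volumes)), key=lambda i: volumes[i])
    folds = defaultdict(list)
    for rank, idx in enumerate(order):
        folds[rank % n_folds].append(idx)
    return [folds[k] for k in range(n_folds)]

def train_val_test_split(folds, test_fold):
    """Test on one fold, validate on the next, train on the remaining three."""
    n = len(folds)
    val_fold = (test_fold + 1) % n
    test = folds[test_fold]
    val = folds[val_fold]
    train = [i for k in range(n)
             if k not in (test_fold, val_fold)
             for i in folds[k]]
    return train, val, test
```

For a given test fold, the next fold serves as the validation set and the remaining three form the training set, mirroring the 3/1/1 split described above.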

#### 2.6. Evaluation Metrics

where P_{m} is the Predicted Mask, GT_{m} the Ground Truth Mask, TP the True Positives (i.e., $|{\mathrm{P}}_{m}\cap {\mathrm{GT}}_{m}|$), FP the False Positives, and FN the False Negatives.
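Given these definitions, the per-mask metrics follow directly from voxel counts on two binary arrays; a minimal NumPy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice, Precision, and Recall for a pair of binary masks.

    TP = |P_m ∩ GT_m|, FP = predicted-but-not-true voxels,
    FN = true-but-not-predicted voxels."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    # Conventionally, empty prediction + empty ground truth scores 1.0
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, precision, recall
```

Computing the metrics on 2D slices versus stacked 3D volumes only changes which arrays are passed in; the counting is identical.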

## 3. Results

#### 3.1. Convergence

#### 3.2. Quantitative Results and Comparisons

- UNet with Attention-UNet, to evaluate the impact of adding the attention mechanism to the UNet;
- UNet with R2-UNet, to evaluate the impact of adding the recurrent mechanism to the UNet;
- UNet with R2-Attention UNet, to evaluate the impact of adding both the attention and the recurrent mechanisms to the UNet;
- Attention-UNet with R2-Attention UNet, to evaluate the effect of adding recurrence to a UNet model that already has attention;
- R2-UNet with R2-Attention UNet, to evaluate the effect of adding attention to a UNet model that already has recurrence.

The disease volumes (cm³) were calculated on the ground truth. R2-UNet also obtained 3D Dice score values greater than 50%, but only on patients with a disease volume greater than 1300 cm³. Finally, R2-Attention UNet generally performed much worse; in particular, numerous patients had 3D Dice scores close to 0%. These cases were mostly concentrated in, though not limited to, low-volume lesions, i.e., smaller than 1000 cm³.

#### 3.3. Ablation Study

## 4. Discussion and Conclusions

## Supplementary Materials

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Maslove, D.M.; Sibley, S.; Boyd, J.G.; Goligher, E.C.; Munshi, L.; Bogoch, I.I.; Rochwerg, B. Complications of Critical COVID-19: Diagnostic and Therapeutic Considerations for the Mechanically Ventilated Patient. Chest **2022**, 161, 989–998.
- Wikramaratna, P.; Paton, R.S.; Ghafari, M.; Lourenço, J. Estimating false-negative detection rate of SARS-CoV-2 by RT-PCR. Eurosurveillance **2020**, 25, 50.
- Han, X.; Fan, Y.; Alwalid, O.; Li, N.; Jia, X.; Yuan, M.; Li, Y.; Cao, Y.; Gu, J.; Wu, H.; et al. Six-month Follow-up Chest CT Findings after Severe COVID-19 Pneumonia. Radiology **2021**, 299, E177–E186.
- Dong, D.; Tang, Z.; Wang, S.; Hui, H.; Gong, L.; Lu, Y.; Xue, Z.; Liao, H.; Chen, F.; Yang, F.; et al. The Role of Imaging in the Detection and Management of COVID-19: A Review. IEEE Rev. Biomed. Eng. **2021**, 14, 16–29.
- Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases. Radiology **2020**, 296, E32–E40.
- Ye, Z.; Zhang, Y.; Wang, Y.; Huang, Z.; Song, B. Chest CT manifestations of new coronavirus disease 2019 (COVID-19): A pictorial review. Eur. Radiol. **2020**, 30, 4381–4389.
- Xie, X.; Zhong, Z.; Zhao, W.; Zheng, C.; Wang, F.; Liu, J. Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: Relationship to negative RT-PCR testing. Radiology **2020**, 296, E41–E45.
- Laino, M.E.; Ammirabile, A.; Posa, A.; Cancian, P.; Shalaby, S.; Savevski, V.; Neri, E. The Applications of Artificial Intelligence in Chest Imaging of COVID-19 Patients: A Literature Review. Diagnostics **2021**, 11, 1317.
- Kriza, C.; Amenta, V.; Zenié, A.; Panidis, D.; Chassaigne, H.; Urbán, P.; Holzwarth, U.; Sauer, A.V.; Reina, V.; Griesinger, C.B. Artificial intelligence for imaging-based COVID-19 detection: Systematic review comparing added value of AI versus human readers. Eur. J. Radiol. **2021**, 145, 110028.
- Deng, H.; Li, X. AI-Empowered Computational Examination of Chest Imaging for COVID-19 Treatment: A Review. Front. Artif. Intell. **2021**, 4, 612914.
- Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Rev. Biomed. Eng. **2021**, 14, 4–15.
- Wang, J.; Yang, X.; Zhou, B.; Sohn, J.J.; Zhou, J.; Jacob, J.T.; Higgins, K.A.; Bradley, J.D.; Liu, T. Review of Machine Learning in Lung Ultrasound in COVID-19 Pandemic. J. Imaging **2022**, 8, 65.
- Fan, D.P.; Zhou, T.; Ji, G.P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images. IEEE Trans. Med. Imaging **2020**, 39, 2626–2637.
- Liu, J.; Dong, B.; Wang, S.; Cui, H.; Fan, D.P.; Ma, J.; Chen, G. COVID-19 lung infection segmentation with a novel two-stage cross-domain transfer learning framework. Med. Image Anal. **2021**, 74, 102205.
- Kumar Singh, V.; Abdel-Nasser, M.; Pandey, N.; Puig, D. LungINFseg: Segmenting COVID-19 Infected Regions in Lung CT Images Based on a Receptive-Field-Aware Deep Learning Framework. Diagnostics **2021**, 11, 158.
- Shamim, S.; Awan, M.; Zain, A.; Naseem, U.; Mohammed, M.; Zapirain, B. Automatic COVID-19 Lung Infection Segmentation through Modified Unet Model. J. Healthc. Eng. **2022**, 2022, 6566982.
- Aswathy, A.L.; SS, V.C. Cascaded 3D UNet architecture for segmenting the COVID-19 infection from lung CT volume. Sci. Rep. **2022**, 12, 3090.
- Saeedizadeh, N.; Minaee, S.; Kafieh, R.; Yazdani, S.; Sonka, M. COVID TV-Unet: Segmenting COVID-19 chest CT images using connectivity imposed Unet. Comput. Methods Programs Biomed. Update **2021**, 1, 100007.
- Roth, H.; Xu, Z.; Diez, C.T.; Jacob, R.S.; Zember, J.; Molto, J.; Li, W.; Xu, S.; Turkbey, B.; Turkbey, E.; et al. Rapid Artificial Intelligence Solutions in a Pandemic—The COVID-19-20 Lung CT Lesion Segmentation Challenge. Med. Image Anal. **2022**, 82, 102605.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv **2018**, arXiv:1804.03999.
- Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.; Asari, V. Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation. arXiv **2018**, arXiv:1802.06955.
- Zuo, Q.; Chen, S.; Wang, Z. R2AU-Net: Attention Recurrent Residual Convolutional Neural Network for Multimodal Medical Image Segmentation. Secur. Commun. Netw. **2021**, 2021, 6625688.
- Nodirov, J.; Abdusalomov, A.B.; Whangbo, T.K. Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain Tumor Images. Sensors **2022**, 22, 6501.
- Yuan, W.; Peng, Y.; Guo, Y.; Ren, Y.; Xue, Q. DCAU-Net: Dense convolutional attention U-Net for segmentation of intracranial aneurysm images. Vis. Comput. Ind. Biomed. Art **2022**, 5, 9.
- Chen, Y.; Zheng, C.; Zhou, T.; Feng, L.; Liu, L.; Zeng, Q.; Wang, G. A deep residual attention-based U-Net with a biplane joint method for liver segmentation from CT scans. Comput. Biol. Med. **2023**, 152, 106421.
- Joseph Raj, A.N.; Zhu, H.; Khan, A.; Zhuang, Z.; Yang, Z.; Mahesh, V.; Karthik, G. ADID-UNET—A segmentation model for COVID-19 infection from lung CT scans. PeerJ Comput. Sci. **2021**, 7, e349.
- Zhou, T.; Canu, S.; Ruan, S. Automatic COVID-19 CT segmentation using U-Net integrated spatial and channel attention mechanism. Int. J. Imaging Syst. Technol. **2021**, 31, 16–27.
- Zhao, X.; Zhang, P.; Song, F.; Fan, G.; Sun, Y.; Wang, Y.; Tian, Z.; Zhang, L.; Zhang, G. D2A U-Net: Automatic Segmentation of COVID-19 Lesions from CT Slices with Dilated Convolution and Dual Attention Mechanism. arXiv **2021**, arXiv:2102.05210.
- Ahmed, I.; Chehri, A.; Jeon, G. A Sustainable Deep Learning-Based Framework for Automated Segmentation of COVID-19 Infected Regions: Using U-Net with an Attention Mechanism and Boundary Loss Function. Electronics **2022**, 11, 2296.
- Bougourzi, F.; Distante, C.; Dornaika, F.; Taleb-Ahmed, A. PDAtt-Unet: Pyramid Dual-Decoder Attention Unet for COVID-19 infection segmentation from CT-scans. Med. Image Anal. **2023**, 86, 102797.
- Yin, S.; Deng, H.; Xu, Z.; Zhu, Q.; Cheng, J. SD-UNet: A Novel Segmentation Framework for CT Images of Lung Infections. Electronics **2022**, 11, 130.
- Mubashar, M.; Ali, H.; Grönlund, C.; Azmat, S. R2U++: A multiscale recurrent residual U-Net with dense skip connections for medical image segmentation. Neural Comput. Appl. **2022**, 34, 17723–17739.
- Fakhfakh, M.A.; Bouaziz, B.; Gargouri, F.; Chaâri, L. ProgNet: COVID-19 Prognosis Using Recurrent and Convolutional Neural Networks. Open Med. Imaging J. **2020**, 12, 11–12.
- Chen, X.; Yao, L.; Zhang, Y. Residual Attention U-Net for Automated Multi-Class Segmentation of COVID-19 Chest CT Images. arXiv **2020**, arXiv:2004.05645.
- Xu, X.; Wen, Y.; Zhao, L.; Zhang, Y.; Zhao, Y.; Tang, Z.; Yang, Z.; Chen, C.Y.C. CARes-UNet: Content-aware residual UNet for lesion segmentation of COVID-19 from chest CT images. Med. Phys. **2021**, 48, 7127–7140.
- Malhotra, P.; Gupta, S.; Koundal, D.; Zaguia, A.; Enbeyle, W. Deep Neural Networks for Medical Image Segmentation. J. Healthc. Eng. **2022**, 2022, 9580991.
- Bertels, J.; Robben, D.; Lemmens, R.; Vandermeulen, D. Convolutional neural networks for medical image segmentation. arXiv **2022**, arXiv:2211.09562.
- Tilborghs, S.; Dirks, I.; Fidon, L.; Willems, S.; Eelbode, T.; Bertels, J.; Ilsen, B.; Brys, A.; Dubbeldam, A.; Buls, N.; et al. Comparative study of deep learning methods for the automatic segmentation of lung, lesion and lesion type in CT scans of COVID-19 patients. arXiv **2020**, arXiv:2007.15546.
- Buongiorno, R.; Germanese, D.; Romei, C.; Tavanti, L.; Liperi, A.; Colantonio, S. UIP-Net: A Decoder-Encoder CNN for the Detection and Quantification of Usual Interstitial Pneumoniae Pattern in Lung CT Scan Images. In Proceedings of the Pattern Recognition. ICPR International Workshops and Challenges, Virtual Event, 10–15 January 2021; pp. 389–405.
- Reinke, A.; Maier-Hein, L.; Christodoulou, E.; Glocker, B.; Scholz, P.; Isensee, F.; Kleesiek, J.; Kozubek, M.; Reyes, M.; Riegler, M.A.; et al. Metrics reloaded: A new recommendation framework for biomedical image analysis validation. In Proceedings of the Medical Imaging with Deep Learning, Zurich, Switzerland, 6–8 July 2022.

**Figure 1.** Manifestations of COVID-19-infected regions on an HRCT of a confirmed patient. Panel (**a**) shows the original grey-level intensities, while in Panel (**b**) the infected regions are manually enhanced by radiologists (GGO in blue and Consolidation in red).

**Figure 2.** Schematic summary of the pipeline we followed: we start by describing the internal and external datasets, then describe the models, and finally show the training and test set-up.

**Figure 3.** An example of (**a**) an image from the internal dataset and (**b**) one from the external dataset. The two images have different grey-level distributions (**c**); thus, a preliminary histogram matching step was necessary (**d**).
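The histogram matching step illustrated in Figure 3 can be sketched with a classic CDF-matching implementation in NumPy. In practice a library routine such as `skimage.exposure.match_histograms` would typically be used; the function below is only an illustration of the idea:

```python
import numpy as np

def match_histogram(source, reference):
    """Map the grey levels of `source` so that its cumulative histogram
    matches that of `reference` (classic histogram matching)."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source grey level, pick the reference grey level whose
    # cumulative probability is closest (linear interpolation)
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)
```

Applying this once per external-dataset scan, with an internal-dataset scan as the reference, aligns the grey-level distributions of the two sources before training.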

**Figure 4.** Schematics of the UNet-based models trained and tested in this work: UNet (**a**), R2-UNet (**b**), Attention-UNet (**c**), and R2-Attention UNet (**d**).

**Figure 5.** Loss function trends as a function of the number of epochs during the training of the models. The solid dark and light orange lines represent, for each epoch, the median loss value over the 5 folds on the training and validation sets, respectively. The dotted line represents the mean trend of the loss function on the validation set at each epoch. Finally, the shaded orange area marks the range of epochs within which we applied early stopping, as described in Section 2.5.

**Figure 6.** Violin plots representing the median values and the interquartile ranges for the 2D aggregated Dice score, Precision, and Recall, and for the 3D aggregated Dice score, Precision, and Recall. Given the marked skewness of the distributions, the scores do not follow a normal distribution; we therefore applied the non-parametric Wilcoxon signed-rank test to evaluate the significance of the performance differences between the models.
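The Wilcoxon signed-rank test operates on paired per-patient scores from two models. As a sketch of how its statistic is formed (in practice one would call `scipy.stats.wilcoxon`, which also returns the p-value; the simplified ranking below ignores ties):

```python
import numpy as np

def wilcoxon_signed_rank_statistic(a, b):
    """Wilcoxon signed-rank statistic W: the smaller of the positive-
    and negative-rank sums of the paired differences. Simplified sketch
    without tie handling; scipy.stats.wilcoxon handles ties and p-values."""
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]                    # discard zero differences
    # Ranks 1..n of |d| (argsort-of-argsort; no tie correction)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1.0
    w_plus = ranks[d > 0].sum()      # rank sum where model a scored higher
    w_minus = ranks[d < 0].sum()     # rank sum where model b scored higher
    return min(w_plus, w_minus)
```

A small W relative to its null distribution indicates that one model systematically outperforms the other across patients, which is what the significance markers in the figures summarize.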

**Figure 7.** Representation of the results of the statistical analysis. Arrows in three different colors indicate the direction of improvement for three different metrics: yellow for 3D Dice score, light orange for Precision, and dark orange for Recall. Dotted lines represent non-significant differences. The box on each arrow reports the percentage difference and the p-value.

**Figure 8.** 3D Dice score of each model on each patient. Yellow squares represent the values obtained by the UNet, orange circles those of the Attention-UNet, pink triangles those of the R2-UNet, and lilac rhombuses those of the R2-Attention UNet. The curves are Gaussian Process regressions of the 3D Dice score as a function of disease volume, with the shaded areas representing the 95% confidence interval of each curve. Finally, the horizontal dotted line marks a Dice score of 50%, highlighting the patients on which the models performed worst.
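The Gaussian Process regression behind the trend curves in Figure 8 can be sketched in a few lines: a zero-mean GP with an RBF kernel, whose posterior standard deviation yields the 95% confidence band. The length scale and noise level below are illustrative defaults, not the values used in the paper:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_regression(x_train, y_train, x_test, length_scale=1.0, noise=1e-2):
    """Posterior mean and std of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std  # the 95% band is approximately mean ± 1.96 * std
```

Here `x_train` would hold the per-patient disease volumes and `y_train` the corresponding 3D Dice scores; in practice a library such as scikit-learn's `GaussianProcessRegressor` would also fit the kernel hyperparameters.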

**Figure 9.**Visual comparisons between the ground truths (green) and the predictions (yellow) of UNet, R2-UNet, Attention-UNet, and R2 Attention-UNet. The red circles contain the areas where the models overestimated the disease.

| Reference | Dataset: n. Patients (n. Images) | Cross-Validation | Results (Dice Score) |
|---|---|---|---|
| [13] | Inf-Net: >40 (100); Semi-Inf-Net: 20 (1600) | No | Inf-Net: 68.2%; Semi-Inf-Net: 73.9% |
| [14] | 60 (4630) | No | 68.43% |
| [15] | 20 (1800+) | No | 80.34% |
| [16] | 40 (100) | No | 92.46% |
| [17] | 20 | No | 82% |
| [18] | 49 (929) | No | 86% |
| [19] | >661 (295) | No | 75.4% (first ranked) |
| [26] | >69 (1838) | No | 82% |
| [27] | 69 (473) | No | 83.1% |
| [28] | 38 (1745) | No | 72.98% |
| [29] | >69 (3000, data augmentation) | Yes | 76% |
| [30] | 219 (5199) | No | 77.60% |
| [31] | >40 (1963) | Yes | 86.96% |
| [32] | >40 (100) | Yes | 77.15% |
| [33] | 60 (110) | No | 93.4% |
| [34] | 60 (600, data augmentation) | Yes | 94% |
| [35] | >230 (32,714) | Yes | 77.6% |

| Characteristics | Median [IQR] |
|---|---|
| Number of slices | 296 [279–315] |
| Number of diseased slices | 218 [203–246] |
| Healthy slices over diseased slices (%) | 45.36% [19.31–48.28%] |
| Ground truth area (mm^{2}) | 428.93 [4.33–25.81] |
| Ground truth volume (mm^{3}) | 640,852.6 [262,534.03–1,253,504.56] |
| Pixel spacing (mm) | 0.68 [0.62–0.72] |
| Slice thickness (mm) | 1.44 [1.34–1.50] |
| Slice dimensions | $512\times 512$ |

**Table 3.** Training and inference times and memory usage (in terms of RAM and maximum GPU consumption) for each model.

| | UNet | Attention-UNet | R2-UNet | R2-Att UNet |
|---|---|---|---|---|
| Training time | 16 h | 17 h 30 min | 31 h | 33 h 30 min |
| Inference time | 21 min | 16 min | 19 min | 17 min |
| Trainable parameters | 1,946,305 | 1,978,900 | 5,973,889 | 6,006,484 |
| RAM consumption | 3.16 GB | 3.99 GB | 11.67 GB | 12.50 GB |
| Maximum GPU consumption | 1.28 GB | 2.24 GB | 5.20 GB | 6.40 GB |

**Table 4.** Median values and IQR of the 2D Dice score computed on all the images of the test set. The maximum value is highlighted in green.

| Model | 2D Dice Score |
|---|---|
| UNet | 81.88% [63.73–91.63%] |
| Attention-UNet | 81.93% [64.17–91.65%] |
| R2-UNet | 72.38% [32.3–87.05%] |
| R2-Attention UNet | 60.40% [0–84.46%] |

**Table 5.** Metric values for evaluating the performance of the models, obtained by grouping the test set per patient at each fold. Highlighted in green are the maximum values for each metric, with an underline for the highest value across all metrics. All values are expressed as median [IQR].

| Model | Dice Score 2D | Precision 2D | Recall 2D | Dice Score 3D | Precision 3D | Recall 3D |
|---|---|---|---|---|---|---|
| UNet | 72.05 [64.23–78.15] | 85.45 [78.55–90.63] | 73.59 [65.77–80.97] | 78.77 [73.20–85.27] | 88.52 [80.89–94.66] | 76.95 [67.60–84.42] |
| Att UNet | 72.43 [65.25–78.08] | 86.82 [80.46–93.06] | 73.52 [65.11–80.20] | 79.86 [73.35–85.62] | [83.25–95.99] | 73.64 [64.45–84.05] |
| R2 UNet | 63.11 [50.97–71.13] | 79.00 [63.90–86.98] | 72.40 [59.48–84.58] | 72.27 [59.08–82.10] | 81.12 [54.87–89.66] | 78.47 [64.76–83.65] |
| Att+R2 UNet | 54.17 [29.41–68.39] | 75.47 [56.51–85.65] | 60.60 [43.47–73.17] | 67.42 [37.42–81.28] | 78.03 [49.93–90.16] | 59.96 [27.27–79.19] |

**Table 6.** Results of the statistical analysis. The value in each cell represents the percentage increase in the metric. Values with no statistical significance are highlighted in red.

| Comparison | 3D Dice Score | 3D Precision | 3D Recall |
|---|---|---|---|
| UNet vs. Attention-UNet (1) | 0.04% | 1.34% | 1.51% |
| UNet vs. R2-UNet (2) | 6.19% | 5.32% | 0.86% |
| UNet vs. R2-Attention UNet (3) | 6.21% | 6.98% | 8.09% |
| Attention-UNet vs. R2-Attention UNet (4) | 6.44% | 8.39% | 6.37% |
| R2-UNet vs. R2-Attention UNet (5) | 1.85% | 0.66% | 9.55% |

**Table 7.** Median values and interquartile ranges of the 2D Dice score of each model under the ablation study. The maximum value for each model is highlighted in green.

| Model | 2D Dice Score |
|---|---|
| UNet | 79.51% [63.31–91.33%] |
| Attention-UNet with A1 | 76.43% [41.80–87.81%] |
| Attention-UNet with A1 and A2 | 82.20% [58.59–91.90%] |
| Attention-UNet with A1, A2, and A3 | 83.14% [60.99–91.82%] |
| R2-UNet with R1 | 68.85% [59.37–90.95%] |
| R2-UNet with R1 and R2 | 69.43% [21.66–83.79%] |
| R2-UNet with R1, R2, and R3 | 73.61% [28.98–87.33%] |
| R2-Attention UNet with R1&A1 | 72.38% [28.82–85.77%] |
| R2-Attention UNet with R1&A1 and R2&A2 | 70.51% [64.37–92.34%] |
| R2-Attention UNet with R1&A1, R2&A2, and R3&A3 | 57.17% [23.09–83.96%] |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Buongiorno, R.; Del Corso, G.; Germanese, D.; Colligiani, L.; Python, L.; Romei, C.; Colantonio, S.
Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models. *J. Imaging* **2023**, *9*, 283.
https://doi.org/10.3390/jimaging9120283

**AMA Style**

Buongiorno R, Del Corso G, Germanese D, Colligiani L, Python L, Romei C, Colantonio S.
Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models. *Journal of Imaging*. 2023; 9(12):283.
https://doi.org/10.3390/jimaging9120283

**Chicago/Turabian Style**

Buongiorno, Rossana, Giulio Del Corso, Danila Germanese, Leonardo Colligiani, Lorenzo Python, Chiara Romei, and Sara Colantonio.
2023. "Enhancing COVID-19 CT Image Segmentation: A Comparative Study of Attention and Recurrence in UNet Models" *Journal of Imaging* 9, no. 12: 283.
https://doi.org/10.3390/jimaging9120283