Orga-Dete: An Improved Lightweight Deep Learning Model for Lung Organoid Detection and Classification
Abstract
1. Introduction
2. Methodology
2.1. YOLOv11n
2.2. Proposed Model Orga-Dete
2.2.1. Bi-Directional Feature Pyramid Network
2.2.2. Multi-Path Coordinate Attention
2.2.3. EMASlideLoss
2.2.4. Orga-Dete
3. Experiments and Results
3.1. Datasets
3.2. Training Strategies
3.3. Evaluation Metrics
- (1) Classification Error (Cls): The predicted box overlaps a GT sufficiently (IoUmax ≥ tf) but is assigned an incorrect class label.
- (2) Localization Error (Loc): The class label is correct, but the box lacks sufficient spatial alignment with the GT (tb ≤ IoUmax < tf).
- (3) Classification + Localization Error (Cls + Loc): Both the class prediction and spatial alignment are incorrect (tb ≤ IoUmax < tf), though the confidence score exceeds the background exclusion threshold.
- (4) Duplicate Detection Error (Dupe): Multiple detection boxes correspond to a single GT instance (IoUmax ≥ tf); only the highest-confidence box is valid, while the others are redundant.
- (5) Background Error (Bkg): The background is detected as a target, and the predicted bounding box satisfies IoUmax ≤ tb with every GT.
- (6) Missed Detection Error (Miss): A GT instance is not matched by any detection box and is not attributable to the preceding error categories.
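The taxonomy above can be sketched as a small decision function. This is a minimal simplification of the TIDE toolbox, assuming its standard thresholds (foreground tf = 0.5, background tb = 0.1); the function name and inputs are hypothetical, and the GT-side Miss category is not covered because it is assigned per ground-truth box, not per prediction.

```python
# TIDE-style error categorization for a single predicted box (sketch).
# Assumed standard TIDE thresholds: foreground t_f = 0.5, background t_b = 0.1.
T_F, T_B = 0.5, 0.1

def categorize(iou, class_correct, gt_already_matched):
    """Assign one prediction to a TIDE error type (or 'correct').

    iou: max IoU between this prediction and any GT box.
    class_correct: whether the predicted label matches that GT's label.
    gt_already_matched: whether a higher-confidence box already claimed the GT.
    """
    if iou >= T_F and class_correct:
        return "dupe" if gt_already_matched else "correct"
    if iou >= T_F:
        return "cls"          # right place, wrong label
    if iou >= T_B and class_correct:
        return "loc"          # right label, insufficient overlap
    if iou >= T_B:
        return "cls+loc"      # wrong label and insufficient overlap
    return "bkg"              # IoU <= t_b: background mistaken for a target
```

A GT box left unclaimed after all predictions are categorized, and not explained by any of the above, would then count as a Miss.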
3.4. Experimental Results and Analysis
3.4.1. Experimental Results and Comparative Analysis of Different Models
3.4.2. Comparative Experiments on Feature Pyramid Architectures
3.4.3. Comparative Experiments on Different Attention Mechanisms
3.4.4. Comparative Experiments on Different Loss Functions
3.4.5. Ablation Experiment Analysis
3.4.6. Performance on Other Organoid Datasets
4. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Jackson, E.; Lu, H. Three-dimensional models for studying development and disease: Moving on from organisms to organs-on-a-chip and organoids. Integr. Biol. 2016, 8, 672–683. [Google Scholar] [CrossRef]
- Zhao, Z.; Chen, X.; Dowbaj, A.M.; Sljukic, A.; Bratlie, K.; Lin, L.; Fong, E.L.S.; Balachander, G.M.; Chen, Z.; Soragni, A. Organoids. Nat. Rev. Methods Primers 2022, 2, 94. [Google Scholar] [CrossRef]
- Keshara, R.; Kim, Y.H.; Grapin-Botton, A. Organoid imaging: Seeing development and function. Annu. Rev. Cell Dev. Biol. 2022, 38, 447–466. [Google Scholar] [CrossRef]
- Park, T.; Kim, T.K.; Han, Y.D.; Kim, K.-A.; Kim, H.; Kim, H.S. Development of a deep learning based image processing tool for enhanced organoid analysis. Sci. Rep. 2023, 13, 19841. [Google Scholar] [CrossRef] [PubMed]
- Fei, K.; Zhang, J.; Yuan, J.; Xiao, P. Present application and perspectives of organoid imaging technology. Bioengineering 2022, 9, 121. [Google Scholar] [CrossRef] [PubMed]
- Borten, M.A.; Bajikar, S.S.; Sasaki, N.; Clevers, H.; Janes, K.A. Automated brightfield morphometry of 3D organoid populations by OrganoSeg. Sci. Rep. 2018, 8, 5319. [Google Scholar] [CrossRef] [PubMed]
- Bai, L.; Wu, Y.; Li, G.; Zhang, W.; Zhang, H.; Su, J. AI-enabled organoids: Construction, analysis, and application. Bioact. Mater. 2024, 31, 525–548. [Google Scholar] [CrossRef]
- Zhao, Z.-Q.; Zheng, P.; Xu, S.-t.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. In Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
- Kassis, T.; Hernandez-Gordillo, V.; Langer, R.; Griffith, L.G. OrgaQuant: Human intestinal organoid localization and quantification using deep convolutional neural networks. Sci. Rep. 2019, 9, 12479. [Google Scholar] [CrossRef]
- Abdul, L.; Rajasekar, S.; Lin, D.S.; Raja, S.V.; Sotra, A.; Feng, Y.; Liu, A.; Zhang, B. Deep-LUMEN assay–human lung epithelial spheroid classification from brightfield images using deep learning. Lab Chip 2020, 20, 4623–4631. [Google Scholar] [CrossRef]
- Kegeles, E.; Naumov, A.; Karpulevich, E.A.; Volchkov, P.; Baranov, P. Convolutional neural networks can predict retinal differentiation in retinal organoids. Front. Cell. Neurosci. 2020, 14, 171. [Google Scholar] [CrossRef]
- Wang, X.; Liao, J.; Yue, G.; He, L.; Wang, T.; Zhou, G.; Lei, B. Induced pluripotent stem cells detection via ensemble Yolo network. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 31 October–4 November 2021; pp. 3738–3741. [Google Scholar]
- Li, X.; Xu, Z.; Shen, X.; Zhou, Y.; Xiao, B.; Li, T.-Q. Detection of cervical cancer cells in whole slide images using deformable and global context aware faster RCNN-FPN. Curr. Oncol. 2021, 28, 3585–3601. [Google Scholar] [CrossRef]
- Bian, X.; Li, G.; Wang, C.; Shen, S.; Liu, W.; Lin, X.; Chen, Z.; Cheung, M.; Luo, X. OrgaNet: A deep learning approach for automated evaluation of organoids viability in drug screening. In Proceedings of the Bioinformatics Research and Applications: 17th International Symposium, ISBRA 2021, Shenzhen, China, 26–28 November 2021; Proceedings 17; pp. 411–423. [Google Scholar]
- Powell, R.T.; Moussalli, M.J.; Guo, L.; Bae, G.; Singh, P.; Stephan, C.; Shureiqi, I.; Davies, P.J. deepOrganoid: A brightfield cell viability model for screening matrix-embedded organoids. SLAS Discov. 2022, 27, 175–184. [Google Scholar] [CrossRef]
- Okamoto, T.; Natsume, Y.; Doi, M.; Nosato, H.; Iwaki, T.; Yamanaka, H.; Yamamoto, M.; Kawachi, H.; Noda, T.; Nagayama, S. Integration of human inspection and artificial intelligence-based morphological typing of patient-derived organoids reveals interpatient heterogeneity of colorectal cancer. Cancer Sci. 2022, 113, 2693–2703. [Google Scholar] [CrossRef]
- Abdul, L.; Xu, J.; Sotra, A.; Chaudary, A.; Gao, J.; Rajasekar, S.; Anvari, N.; Mahyar, H.; Zhang, B. D-CryptO: Deep learning-based analysis of colon organoid morphology from brightfield images. Lab Chip 2022, 22, 4118–4128. [Google Scholar] [CrossRef] [PubMed]
- Du, X.; Cui, W.; Song, J.; Cheng, Y.; Qi, Y.; Zhang, Y.; Li, Q.; Zhang, J.; Sha, L.; Ge, J. Sketch the Organoids from Birth to Death–Development of an Intelligent OrgaTracker System for Multi-Dimensional Organoid Analysis and Recreation. bioRxiv 2022. [Google Scholar] [CrossRef]
- Domènech-Moreno, E.; Brandt, A.; Lemmetyinen, T.T.; Wartiovaara, L.; Mäkelä, T.P.; Ollila, S. Tellu–an object-detector algorithm for automatic classification of intestinal organoids. Dis. Models Mech. 2023, 16, dmm049756. [Google Scholar] [CrossRef] [PubMed]
- Yang, R.; Du, Y.; Kwan, W.; Yan, R.; Shi, Q.; Zang, L.; Zhu, Z.; Zhang, J.; Li, C.; Yu, Y. A quick and reliable image-based AI algorithm for evaluating cellular senescence of gastric organoids. Cancer Biol. Med. 2023, 20, 519–536. [Google Scholar] [CrossRef]
- Leng, B.; Jiang, H.; Wang, B.; Wang, J.; Luo, G. Deep-Orga: An improved deep learning-based lightweight model for intestinal organoid detection. Comput. Biol. Med. 2024, 169, 107847. [Google Scholar] [CrossRef] [PubMed]
- Huang, K.; Li, M.; Li, Q.; Chen, Z.; Zhang, Y.; Gu, Z. Image-based profiling and deep learning reveal morphological heterogeneity of colorectal cancer organoids. Comput. Biol. Med. 2024, 173, 108322. [Google Scholar] [CrossRef]
- Sun, Y.; Zhang, H.; Huang, F.; Gao, Q.; Li, P.; Li, D.; Luo, G. Deliod: A lightweight detection model for intestinal organoids based on deep learning. Sci. Rep. 2025, 15, 5040. [Google Scholar] [CrossRef] [PubMed]
- Khanam, R.; Hussain, M. Yolov11: An overview of the key architectural enhancements. arXiv 2024. [Google Scholar] [CrossRef]
- Varghese, R.; Sambath, M. Yolov8: A novel object detection algorithm with enhanced performance and robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; pp. 1–6. [Google Scholar]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018. [Google Scholar] [CrossRef]
- Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Jocher, G.; Stoken, A.; Borovec, J.; Changyu, L.; Hogan, A.; Diaconu, L.; Poznanski, J.; Yu, L.; Rai, P.; Ferriday, R. ultralytics/yolov5: v3.0; Zenodo: Geneva, Switzerland, 2020. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
- Fang, H.; Liao, Z.; Wang, X.; Chang, Y.; Yan, L. Differentiated attention guided network over hierarchical and aggregated features for intelligent UAV surveillance. IEEE Trans. Ind. Inform. 2023, 19, 9909–9920. [Google Scholar] [CrossRef]
- Huang, J.; Zhang, W.; Jin, W.; Hu, H. Surface defect detection of planar optical components based on OPT-YOLO. Opt. Lasers Eng. 2025, 190, 108974. [Google Scholar] [CrossRef]
- Hou, Q.; Zhou, D.; Feng, J. Coordinate attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722. [Google Scholar]
- Jiang, T.; Zhou, J.; Xie, B.; Liu, L.; Ji, C.; Liu, Y.; Liu, B.; Zhang, B. Improved YOLOv8 model for lightweight pigeon egg detection. Animals 2024, 14, 1226. [Google Scholar] [CrossRef]
- Yu, Z.; Huang, H.; Chen, W.; Su, Y.; Liu, Y.; Wang, X. Yolo-facev2: A scale and occlusion aware face detector. Pattern Recognit. 2024, 155, 110714. [Google Scholar] [CrossRef]
- Buslaev, A.; Iglovikov, V.I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; Kalinin, A.A. Albumentations: Fast and flexible image augmentations. Information 2020, 11, 125. [Google Scholar] [CrossRef]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J. Yolov10: Real-time end-to-end object detection. Adv. Neural Inf. Process. Syst. 2024, 37, 107984–108011. [Google Scholar]
- Tian, Y.; Ye, Q.; Doermann, D. Yolov12: Attention-centric real-time object detectors. arXiv 2025, arXiv:2502.12524. [Google Scholar] [CrossRef]
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. Detrs beat yolos on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974. [Google Scholar]
- Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Part v 13; pp. 740–755. [Google Scholar]
- Bolya, D.; Foley, S.; Hays, J.; Hoffman, J. Tide: A general toolbox for identifying object detection errors. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Part III 16; pp. 558–573. [Google Scholar]
| Software Environment | Hardware Environment |
|---|---|
| Operating system: Windows 11 | CPU: AMD Ryzen 5 7500F |
| Programming language: Python 3.10 | GPU: RTX 4070 Ti Super (16 GB) × 1 |
| Deep learning framework: PyTorch 2.3.0 | |
| Accelerated environment: CUDA 12.1 | |
| Parameters | Values |
|---|---|
| lr0 | 0.01 |
| lrf | 0.01 |
| Momentum | 0.937 |
| Weight decay | 0.0005 |
| Warmup epochs | 3 |
| Warmup momentum | 0.8 |
| Batch size | 8 |
| Epochs | 300 |
| Workers | 4 |
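The hyperparameters in the table above correspond directly to training arguments in the Ultralytics framework. The sketch below is a hypothetical launch script, not the authors' actual code: the dataset config name `organoid.yaml` and the starting checkpoint are placeholders.

```python
# Hypothetical training launch matching the hyperparameter table above,
# assuming the Ultralytics YOLO API; "organoid.yaml" is a placeholder
# dataset config, not a file released with the paper.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")            # YOLOv11n baseline weights
model.train(
    data="organoid.yaml",             # placeholder dataset config
    epochs=300, batch=8, workers=4,
    lr0=0.01, lrf=0.01,               # initial LR and final LR factor
    momentum=0.937, weight_decay=0.0005,
    warmup_epochs=3, warmup_momentum=0.8,
)
```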
| Model | Optimizer | Initial Learning Rate | Final Learning Rate | Batch Size | Epoch Number |
|---|---|---|---|---|---|
| YOLOv8n | SGD | 0.01 | 0.01 | 8 | 300 |
| YOLOv10n | SGD | 0.01 | 0.01 | 8 | 300 |
| YOLOv11n | SGD | 0.01 | 0.01 | 8 | 300 |
| YOLOv12n | SGD | 0.01 | 0.01 | 8 | 300 |
| Faster R-CNN | Adam | 0.0001 | 0.01 | 8 | 300 |
| RTDETR-r18 | AdamW | 0.0001 | 1.0 | 8 | 300 |
| Ours | SGD | 0.01 | 0.01 | 8 | 300 |
| Model | Lumen | No Lumen | mAP@0.5 | APsmall | APmedium | APall |
|---|---|---|---|---|---|---|
| YOLOv8n | 74.0 ± 0.2% | 82.4 ± 0.3% | 78.2 ± 0.2% | 46.1 ± 0.1% | 60.3 ± 0.2% | 59.4 ± 0.2% |
| YOLOv10n | 70.6 ± 0.2% | 84.2 ± 0.1% | 77.3 ± 0.2% | 48.2 ± 0.2% | 55.7 ± 0.3% | 54.6 ± 0.2% |
| YOLOv11n (without data augmentation) | 65.8 ± 0.1% | 82.8 ± 0.2% | 74.3 ± 0.1% | 40.1 ± 0.2% | 55.8 ± 0.1% | 54.2 ± 0.2% |
| YOLOv11n | 70.8 ± 0.2% | 84.9 ± 0.2% | 77.9 ± 0.1% | 45.1 ± 0.2% | 58.3 ± 0.1% | 58.1 ± 0.2% |
| YOLOv12n | 70.1 ± 0.1% | 83.7 ± 0.1% | 76.9 ± 0.1% | 41.6 ± 0.2% | 54.4 ± 0.1% | 53.6 ± 0.1% |
| RTDETR-r18 | 70.8 ± 0.1% | 84.1 ± 0.2% | 77.4 ± 0.1% | 48.1 ± 0.1% | 59.5 ± 0.1% | 57.4 ± 0.1% |
| Faster R-CNN | 69.4 ± 0.2% | 72.6 ± 0.1% | 71.0 ± 0.1% | 26.1 ± 0.2% | 45.8 ± 0.3% | 44.1 ± 0.3% |
| Ours (without data augmentation) | 73.4 ± 0.1% | 83.5 ± 0.3% | 78.5 ± 0.2% | 49.8 ± 0.1% | 60.5 ± 0.1% | 59.6 ± 0.1% |
| Ours | 76.4 ± 0.1% | 86.5 ± 0.2% | 81.4 ± 0.2% | 51.6 ± 0.1% | 63.5 ± 0.2% | 62.8 ± 0.1% |
| Model | Cls | Loc | Both | Dupe | Bkg | Miss |
|---|---|---|---|---|---|---|
| YOLOv8n | 6.33 ± 0.44 | 0.70 ± 0.18 | 0.41 ± 0.12 | 0.00 ± 0.00 | 11.06 ± 0.74 | 0.00 ± 0.00 |
| YOLOv10n | 6.78 ± 0.47 | 0.60 ± 0.17 | 0.53 ± 0.14 | 0.01 ± 0.01 | 12.03 ± 0.78 | 0.00 ± 0.00 |
| YOLOv11n | 6.60 ± 0.45 | 0.63 ± 0.18 | 0.47 ± 0.10 | 0.01 ± 0.00 | 12.31 ± 0.81 | 0.00 ± 0.00 |
| YOLOv12n | 7.11 ± 0.49 | 0.68 ± 0.20 | 0.41 ± 0.13 | 0.00 ± 0.00 | 11.14 ± 0.76 | 0.00 ± 0.00 |
| RTDETR-r18 | 13.66 ± 0.75 | 0.57 ± 0.18 | 0.46 ± 0.14 | 0.01 ± 0.01 | 8.61 ± 0.66 | 0.00 ± 0.00 |
| Faster R-CNN | 32.88 ± 1.36 | 2.19 ± 0.37 | 1.73 ± 0.30 | 2.29 ± 0.37 | 2.84 ± 0.41 | 0.20 ± 0.12 |
| Ours | 6.23 ± 0.44 | 0.44 ± 0.15 | 0.32 ± 0.11 | 0.00 ± 0.00 | 10.61 ± 0.72 | 0.00 ± 0.00 |
| Model | Params (M) | GFLOPs | FPS | Model Size (MB) |
|---|---|---|---|---|
| YOLOv8n | 3.0 | 8.1 | 303 ± 5 | 6.0 |
| YOLOv10n | 2.7 | 8.2 | 416 ± 8 | 5.5 |
| YOLOv11n | 2.58 | 6.3 | 280 ± 6 | 5.2 |
| YOLOv12n | 2.5 | 5.8 | 231 ± 4 | 5.2 |
| RTDETR-r18 | 19.9 | 56.9 | 100 ± 3 | 38.6 |
| Faster R-CNN | 28.28 | 149.6 | 69 ± 3 | 108 |
| Ours | 2.25 | 6.3 | 246 ± 5 | 4.6 |
| Model | mAP@0.5 | Params (M) | GFLOPs |
|---|---|---|---|
| YOLOv11n | 77.9 ± 0.1% | 2.58 | 6.3 |
| YOLOv11n + BiFPN | 79.9 ± 0.1% | 1.92 | 6.3 |
| YOLOv11n + BiMAFPN | 78.8 ± 0.2% | 2.12 | 6.6 |
| YOLOv11n + EMBSFPN | 79.1 ± 0.1% | 2.60 | 6.8 |
| Model | mAP@0.5 | Params (M) | GFLOPs |
|---|---|---|---|
| YOLOv11n + BiFPN | 79.9 ± 0.1% | 1.92 | 6.3 |
| YOLOv11n + BiFPN + CA | 80.1 ± 0.1% | 1.93 | 6.3 |
| YOLOv11n + BiFPN + CBAM | 78.3 ± 0.3% | 1.99 | 6.3 |
| YOLOv11n + BiFPN + SimAM | 77.1 ± 0.2% | 1.92 | 6.3 |
| YOLOv11n + BiFPN + MPCA | 80.7 ± 0.1% | 2.25 | 6.3 |
| IoU | Lumen | No Lumen | mAP@0.5 |
|---|---|---|---|
| CIoU | 75.7 ± 0.3% | 84.5 ± 0.2% | 80.1 ± 0.2% |
| EIoU | 72.6 ± 0.2% | 83.4 ± 0.0% | 78.0 ± 0.1% |
| GIoU | 75.1 ± 0.2% | 81.9 ± 0.2% | 78.5 ± 0.2% |
| Focal loss | 76.8 ± 0.1% | 85.4 ± 0.3% | 81.1 ± 0.2% |
| EMASlideLoss | 76.2 ± 0.2% | 86.5 ± 0.1% | 81.4 ± 0.1% |
| A | B | C | D | Lumen | No Lumen | mAP@0.5 | APsmall | APmedium | APall |
|---|---|---|---|---|---|---|---|---|---|
| ✓ | 70.8 ± 0.2% | 84.9 ± 0.2% | 77.9 ± 0.2% | 45.1 ± 0.2% | 58.8 ± 0.1% | 58.1 ± 0.1% | |||
| ✓ | ✓ | 73.8 ± 0.1% | 86.0 ± 0.2% | 79.9 ± 0.1% | 48.8 ± 0.3% | 61.4 ± 0.2% | 60.7 ± 0.2% | ||
| ✓ | ✓ | 72.3 ± 0.2% | 86.1 ± 0.1% | 79.2 ± 0.1% | 49.1 ± 0.1% | 59.9 ± 0.1% | 59.1 ± 0.2% | ||
| ✓ | ✓ | 74.9 ± 0.2% | 83.7 ± 0.1% | 79.3 ± 0.1% | 45.4 ± 0.2% | 59.0 ± 0.2% | 58.5 ± 0.1% | ||
| ✓ | ✓ | ✓ | 74.3 ± 0.1% | 87.0 ± 0.2% | 80.7 ± 0.1% | 50.7 ± 0.2% | 62.7 ± 0.2% | 62.1 ± 0.2% | |
| ✓ | ✓ | ✓ | ✓ | 76.2 ± 0.1% | 86.5 ± 0.2% | 81.4 ± 0.2% | 51.6 ± 0.1% | 63.5 ± 0.2% | 62.8 ± 0.1% |
| A | B | C | D | Params (M) | GFLOPs | FPS | Model Size (MB) |
|---|---|---|---|---|---|---|---|
| ✓ | 2.58 | 6.3 | 280 ± 6 | 5.2 | |||
| ✓ | ✓ | 1.92 | 6.3 | 269 ± 9 | 4 | ||
| ✓ | ✓ | 2.89 | 6.4 | 300 ± 13 | 5.3 | ||
| ✓ | ✓ | 2.58 | 6.3 | 276 ± 8 | 5.2 | ||
| ✓ | ✓ | ✓ | 2.25 | 6.3 | 243 ± 4 | 4.6 | |
| ✓ | ✓ | ✓ | ✓ | 2.25 | 6.3 | 246 ± 5 | 4.6 |
| Dataset | Detection Model | mAP@0.5 |
|---|---|---|
| Dataset (b) | YOLOv8n | 91.3 ± 0.4% |
| | YOLOv11n | 91.6 ± 0.3% |
| | RTDETR-r18 | 90.1 ± 0.2% |
| | YOLOv12n | 91.7 ± 0.3% |
| | Ours | 92.5 ± 0.3% |
| Dataset (c) | YOLOv8n | 84.6 ± 0.2% |
| | YOLOv11n | 83.5 ± 0.2% |
| | RTDETR-r18 | 83.2 ± 0.1% |
| | YOLOv12n | 82.3 ± 0.3% |
| | Ours | 84.4 ± 0.3% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Huang, X.; Gao, Q.; Zhang, H.; Min, F.; Li, D.; Luo, G. Orga-Dete: An Improved Lightweight Deep Learning Model for Lung Organoid Detection and Classification. Appl. Sci. 2025, 15, 8377. https://doi.org/10.3390/app15158377

