Deep Learning-Based Automated Measurement of Murine Bone Length in Radiographs
Abstract
1. Introduction
2. Materials and Methods
2.1. Datasets and Preparation
2.1.1. X-ray Datasets and Annotations for Algorithm Design
2.1.2. Mouse Mutagenesis, Genotyping, and Radiography
2.2. Bone Length Measurement Model
2.2.1. Keypoint Detection
- Objectness ROI header: The objectness ROI header detects the mouse's position and extracts the region of interest (ROI) containing the target keypoints. We used the default MultiAlign-ROI implementation from the Faster R-CNN algorithm [25]. MultiAlign-ROI maps candidate regions of varying sizes, generated by the region proposal network, onto a fixed-size feature map. Unlike traditional pooling methods, it uses bilinear interpolation (estimating values at non-integer positions as a weighted average of the surrounding pixel values) to align each ROI precisely to the feature map grid, mitigating spatial misalignment and improving the accuracy of object detection and recognition.
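The bilinear interpolation step underlying this ROI alignment can be illustrated with a minimal pure-Python sketch (not the library implementation): a value at a fractional coordinate is the average of the four surrounding pixels, weighted by proximity.

```python
def bilinear_sample(img, x, y):
    """Sample a 2D image (list of rows) at fractional coordinates (x, y)
    by bilinearly interpolating the four surrounding pixel values."""
    x0, y0 = int(x), int(y)      # top-left integer neighbor
    x1, y1 = x0 + 1, y0 + 1      # bottom-right integer neighbor
    dx, dy = x - x0, y - y0      # fractional offsets in [0, 1)
    return (img[y0][x0] * (1 - dx) * (1 - dy) +
            img[y0][x1] * dx       * (1 - dy) +
            img[y1][x0] * (1 - dx) * dy +
            img[y1][x1] * dx       * dy)

# A 2x2 patch; sampling at the exact center averages all four pixels.
patch = [[0.0, 1.0],
         [2.0, 3.0]]
print(bilinear_sample(patch, 0.5, 0.5))  # 1.5
```

Because sampled coordinates need not land on integer grid positions, ROI features are extracted without the rounding (quantization) that causes spatial misalignment in earlier pooling schemes.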
- Keypoint detection header: The keypoint detection header locates the keypoint coordinates inside the ROI produced by the objectness ROI header. It transforms the backbone feature maps within the ROI into heatmaps and returns the coordinates of the brightest points. The network consists of eight repeated convolution + batch normalization + activation blocks, followed by a transposed convolution layer that generates the keypoint coordinates and confidence scores.
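The final heatmap-to-coordinate step can be sketched in a few lines of plain Python (an illustrative simplification, not the paper's code): the predicted keypoint is the location of the brightest heatmap cell, and the value at that cell serves as the confidence score.

```python
def heatmap_to_keypoint(heatmap):
    """Return the (x, y) position of the brightest cell in a 2D heatmap
    together with its value, used as the keypoint confidence score."""
    best_x, best_y, best_val = 0, 0, float("-inf")
    for y, row in enumerate(heatmap):
        for x, val in enumerate(row):
            if val > best_val:
                best_x, best_y, best_val = x, y, val
    return (best_x, best_y), best_val

heat = [[0.05, 0.10, 0.05],
        [0.10, 0.90, 0.20],
        [0.05, 0.20, 0.10]]
print(heatmap_to_keypoint(heat))  # ((1, 1), 0.9)
```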
- Loss functions: The total loss combines three terms: objectness loss, class loss, and keypoint loss.
- Objectness loss is a cross-entropy loss that measures how well the model distinguishes mouse objects from non-mouse objects. For each predicted bounding box, the model outputs a probability that the box contains a mouse; objectness loss compares this predicted probability distribution with the ground-truth label, assigning a low loss to correct predictions and a high loss otherwise.
- Class loss evaluates classification accuracy by comparing the predicted class with the actual class label. For each detected object, the model outputs a probability distribution over classes; class loss is the cross-entropy between this distribution and the one-hot encoding of the true label. In this work, cross-entropy class loss measures the accuracy of the model's predicted mouse body positions. Minimizing it drives the model to learn more discriminative features for separating object categories, improving the accuracy and robustness of the detector.
- Keypoint loss evaluates the accuracy of predicted keypoint coordinates against the labeled ground-truth coordinates. For each keypoint, the model predicts a coordinate; keypoint loss is the mean squared error (MSE) between the predicted and labeled coordinates, so smaller values indicate more accurate localization. Minimizing keypoint loss trains the model to locate object keypoints precisely, which directly improves performance on pose estimation and keypoint detection.
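The three loss terms above can be sketched together in plain Python. This is a minimal illustration with made-up probabilities and coordinates, not the paper's training code, and it omits the per-term weighting a real detector would apply.

```python
import math

def cross_entropy(pred_probs, true_idx):
    """Cross-entropy against a one-hot target: the negative log of the
    probability the model assigned to the true class."""
    return -math.log(pred_probs[true_idx])

def mse(pred_pts, true_pts):
    """Mean squared error between predicted and labeled keypoint coordinates."""
    errs = [(px - tx) ** 2 + (py - ty) ** 2
            for (px, py), (tx, ty) in zip(pred_pts, true_pts)]
    return sum(errs) / len(errs)

# Objectness: a box predicted 90% likely to contain a mouse; ground truth says mouse (index 1).
obj_loss = cross_entropy([0.1, 0.9], 1)
# Class loss: same cross-entropy form, over the object-category distribution (true class 0).
cls_loss = cross_entropy([0.8, 0.15, 0.05], 0)
# Keypoint loss: MSE between a predicted and an annotated coordinate.
kpt_loss = mse([(10.2, 20.1)], [(10.0, 20.0)])
total = obj_loss + cls_loss + kpt_loss
```

Confident, correct predictions make each cross-entropy term small, while keypoints far from their annotations inflate the MSE term, so minimizing the combined loss trades off detection, classification, and localization accuracy jointly.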
2.2.2. Pre-Processing
2.2.3. Post-Processing
2.3. Implementation
2.4. Evaluations
3. Results
3.1. Keypoint Detection Accuracy
3.2. Bone Length Accuracy
3.3. Consistency across a Large Discovery Cohort
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Rios, J.J.; Denton, K.; Russell, J.; Kozlitina, J.; Ferreira, C.R.; Lewanda, A.F.; Mayfield, J.E.; Moresco, E.; Ludwig, S.; Tang, M.; et al. Germline Saturation Mutagenesis Induces Skeletal Phenotypes in Mice. J. Bone Miner. Res. 2021, 36, 1548–1565.
- Rios, J.J.; Denton, K.; Yu, H.; Manickam, K.; Garner, S.; Russell, J.; Ludwig, S.; Rosenfeld, J.A.; Liu, P.; Munch, J.; et al. Saturation mutagenesis defines novel mouse models of severe spine deformity. Dis. Model. Mech. 2021, 14, dmm048901.
- Fitzgerald, R. Error in radiology. Clin. Radiol. 2001, 56, 938–946.
- Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248.
- Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A guide to deep learning in healthcare. Nat. Med. 2019, 25, 24–29.
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
- Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nat. Med. 2022, 28, 31–38.
- van de Worp, W.R.P.H.; van der Heyden, B.; Lappas, G.; van Helvoort, A.; Theys, J.; Schols, A.M.W.J.; Verhaegen, F.; Langen, R.C.J. Deep Learning Based Automated Orthotopic Lung Tumor Segmentation in Whole-Body Mouse CT-Scans. Cancers 2021, 13, 4585.
- Chen, R.J.; Ding, T.; Lu, M.Y.; Williamson, D.F.K.; Jaume, G.; Song, A.H.; Chen, B.W.; Zhang, A.D.; Shao, D.; Shaban, M.; et al. Towards a general-purpose foundation model for computational pathology. Nat. Med. 2024, 30, 850–862.
- Lu, M.Y.; Chen, B.W.; Williamson, D.F.K.; Chen, R.J.; Liang, I.; Ding, T.; Jaume, G.; Odintsov, I.; Le, L.P.; Gerber, G.; et al. A visual-language foundation model for computational pathology. Nat. Med. 2024, 30, 863–874.
- Madabhushi, A.; Lee, G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med. Image Anal. 2016, 33, 170–175.
- Yang, R.X.; Yu, Y.Y. Artificial Convolutional Neural Network in Object Detection and Semantic Segmentation for Medical Imaging Analysis. Front. Oncol. 2021, 11, 638182.
- Gong, Y.F.; Luo, J.; Shao, H.L.; Li, Z.X. A transfer learning object detection model for defects detection in X-ray images of spacecraft composite structures. Compos. Struct. 2022, 284, 115136.
- Ma, C.J.; Zhuo, L.; Li, J.F.; Zhang, Y.T.; Zhang, J. EAOD-Net: Effective anomaly object detection networks for X-ray images. IET Image Process. 2022, 16, 2638–2651.
- Hardalaç, F.; Uysal, F.; Peker, O.; Çiçeklidag, M.; Tolunay, T.; Tokgöz, N.; Kutbay, U.; Demirciler, B.; Mert, F. Fracture Detection in Wrist X-ray Images Using Deep Learning-Based Object Detection Models. Sensors 2022, 22, 1285.
- Ramachandran, S.S.; George, J.; Skaria, S.; Varun, V.V. Using YOLO based deep learning network for real time detection and localization of lung nodules from low dose CT scans. In Medical Imaging 2018: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2018; Volume 10575, pp. 347–355.
- Gupta, K.; Bajaj, V. Deep learning models-based CT-scan image classification for automated screening of COVID-19. Biomed. Signal Process. 2023, 80, 104268.
- Yang, A.Q.; Pan, F.; Saragadam, V.; Dao, D.; Hui, Z.; Chang, J.H.R.; Sankaranarayanan, A.C. SliceNets—A Scalable Approach for Object Detection in 3D CT Scans. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 335–344.
- Chegraoui, H.; Philippe, C.; Dangouloff-Ros, V.; Grigis, A.; Calmon, R.; Boddaert, N.; Frouin, F.; Grill, J.; Frouin, V. Object Detection Improves Tumour Segmentation in MR Images of Rare Brain Tumours. Cancers 2021, 13, 6113.
- Terzi, R. An Ensemble of Deep Learning Object Detection Models for Anatomical and Pathological Regions in Brain MRI. Diagnostics 2023, 13, 1494.
- Dubost, F.; Adams, H.; Yilmaz, P.; Bortsova, G.; van Tulder, G.; Ikram, M.A.; Niessen, W.; Vernooij, M.W.; de Bruijne, M. Weakly supervised object detection with 2D and 3D regression neural networks. Med. Image Anal. 2020, 65, 101767.
- Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H. Artificial intelligence in radiology. Nat. Rev. Cancer 2018, 18, 500–510.
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 213–229.
- Kamath, A.; Singh, M.; LeCun, Y.; Synnaeve, G.; Misra, I.; Carion, N. MDETR—Modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1780–1790.
- Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable DETR: Deformable transformers for end-to-end object detection. arXiv 2020, arXiv:2010.04159.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Shafiee, M.J.; Chywl, B.; Li, F.; Wong, A. Fast YOLO: A fast you only look once system for real-time embedded object detection in video. arXiv 2017, arXiv:1709.05943.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Ren, S.Q.; He, K.M.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497.
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397.
- Wang, T.; Bu, C.H.; Hildebrand, S.; Jia, G.; Siggs, O.M.; Lyon, S.; Pratt, D.; Scott, L.; Russell, J.; Ludwig, S.; et al. Probability of phenotypically detectable protein damage by ENU-induced mutations in the Mutagenetix database. Nat. Commun. 2018, 9, 441.
- Xu, D.; Lyon, S.; Bu, C.H.; Hildebrand, S.; Choi, J.H.; Zhong, X.; Liu, A.; Turer, E.E.; Zhang, Z.; Russell, J.; et al. Thousands of induced germline mutations affecting immune cells identified by automated meiotic mapping coupled with machine learning. Proc. Natl. Acad. Sci. USA 2021, 118, e2106786118.
- Tan, M.X.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.M.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- PyTorch Vision. Available online: https://github.com/pytorch/vision/tree/main/torchvision/models/detection (accessed on 21 December 2023).
| Dataset | No. Images | No. Mice |
| --- | --- | --- |
| Training | 64 | 126 |
| Validation | 11 | 22 |
| Testing | 19 | 37 |
| External Testing | 592 | 1178 |
| Dataset | Objectness Accuracy | Keypoints MSE |
| --- | --- | --- |
| Validation | 1.0000 | 0.0257 |
| Testing | 1.0000 | 0.0242 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Rong, R.; Denton, K.; Jin, K.W.; Quan, P.; Wen, Z.; Kozlitina, J.; Lyon, S.; Wang, A.; Wise, C.A.; Beutler, B.; et al. Deep Learning-Based Automated Measurement of Murine Bone Length in Radiographs. Bioengineering 2024, 11, 670. https://doi.org/10.3390/bioengineering11070670