Anatomical Alignment of Femoral Radiographs Enables Robust AI-Powered Detection of Incomplete Atypical Femoral Fractures
Abstract
1. Introduction
- We propose a novel preprocessing pipeline for the anatomical normalization of femoral radiographs.
- We demonstrate the substantial performance impact of this data-driven preprocessing through a comprehensive comparison of detection architectures.
- We demonstrate successful zero-shot generalization for incomplete atypical femoral fracture (IAFF) detection on a previously unseen imaging domain (standing AP radiographs), validating the robustness of our method.
2. Related Works
2.1. AI for AFF and IAFF Diagnosis
2.2. Preprocessing Techniques in Medical Images
2.3. Domain Generalization Strategy
3. Methods
3.1. Automated Preprocessing Pipeline for Femoral Alignment
- Step 1: Femur Segmentation
- Step 2: Femur Axis Extraction
- Step 3: Axis Alignment via Rotation
- Step 4: Region of Interest Cropping and Normalization
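Steps 2 through 4 can be sketched with off-the-shelf tools the paper cites (scikit-image's RANSAC and rotation utilities). This is a hypothetical illustration, not the authors' implementation: the `align_femur` helper, its `residual_threshold`, and the `pad` margin are assumptions.

```python
# Hedged sketch of Steps 2-4: fit the femoral shaft axis on the segmentation
# mask with RANSAC, rotate the image so the axis is vertical, then crop a
# padded ROI and normalize intensities. Thresholds/margins are illustrative.
import numpy as np
from skimage.measure import ransac, LineModelND
from skimage.transform import rotate

def align_femur(image, mask, pad=16):
    # Step 2: robust line fit on foreground pixels; RANSAC suppresses
    # outliers caused by imperfect segmentation.
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    model, _ = ransac(pts, LineModelND, min_samples=2,
                      residual_threshold=5, max_trials=1000)
    dx, dy = model.params[1]                    # unit direction of the axis
    # Step 3: rotate so the fitted axis aligns with the vertical image axis.
    angle_deg = float(np.degrees(np.arctan2(dx, dy)))
    rot_img = rotate(image, angle_deg, resize=True, order=1,
                     preserve_range=True)
    rot_mask = rotate(mask.astype(float), angle_deg, resize=True,
                      order=0, preserve_range=True) > 0.5
    # Step 4: padded bounding-box crop around the rotated femur.
    ys, xs = np.nonzero(rot_mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, rot_img.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, rot_img.shape[1])
    roi = rot_img[y0:y1, x0:x1]
    # Min-max intensity normalization to [0, 1].
    return (roi - roi.min()) / (roi.max() - roi.min() + 1e-8)
```

The line's direction sign is ambiguous (RANSAC may return either orientation), but a 180-degree flip still leaves the shaft vertical, which is all the alignment step requires.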
3.2. Detection Models
3.2.1. Faster R-CNN
3.2.2. YOLO
4. Experiments
4.1. Datasets
4.1.1. Femoral X-Ray
4.1.2. Alignment X-Ray
4.1.3. Standing AP X-Ray
4.2. Training Details
4.3. Evaluation Metrics
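The precision, recall, and F1 values reported in Section 5 presuppose an IoU-based matching between predicted and ground-truth boxes. A minimal sketch of one common greedy matching protocol at a fixed IoU threshold follows; the paper's exact protocol may differ, and `prf1` is a hypothetical helper name.

```python
# TP/FP/FN counting for detections: a prediction is a TP if it overlaps an
# unmatched ground-truth box with IoU >= thr; leftovers are FP/FN.
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def prf1(preds, gts, thr=0.5):
    matched, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thr:
            tp += 1
            matched.add(best_j)
    fp, fn = len(preds) - tp, len(gts) - tp
    prec = tp / (tp + fp + 1e-9)
    rec = tp / (tp + fn + 1e-9)
    f1 = 2 * prec * rec / (prec + rec + 1e-9)
    return prec, rec, f1
```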
5. Results
5.1. Analysis of Data Changes After Preprocessing
5.2. Bone Segmentation
5.3. Analysis of Preprocessing Effect Within the Source Domain
5.4. Visualization of Femoral and Alignment X-Ray
5.5. Error Analysis
5.6. Zero-Shot Evaluation of Standing AP X-Ray
5.7. Visualization of Standing AP X-Ray
5.8. Ablation Study on Anchor Sizes
5.9. Computational Performance
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| IAFF | Incomplete Atypical Femoral Fracture |
| RANSAC | Random Sample Consensus |
| TP | True Positive |
| TN | True Negative |
| FP | False Positive |
| FN | False Negative |
References
- Schilcher, J.; Aspenberg, P. Incidence of stress fractures of the femoral shaft in women treated with bisphosphonate. Acta Orthop. 2009, 80, 413–415.
- Dell, R.M.; Adams, A.L.; Greene, D.F.; Funahashi, T.T.; Silverman, S.L.; Eisemon, E.O.; Zhou, H.; Burchette, R.J.; Ott, S.M. Incidence of atypical nontraumatic diaphyseal fractures of the femur. J. Bone Miner. Res. 2012, 27, 2544–2550.
- Adler, R.A.; El-Hajj Fuleihan, G.; Bauer, D.C.; Camacho, P.M.; Clarke, B.L.; Clines, G.A.; Compston, J.E.; Drake, M.T.; Edwards, B.J.; Favus, M.J.; et al. Managing osteoporosis in patients on long-term bisphosphonate treatment: Report of a task force of the American Society for Bone and Mineral Research. J. Bone Miner. Res. 2016, 31, 16–35.
- van de Laarschot, D.M.; McKenna, M.J.; Abrahamsen, B.; Langdahl, B.; Cohen-Solal, M.; Guañabens, N.; Eastell, R.; Ralston, S.H.; Zillikens, M.C. Medical management of patients after atypical femur fractures: A systematic review and recommendations from the European Calcified Tissue Society. J. Clin. Endocrinol. Metab. 2020, 105, 1682–1699.
- Bégin, M.J.; Audet, M.C.; Chevalley, T.; Portela, M.; Padlina, I.; Hannouche, D.; Ing Lorenzini, K.; Meier, R.; Peter, R.; Uebelhart, B.; et al. Fracture risk following an atypical femoral fracture. J. Bone Miner. Res. 2020, 37, 87–94.
- Cheung, A.M.; McKenna, M.J.; van de Laarschot, D.M.; Zillikens, M.C.; Peck, V.; Srighanthan, J.; Lewiecki, E.M. Detection of atypical femur fractures. J. Clin. Densitom. 2019, 22, 506–516.
- Kim, T.; Moon, N.H.; Goh, T.S.; Jung, I.D. Detection of incomplete atypical femoral fracture on anteroposterior radiographs via explainable artificial intelligence. Sci. Rep. 2023, 13, 10415.
- Schilcher, J.; Nilsson, A.; Andlid, O.; Eklund, A. Fusion of electronic health records and radiographic images for a multimodal deep learning prediction model of atypical femur fractures. Comput. Biol. Med. 2024, 168, 107704.
- Tanzi, L.; Vezzetti, E.; Moreno, R.; Moos, S. X-ray bone fracture classification using deep learning: A baseline for designing a reliable approach. Appl. Sci. 2020, 10, 1507.
- Zdolsek, G.; Chen, Y.; Bögl, H.P.; Wang, C.; Woisetschläger, M.; Schilcher, J. Deep neural networks with promising diagnostic accuracy for the classification of atypical femoral fractures. Acta Orthop. 2021, 92, 394–400.
- Murphy, E.; Ehrhardt, B.; Gregson, C.L.; von Arx, O.; Hartley, A.; Whitehouse, M.; Thomas, M.; Stenhouse, G.; Chesser, T.; Budd, C.; et al. Machine learning outperforms clinical experts in classification of hip fractures. Sci. Rep. 2022, 12, 2058.
- Wang, C.T.; Huang, B.; Thogiti, N.; Zhu, W.X.; Chang, C.H.; Pao, J.L.; Lai, F. Successful real-world application of an osteoarthritis classification deep-learning model using 9210 knees—An orthopedic surgeon's view. J. Orthop. Res. 2023, 41, 737–746.
- Teng, Y.; Pan, D.; Zhao, W. Application of deep learning ultrasound imaging in monitoring bone healing after fracture surgery. J. Radiat. Res. Appl. Sci. 2023, 16, 100493.
- Guan, H.; Liu, M. Domain adaptation for medical image analysis: A survey. IEEE Trans. Biomed. Eng. 2021, 69, 1173–1185.
- Yıldız Potter, İ.; Yeritsyan, D.; Mahar, S.; Kheir, N.; Vaziri, A.; Putman, M.; Rodriguez, E.K.; Wu, J.; Nazarian, A.; Vaziri, A. Proximal femur fracture detection on plain radiography via feature pyramid networks. Sci. Rep. 2024, 14, 12046.
- Kuo, R.Y.; Harrison, C.; Curran, T.A.; Jones, B.; Freethy, A.; Cussons, D.; Stewart, M.; Collins, G.S.; Furniss, D. Artificial intelligence in fracture detection: A systematic review and meta-analysis. Radiology 2022, 304, 50–62.
- Valliani, A.A.; Gulamali, F.F.; Kwon, Y.J.; Martini, M.L.; Wang, C.; Kondziolka, D.; Chen, V.J.; Wang, W.; Costa, A.B.; Oermann, E.K. Deploying deep learning models on unseen medical imaging using adversarial domain adaptation. PLoS ONE 2022, 17, e0273262.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929.
- Nguyen, H.H.; Le, D.T.; Shore-Lorenti, C.; Chen, C.; Schilcher, J.; Eklund, A.; Zebaze, R.; Milat, F.; Sztal-Mazer, S.; Girgis, C.M.; et al. AFFnet: A deep convolutional neural network for the detection of atypical femur fractures from anteroposterior radiographs. Bone 2024, 187, 117215.
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Kanopoulos, N.; Vasanthavada, N.; Baker, R.L. Design of an image edge detection filter using the Sobel operator. IEEE J. Solid-State Circuits 1988, 23, 358–367.
- Chang, J.; Lee, J.; Kwon, D.; Lee, J.H.; Lee, M.; Jeong, S.; Kim, J.W.; Jung, H.; Oh, C.W. Context-aware level-wise feature fusion network with anomaly focus for precise classification of incomplete atypical femoral fractures in X-ray images. Mathematics 2024, 12, 3613.
- Spanos, N.; Arsenos, A.; Theofilou, P.A.; Tzouveli, P.; Voulodimos, A.; Kollias, S. Complex style image transformations for domain generalization in medical images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 5036–5045.
- Erdaş, Ç.B. Automated fracture detection in the ulna and radius using deep learning on upper extremity radiographs. Jt. Dis. Relat. Surg. 2023, 34, 598.
- Yoon, J.S.; Oh, K.; Shin, Y.; Mazurowski, M.A.; Suk, H.I. Domain generalization for medical image analysis: A review. Proc. IEEE 2024, 112, 1583–1609.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 3349–3364.
- Van der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T. scikit-image: Image processing in Python. PeerJ 2014, 2, e453.
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28.
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616.
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Zhang, Z.; Lin, Z.; Wu, Z.; Liu, J. YOLOv10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458.
- Jocher, G.; Qiu, J.; Chaurasia, A. Ultralytics YOLO. 2023. Available online: https://www.ultralytics.com/events/yolovision/2023 (accessed on 15 January 2025).
- Robbins, H.; Monro, S. A stochastic approximation method. Ann. Math. Stat. 1951, 22, 400–407.
| Model | Dice Score | HD95 |
|---|---|---|
| FCN8 (ResNet50) | 0.8793 (±0.0088) | 162.86 (±18.86) |
| U-Net | 0.9200 (±0.0101) | 156.21 (±16.87) |
| HRNet w32 | 0.9566 (±0.0029) | 31.43 (±11.05) |
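Both columns of the table above can be computed from binary masks. The sketch below shows one standard formulation (the `dice` and `hd95` helper names are ours, and HD95 is measured here in pixels; the paper's distance unit is not restated):

```python
# Dice score and 95th-percentile Hausdorff distance (HD95) between a
# predicted and a ground-truth binary mask, using Euclidean distance
# transforms of the two mask surfaces.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)

def hd95(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface = foreground pixels removed by one erosion step.
    sp = pred & ~binary_erosion(pred)
    sg = gt & ~binary_erosion(gt)
    # Distance of every pixel to each surface, then the symmetric set of
    # surface-to-surface distances; HD95 is its 95th percentile.
    d_to_sg = distance_transform_edt(~sg)
    d_to_sp = distance_transform_edt(~sp)
    dists = np.concatenate([d_to_sg[sp], d_to_sp[sg]])
    return float(np.percentile(dists, 95))
```

Taking the 95th percentile rather than the maximum makes the metric robust to a handful of stray boundary pixels, which is why HD95 is preferred over the plain Hausdorff distance for segmentation evaluation.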
| Model | Backbone or Version | Data Type | mAP@50 | mAP@75 | mAP@50–95 |
|---|---|---|---|---|---|
| Faster R-CNN | ResNet50 | S | 0.7089 (±0.1213) | 0.3576 (±0.1261) | 0.3277 (±0.0828) |
| Faster R-CNN | ResNet50 | A | 0.744 (±0.0613) | 0.4056 (±0.1918) | 0.4028 (±0.0998) |
| Faster R-CNN | ResNet101 | S | 0.7545 (±0.0857) | 0.4021 (±0.1108) | 0.4104 (±0.0696) |
| Faster R-CNN | ResNet101 | A | 0.772 (±0.0923) | 0.3926 (±0.0794) | 0.4092 (±0.0538) |
| Faster R-CNN | ResNet50 FPN | S | 0.7893 (±0.0604) | 0.3585 (±0.0914) | 0.4264 (±0.0577) |
| Faster R-CNN | ResNet50 FPN | A | 0.8069 (±0.0979) | 0.559 (±0.1034) | 0.5061 (±0.0612) |
| Faster R-CNN | ResNet101 FPN | S | 0.8361 (±0.1302) | 0.4807 (±0.1393) | 0.4645 (±0.0935) |
| Faster R-CNN | ResNet101 FPN | A | 0.8388 (±0.1115) | 0.5054 (±0.1339) | 0.4867 (±0.0792) |
| Faster R-CNN | ResNeXt-101-64x4d | S | 0.8015 (±0.0716) | 0.4017 (±0.1644) | 0.3949 (±0.1033) |
| Faster R-CNN | ResNeXt-101-64x4d | A | 0.8147 (±0.0619) | 0.4229 (±0.1309) | 0.4075 (±0.0715) |
| Faster R-CNN | ResNeXt-101-32x8d | S | 0.7471 (±0.0545) | 0.3901 (±0.0595) | 0.3999 (±0.059) |
| Faster R-CNN | ResNeXt-101-32x8d | A | 0.7675 (±0.1154) | 0.3484 (±0.1155) | 0.3869 (±0.067) |
| Faster R-CNN | ResNeXt-101-64x4d FPN | S | 0.8508 (±0.0451) | 0.5177 (±0.1067) | 0.4825 (±0.0577) |
| Faster R-CNN | ResNeXt-101-64x4d FPN | A | 0.8524 (±0.0933) | 0.6498 (±0.15) | 0.5438 (±0.0511) |
| Faster R-CNN | ResNeXt-101-32x8d FPN | S | 0.8078 (±0.0375) | 0.335 (±0.1452) | 0.4227 (±0.0782) |
| Faster R-CNN | ResNeXt-101-32x8d FPN | A | 0.8143 (±0.0849) | 0.5314 (±0.1442) | 0.4824 (±0.0659) |
| YOLO | 8n | S | 0.9207 (±0.0443) | 0.6837 (±0.0427) | 0.5927 (±0.0173) |
| YOLO | 8n | A | 0.9392 (±0.0344) | 0.6902 (±0.1358) | 0.5881 (±0.0538) |
| YOLO | 8s | S | 0.9219 (±0.0404) | 0.6323 (±0.1303) | 0.5746 (±0.0378) |
| YOLO | 8s | A | 0.9521 (±0.035) | 0.7848 (±0.0502) | 0.645 (±0.0296) |
| YOLO | 9t | S | 0.9201 (±0.031) | 0.7102 (±0.1565) | 0.5985 (±0.0558) |
| YOLO | 9t | A | 0.9442 (±0.0428) | 0.7421 (±0.0598) | 0.6148 (±0.0225) |
| YOLO | 9s | S | 0.9378 (±0.0412) | 0.693 (±0.1176) | 0.5955 (±0.0452) |
| YOLO | 9s | A | 0.9569 (±0.0292) | 0.701 (±0.0755) | 0.6246 (±0.0224) |
| YOLO | 10n | S | 0.9105 (±0.0463) | 0.742 (±0.1141) | 0.6198 (±0.0463) |
| YOLO | 10n | A | 0.9246 (±0.0449) | 0.7894 (±0.0428) | 0.6472 (±0.0513) |
| YOLO | 10s | S | 0.9076 (±0.0235) | 0.5941 (±0.0744) | 0.5618 (±0.0172) |
| YOLO | 10s | A | 0.9352 (±0.0306) | 0.7525 (±0.0693) | 0.627 (±0.0446) |
| YOLO | 11n | S | 0.9207 (±0.0377) | 0.7216 (±0.1192) | 0.6181 (±0.0524) |
| YOLO | 11n | A | 0.935 (±0.0661) | 0.7727 (±0.098) | 0.6389 (±0.0708) |
| YOLO | 11s | S | 0.9096 (±0.0582) | 0.7072 (±0.1499) | 0.6171 (±0.0765) |
| YOLO | 11s | A | 0.9499 (±0.0164) | 0.7775 (±0.1415) | 0.6377 (±0.0507) |
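The mAP columns above are averages of per-class AP, where each AP summarizes a precision-recall curve at a given IoU threshold (and mAP@50–95 averages AP over IoU thresholds 0.50 to 0.95 in steps of 0.05). A sketch of single-class AP with all-point interpolation follows; `average_precision` is a hypothetical helper, and COCO-style evaluators use a 101-point interpolation instead.

```python
# AP from scored detections: sort by confidence, accumulate TP/FP counts,
# take the monotone precision envelope, and integrate it over recall.
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """scores: confidence per detection; is_tp: 1 if the detection matched a
    ground-truth box at the chosen IoU threshold; n_gt: number of GT boxes."""
    order = np.argsort(scores)[::-1]
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = tp / max(n_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # Monotone (non-increasing) precision envelope, right to left.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Integrate precision over recall increments.
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return float(ap)
```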
| Model | Backbone or Version | Data Type | Precision | Recall | F1 Score |
|---|---|---|---|---|---|
| Faster R-CNN | ResNet50 | S | 0.809 (±0.0909) | 0.654 (±0.1246) | 0.7144 (±0.0812) |
| Faster R-CNN | ResNet50 | A | 0.8939 (±0.0719) | 0.682 (±0.1096) | 0.7685 (±0.0751) |
| Faster R-CNN | ResNet101 | S | 0.8757 (±0.0526) | 0.706 (±0.0847) | 0.7786 (±0.0575) |
| Faster R-CNN | ResNet101 | A | 0.8521 (±0.112) | 0.732 (±0.1055) | 0.7799 (±0.0715) |
| Faster R-CNN | ResNet50 FPN | S | 0.8397 (±0.1061) | 0.756 (±0.0814) | 0.7914 (±0.0693) |
| Faster R-CNN | ResNet50 FPN | A | 0.94 (±0.0679) | 0.714 (±0.1207) | 0.8083 (±0.096) |
| Faster R-CNN | ResNet101 FPN | S | 0.8539 (±0.1715) | 0.798 (±0.132) | 0.8163 (±0.1273) |
| Faster R-CNN | ResNet101 FPN | A | 0.9508 (±0.0582) | 0.738 (±0.145) | 0.8206 (±0.0885) |
| Faster R-CNN | ResNeXt-101-64x4d | S | 0.8767 (±0.0679) | 0.744 (±0.0945) | 0.8036 (±0.0783) |
| Faster R-CNN | ResNeXt-101-64x4d | A | 0.9623 (±0.0454) | 0.704 (±0.0757) | 0.8108 (±0.0533) |
| Faster R-CNN | ResNeXt-101-32x8d | S | 0.7637 (±0.4306) | 0.596 (±0.3346) | 0.6678 (±0.3784) |
| Faster R-CNN | ResNeXt-101-32x8d | A | 0.9216 (±0.0836) | 0.79 (±0.0752) | 0.8471 (±0.053) |
| Faster R-CNN | ResNeXt-101-64x4d FPN | S | 0.9303 (±0.0507) | 0.796 (±0.0669) | 0.8548 (±0.0265) |
| Faster R-CNN | ResNeXt-101-64x4d FPN | A | 0.9375 (±0.0512) | 0.802 (±0.1126) | 0.8597 (±0.0649) |
| Faster R-CNN | ResNeXt-101-32x8d FPN | S | 0.8745 (±0.0556) | 0.768 (±0.0587) | 0.7993 (±0.0301) |
| Faster R-CNN | ResNeXt-101-32x8d FPN | A | 0.8751 (±0.0845) | 0.74 (±0.1143) | 0.8133 (±0.0802) |
| YOLO | 8n | S | 0.946 (±0.0755) | 0.8738 (±0.0803) | 0.9064 (±0.0633) |
| YOLO | 8n | A | 0.954 (±0.0565) | 0.8959 (±0.0427) | 0.9226 (±0.0284) |
| YOLO | 8s | S | 0.9302 (±0.0378) | 0.8405 (±0.0934) | 0.8797 (±0.0387) |
| YOLO | 8s | A | 0.9433 (±0.042) | 0.9102 (±0.0408) | 0.9259 (±0.0334) |
| YOLO | 9t | S | 0.9474 (±0.0566) | 0.8843 (±0.0324) | 0.9134 (±0.024) |
| YOLO | 9t | A | 0.9475 (±0.0599) | 0.8941 (±0.0778) | 0.9194 (±0.065) |
| YOLO | 9s | S | 0.9144 (±0.0472) | 0.9135 (±0.0569) | 0.9133 (±0.0443) |
| YOLO | 9s | A | 0.9668 (±0.0324) | 0.9094 (±0.0505) | 0.9369 (±0.0388) |
| YOLO | 10n | S | 0.8952 (±0.0541) | 0.8732 (±0.0752) | 0.8827 (±0.052) |
| YOLO | 10n | A | 0.913 (±0.0641) | 0.8843 (±0.066) | 0.8977 (±0.0592) |
| YOLO | 10s | S | 0.9045 (±0.052) | 0.8511 (±0.0386) | 0.8762 (±0.0343) |
| YOLO | 10s | A | 0.9237 (±0.0406) | 0.8849 (±0.062) | 0.9035 (±0.0485) |
| YOLO | 11n | S | 0.9365 (±0.0388) | 0.8862 (±0.0676) | 0.9097 (±0.0448) |
| YOLO | 11n | A | 0.9178 (±0.0831) | 0.8856 (±0.0629) | 0.9001 (±0.0617) |
| YOLO | 11s | S | 0.9026 (±0.0761) | 0.8532 (±0.0897) | 0.8756 (±0.0745) |
| YOLO | 11s | A | 0.9279 (±0.0347) | 0.8986 (±0.0422) | 0.9121 (±0.0233) |
| Model | Backbone or Version | mAP@50 | mAP@75 | mAP@50–95 |
|---|---|---|---|---|
| Faster R-CNN | ResNet50 | 0.4576 (±0.1223) | 0.2475 (±0.0428) | 0.1609 (±0.0252) |
| Faster R-CNN | ResNet101 | 0.5155 (±0.1222) | 0.2486 (±0.0347) | 0.1527 (±0.0504) |
| Faster R-CNN | ResNet50 FPN | 0.6507 (±0.2082) | 0.2694 (±0.0272) | 0.2479 (±0.078) |
| Faster R-CNN | ResNet101 FPN | 0.7209 (±0.2081) | 0.301 (±0.0321) | 0.2772 (±0.0612) |
| Faster R-CNN | ResNeXt-101-64x4d | 0.6776 (±0.1077) | 0.2937 (±0.05) | 0.2629 (±0.0514) |
| Faster R-CNN | ResNeXt-101-32x8d | 0.7259 (±0.0939) | 0.2973 (±0.0787) | 0.2594 (±0.0681) |
| Faster R-CNN | ResNeXt-101-64x4d FPN | 0.7776 (±0.1077) | 0.2937 (±0.05) | 0.2629 (±0.0514) |
| Faster R-CNN | ResNeXt-101-32x8d FPN | 0.7959 (±0.0939) | 0.2973 (±0.0787) | 0.2594 (±0.0681) |
| YOLO | 8n | 0.8594 (±0.0696) | 0.3155 (±0.0978) | 0.4075 (±0.0414) |
| YOLO | 8s | 0.8892 (±0.0445) | 0.215 (±0.0317) | 0.3768 (±0.0183) |
| YOLO | 9t | 0.8447 (±0.0748) | 0.2574 (±0.0436) | 0.396 (±0.0354) |
| YOLO | 9s | 0.881 (±0.0528) | 0.2425 (±0.0825) | 0.3868 (±0.0543) |
| YOLO | 10n | 0.8687 (±0.0433) | 0.3602 (±0.0971) | 0.451 (±0.0238) |
| YOLO | 10s | 0.8924 (±0.0531) | 0.44 (±0.0458) | 0.4616 (±0.0226) |
| YOLO | 11n | 0.8862 (±0.0247) | 0.4602 (±0.1015) | 0.46 (±0.0371) |
| YOLO | 11s | 0.884 (±0.0148) | 0.2861 (±0.0458) | 0.4154 (±0.0148) |
| Model | Backbone or Version | Precision | Recall | F1 Score |
|---|---|---|---|---|
| Faster R-CNN | ResNet50 | 0.4678 (±0.1867) | 0.444 (±0.1876) | 0.4503 (±0.1746) |
| Faster R-CNN | ResNet101 | 0.4605 (±0.052) | 0.432 (±0.1475) | 0.4289 (±0.0668) |
| Faster R-CNN | ResNet50 FPN | 0.6667 (±0.1012) | 0.601 (±0.0976) | 0.6423 (±0.1444) |
| Faster R-CNN | ResNet101 FPN | 0.7166 (±0.1422) | 0.766 (±0.0422) | 0.7444 (±0.0511) |
| Faster R-CNN | ResNeXt-101-64x4d | 0.8015 (±0.1225) | 0.678 (±0.0977) | 0.721 (±0.0686) |
| Faster R-CNN | ResNeXt-101-32x8d | 0.755 (±0.1168) | 0.73 (±0.1237) | 0.7305 (±0.0743) |
| Faster R-CNN | ResNeXt-101-64x4d FPN | 0.8051 (±0.1315) | 0.698 (±0.0986) | 0.741 (±0.0786) |
| Faster R-CNN | ResNeXt-101-32x8d FPN | 0.755 (±0.1168) | 0.73 (±0.1237) | 0.7305 (±0.0743) |
| YOLO | 8n | 0.8188 (±0.0241) | 0.8352 (±0.0769) | 0.8258 (±0.0459) |
| YOLO | 8s | 0.9102 (±0.0355) | 0.8036 (±0.0975) | 0.8495 (±0.0403) |
| YOLO | 9t | 0.7949 (±0.0899) | 0.8369 (±0.1034) | 0.8099 (±0.0688) |
| YOLO | 9s | 0.8698 (±0.1091) | 0.8074 (±0.1066) | 0.8325 (±0.0821) |
| YOLO | 10n | 0.7986 (±0.1001) | 0.8506 (±0.0462) | 0.8212 (±0.0631) |
| YOLO | 10s | 0.8733 (±0.096) | 0.8049 (±0.03) | 0.8354 (±0.0453) |
| YOLO | 11n | 0.8832 (±0.0516) | 0.8257 (±0.0234) | 0.8531 (±0.0329) |
| YOLO | 11s | 0.9337 (±0.0393) | 0.7782 (±0.0228) | 0.8488 (±0.0284) |
| Model | Backbone or Version | Parameters | Data Type | Training Time | GPU Memory |
|---|---|---|---|---|---|
| Faster R-CNN | ResNet50 | 78.99M | S | 48.3 | 15.5 |
| Faster R-CNN | ResNet50 | 78.99M | A | 33.1 | 8.6 |
| Faster R-CNN | ResNet101 | 97.98M | S | 61.1 | 19 |
| Faster R-CNN | ResNet101 | 97.98M | A | 34.3 | 9.6 |
| Faster R-CNN | ResNet50 FPN | 40.9M | S | 46.6 | 11.4 |
| Faster R-CNN | ResNet50 FPN | 40.9M | A | 20.9 | 3.9 |
| Faster R-CNN | ResNet101 FPN | 59.89M | S | 56.7 | 14.9 |
| Faster R-CNN | ResNet101 FPN | 59.89M | A | 27.1 | 5.3 |
| Faster R-CNN | ResNeXt-101-64x4d | 136.89M | S | 97.9 | 27 |
| Faster R-CNN | ResNeXt-101-64x4d | 136.89M | A | 44.8 | 12.1 |
| Faster R-CNN | ResNeXt-101-32x8d | 142.22M | S | 130.6 | 28.6 |
| Faster R-CNN | ResNeXt-101-32x8d | 142.22M | A | 51.6 | 13.7 |
| Faster R-CNN | ResNeXt-101-64x4d FPN | 98.79M | S | 93 | 24.8 |
| Faster R-CNN | ResNeXt-101-64x4d FPN | 98.79M | A | 34.8 | 8 |
| Faster R-CNN | ResNeXt-101-32x8d FPN | 104.13M | S | 126.2 | 25.2 |
| Faster R-CNN | ResNeXt-101-32x8d FPN | 104.13M | A | 42.6 | 8.1 |
| YOLO | 8n | 3M | S | 10.8 | 1.9 |
| YOLO | 8n | 3M | A | 9.9 | 1.2 |
| YOLO | 8s | 11.2M | S | 10.9 | 2.6 |
| YOLO | 8s | 11.2M | A | 10.8 | 1.9 |
| YOLO | 9t | 2M | S | 20 | 2.3 |
| YOLO | 9t | 2M | A | 20.7 | 1.4 |
| YOLO | 9s | 7.1M | S | 21.6 | 3.8 |
| YOLO | 9s | 7.1M | A | 21.6 | 2.3 |
| YOLO | 10n | 2.3M | S | 36 | 2.5 |
| YOLO | 10n | 2.3M | A | 75.6 | 1.6 |
| YOLO | 10s | 7.2M | S | 39.6 | 4 |
| YOLO | 10s | 7.2M | A | 86.4 | 2.4 |
| YOLO | 11n | 2.6M | S | 28.8 | 2.3 |
| YOLO | 11n | 2.6M | A | 68 | 1.3 |
| YOLO | 11s | 9.4M | S | 32.4 | 3.5 |
| YOLO | 11s | 9.4M | A | 72 | 2.1 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kwon, D.; Lee, J.-H.; Kim, J.-W.; Kim, J.-W.; Yoon, S.-j.; Jeong, S.; Oh, C.-W. Anatomical Alignment of Femoral Radiographs Enables Robust AI-Powered Detection of Incomplete Atypical Femoral Fractures. Mathematics 2025, 13, 3720. https://doi.org/10.3390/math13223720