Determination of Anteroposterior and Posteroanterior Imaging Positions on Chest X-Ray Images Using Deep Learning
Abstract
1. Introduction
2. Method
2.1. Dataset
2.2. Fastai
2.3. Classification Studies
3. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Feature | Value |
---|---|
Total Number of Images | 112,120 |
Number of PA Images (%) | 67,310 (60.0%) |
Number of AP Images (%) | 44,810 (40.0%) |
Number of Male Patient Images | 63,340 |
Number of Female Patient Images | 48,780 |
Number of Unique Patients | 30,805 |
Average Number of Images per Patient | 3.64 |
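The counts in the table above are internally consistent; a short sanity check in plain Python (no external dependencies, values copied from the table) reproduces the percentages and the per-patient average:

```python
# Sanity-check the dataset statistics reported in the table above.
total = 112_120
pa, ap = 67_310, 44_810            # posteroanterior / anteroposterior views
male, female = 63_340, 48_780
patients = 30_805

pa_pct = round(100 * pa / total, 1)        # share of PA images
ap_pct = round(100 * ap / total, 1)        # share of AP images
per_patient = round(total / patients, 2)   # average images per patient

print(pa_pct, ap_pct, per_patient)  # → 60.0 40.0 3.64
```

The view counts and the sex counts each sum exactly to the 112,120-image total, so the two breakdowns partition the same dataset.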
Metric | EfficientNetV2-S | ConvNeXt-Tiny | DenseNet-121 | ResNet-18 | ResNet-34 | ResNet-50 |
---|---|---|---|---|---|---|
Train Loss | 0.016856 | 0.033227 | 0.024416 | 0.000312 | 0.000239 | 0.013660 |
Valid Loss | 0.016391 | 0.021557 | 0.017844 | 0.030812 | 0.023618 | 0.023737 |
Error Rate | 0.003985 | 0.005157 | 0.004395 | 0.004044 | 0.003458 | 0.004747 |
Accuracy | 0.996015 | 0.994843 | 0.995605 | 0.995956 | 0.996542 | 0.995253 |
F1 Score | 0.996668 | 0.995690 | 0.996325 | 0.996620 | 0.997109 | 0.996035 |
Cohen’s Kappa | 0.991711 | 0.989271 | 0.990857 | 0.991588 | 0.992808 | 0.990121 |
Recall | 0.997646 | 0.996959 | 0.997254 | 0.997842 | 0.998038 | 0.997940 |
Precision | 0.995693 | 0.994423 | 0.995398 | 0.995401 | 0.996182 | 0.994137 |
Brier Score Loss | 0.003985 | 0.005157 | 0.004395 | 0.004044 | 0.003458 | 0.004747 |
Balanced Accuracy | 0.995620 | 0.994330 | 0.995205 | 0.995499 | 0.996180 | 0.994602 |
ROC-AUC | 0.999510 | 0.998567 | 0.999384 | 0.999151 | 0.999552 | 0.999357 |
Metric | EfficientNetV2-S | ConvNeXt-Tiny | DenseNet-121 | ResNet-18 | ResNet-34 | ResNet-50 |
---|---|---|---|---|---|---|
Train Loss | 0.024058 | 0.012695 | 0.009012 | 0.027934 | 0.022322 | 0.025057 |
Valid Loss | 0.017230 | 0.020557 | 0.017776 | 0.021910 | 0.014919 | 0.022696 |
Error Rate | 0.004220 | 0.004806 | 0.004161 | 0.005333 | 0.003751 | 0.005392 |
Accuracy | 0.995780 | 0.995194 | 0.995839 | 0.994667 | 0.996249 | 0.994608 |
F1 Score | 0.996473 | 0.995982 | 0.996520 | 0.995542 | 0.996863 | 0.995496 |
Cohen’s Kappa | 0.991223 | 0.990004 | 0.991347 | 0.988906 | 0.992199 | 0.988781 |
Recall | 0.997548 | 0.996959 | 0.997057 | 0.996665 | 0.997548 | 0.997254 |
Precision | 0.995400 | 0.995007 | 0.995983 | 0.994422 | 0.996180 | 0.993745 |
Brier Score Loss | 0.004220 | 0.004806 | 0.004161 | 0.005333 | 0.003751 | 0.005392 |
Balanced Accuracy | 0.995352 | 0.994767 | 0.995544 | 0.994183 | 0.995935 | 0.993967 |
ROC-AUC | 0.999479 | 0.998876 | 0.999429 | 0.999303 | 0.999726 | 0.998885 |
Model | Aug. | TP | FP | FN | TN | Accuracy | Precision | Recall | F1 Score | ROC-AUC |
---|---|---|---|---|---|---|---|---|---|---|
EfficientNetV2-S | No | 6569 | 35 | 38 | 9849 | 0.995573 | 0.995574 | 0.995573 | 0.995574 | 0.999037 |
EfficientNetV2-S | Yes | 6569 | 35 | 41 | 9846 | 0.995391 | 0.995392 | 0.995391 | 0.995392 | 0.998984 |
ConvNeXt-Tiny | No | 6564 | 40 | 38 | 9849 | 0.995270 | 0.995270 | 0.995270 | 0.995270 | 0.998387 |
ConvNeXt-Tiny | Yes | 6567 | 37 | 41 | 9846 | 0.995270 | 0.995271 | 0.995270 | 0.995270 | 0.998546 |
DenseNet-121 | No | 6568 | 36 | 35 | 9852 | 0.995695 | 0.995695 | 0.995695 | 0.995695 | 0.999136 |
DenseNet-121 | Yes | 6570 | 34 | 42 | 9845 | 0.995391 | 0.995393 | 0.995391 | 0.995392 | 0.999081 |
ResNet-18 | No | 6574 | 30 | 33 | 9854 | 0.996180 | 0.995457 | 0.995005 | 0.995231 | 0.995985 |
ResNet-18 | Yes | 6564 | 40 | 32 | 9855 | 0.995634 | 0.993943 | 0.995149 | 0.994545 | 0.995553 |
ResNet-34 | No | 6575 | 29 | 32 | 9855 | 0.996301 | 0.995609 | 0.995157 | 0.995383 | 0.996111 |
ResNet-34 | Yes | 6577 | 27 | 38 | 9849 | 0.996058 | 0.995912 | 0.994255 | 0.995083 | 0.995761 |
ResNet-50 | No | 6573 | 31 | 33 | 9854 | 0.996119 | 0.995306 | 0.995005 | 0.995155 | 0.995934 |
ResNet-50 | Yes | 6562 | 42 | 29 | 9858 | 0.995695 | 0.993640 | 0.995600 | 0.994619 | 0.995679 |
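The per-model rows can be reproduced directly from the four confusion-matrix counts with the standard positive-class formulas; below is a minimal sketch in plain Python, checked against the ResNet-18 row without augmentation (the derived values are illustrative, not taken from the paper's code):

```python
# Positive-class metrics from the four confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)      # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# ResNet-18, no augmentation (counts from the table above)
acc, prec, rec, f1 = metrics(tp=6574, fp=30, fn=33, tn=9854)
print(round(acc, 6), round(prec, 6), round(rec, 6), round(f1, 6))
# → 0.99618 0.995457 0.995005 0.995231
```

These match the tabulated accuracy (0.996180), precision, recall, and F1 score for that row, confirming that the ResNet rows report positive-class (rather than averaged) precision and recall.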
Metric | Values |
---|---|
Train Loss | 0.009089 |
Valid Loss | 0.004667 |
Error Rate | 0.001176 |
Accuracy | 0.998824 |
F1 Score | 0.999018 |
Cohen’s Kappa | 0.997554 |
Recall | 0.999607 |
Precision | 0.998430 |
Brier Score Loss | 0.001176 |
Balanced Accuracy | 0.998633 |
ROC-AUC | 0.999977 |
TP | FP | FN | TN | Accuracy | Precision | Recall | F1 Score | ROC-AUC |
---|---|---|---|---|---|---|---|---|
6575 | 29 | 29 | 9858 | 0.99649 | 0.99561 | 0.99561 | 0.99561 | 0.99935 |
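Metrics not listed in this final test row, such as balanced accuracy and Cohen's kappa, can likewise be derived from the same four counts. The sketch below computes them for illustration; the resulting numbers are derived here, not reported in the paper:

```python
# Balanced accuracy and Cohen's kappa from the final confusion matrix above.
tp, fp, fn, tn = 6575, 29, 29, 9858
n = tp + fp + fn + tn

sensitivity = tp / (tp + fn)          # recall on the positive class
specificity = tn / (tn + fp)          # recall on the negative class
balanced_acc = (sensitivity + specificity) / 2

p_observed = (tp + tn) / n            # plain accuracy
# Chance agreement: product of marginal frequencies, summed over both classes
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)

print(round(balanced_acc, 6), round(kappa, 4))  # → 0.996338 0.9927
```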
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gökçimen, F.; İnner, A.B.; Çakır, Ö. Determination of Anteroposterior and Posteroanterior Imaging Positions on Chest X-Ray Images Using Deep Learning. Eng. Proc. 2025, 104, 58. https://doi.org/10.3390/engproc2025104058