Article

Deep Learning for Cervical Spine Radiography: Automated Measurement of Intervertebral and Neural Foraminal Distances

1
Program on Semiconductor Manufacturing Technology, Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan City 701401, Taiwan
2
Department of Neurosurgery, Linkou Chang Gung Memorial Hospital, Taoyuan City 333423, Taiwan
3
Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City 243303, Taiwan
4
Department of Electronic Engineering, Feng Chia University, Taichung City 40724, Taiwan
5
Department of Medical Education, Chang Gung Memorial Hospital Linkou, Taoyuan City 333423, Taiwan
6
Department of Information Management, Chung Yuan Christian University, Taoyuan City 320317, Taiwan
7
Department of Electronic Engineering, National Cheng Kung University, Tainan City 701401, Taiwan
8
Ateneo Laboratory for Intelligent Visual Environments, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City 1108, Philippines
*
Authors to whom correspondence should be addressed.
Diagnostics 2025, 15(17), 2162; https://doi.org/10.3390/diagnostics15172162
Submission received: 22 July 2025 / Revised: 20 August 2025 / Accepted: 25 August 2025 / Published: 26 August 2025
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Background/Objectives: The precise localization of cervical vertebrae in X-ray imaging is essential for effective diagnosis and treatment planning, particularly as the prevalence of cervical degenerative conditions increases with an aging population. Vertebrae from C2 to C7 are commonly affected by disorders such as ossification of the posterior longitudinal ligament (OPLL) and nerve compression caused by posterior osteophytes, necessitating thorough evaluation. However, manual annotation remains a major component of traditional clinical procedures, making it challenging to manage increasing patient volumes and large-scale medical imaging data. Methods: To address this issue, this study presents an automated approach for localizing cervical vertebrae and measuring neural foraminal distances. The proposed technique analyzes the neural foramen distance and intervertebral space using image enhancement to determine the degree of nerve compression, and employs YOLOv8 to detect and segment the cervical vertebrae. By integrating automated cervical spine analysis with advanced imaging technologies, the system enables rapid detection of abnormal intervertebral disc gaps, facilitating early identification of degenerative changes. Results: The system achieved a spine localization accuracy of 99.5%, an 11.7% improvement over existing approaches. Notably, it outperformed previous methods by 66.67% in recognizing the C7 vertebra, achieving 100% accuracy. Conclusions: The system also streamlined the diagnostic workflow, processing each X-ray image in just 17.9 milliseconds and markedly improving overall diagnostic efficiency.

1. Introduction

As the global population continues to grow, the increasing demand for medical services has become one of the most pressing challenges in modern healthcare. To alleviate the workload of healthcare professionals and enhance clinical efficiency, interdisciplinary collaboration between medicine and technology has become increasingly essential. Artificial intelligence (AI) has significantly advanced the automation of medical diagnostics and assisted in early-stage evaluations, thereby improving the overall efficiency of medical consultations [1]. In particular, deep learning techniques [2] play a vital role in computer-aided diagnosis (CAD), especially in the field of neurosurgery. AI models such as YOLO and Faster R-CNN have been widely adopted for the detection and localization of cervical and spinal diseases [3,4,5,6], enabling accurate vertebrae localization and significantly advancing diagnostic precision in clinical practice.
Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and X-ray imaging are commonly employed diagnostic tools for cervical spine disorders [7,8]. Among them, X-ray imaging offers several advantages, including fast acquisition, low cost, and minimal radiation exposure [9,10]. Despite its simplicity, X-ray imaging provides essential structural information on vertebral bodies, intervertebral discs, and neural foramina, supporting the diagnosis of cervical conditions including fractures, disc herniation, the ossification of the posterior longitudinal ligament (OPLL), and foraminal stenosis [11,12,13]. Owing to these characteristics, X-ray imaging remains a critical tool for initial assessment and disease progression monitoring, enabling timely diagnosis and treatment planning that can ultimately improve clinical efficiency and patient outcomes.
In cervical spine imaging analysis, the intervertebral distance [14] serves as a crucial quantitative marker for evaluating intervertebral disc degeneration, spinal stability, and spinal stenosis. Accurate measurement of this distance plays a key role in diagnosing cervical spine disorders and assessing the severity of structural abnormalities. A reduction in intervertebral spacing is commonly associated with degenerative disc disease or disc herniation, which can alter the biomechanical load distribution along the spine, potentially resulting in chronic pain, limited mobility, or neurological symptoms. Additionally, anterior–posterior vertebral distance measurements allow clinicians to assess the impact of structural changes such as bone spurs or OPLL on surrounding neural elements. A significantly narrowed distance may indicate spinal cord compression, which can lead to neurological deficits, including limb weakness, sensory disturbances, or chronic discomfort, further impairing a patient’s quality of life.
While X-ray imaging remains essential for cervical spine assessment, existing X-ray-based localization methods predominantly focus on the C3 to C7 vertebrae, as shown in Figure 1, and still encounter significant challenges, particularly in accurately identifying the C7 and C2 vertebrae. Precise localization of C7 is often hindered by low image contrast and potential shoulder overlap, which limits the effectiveness of traditional techniques. Furthermore, identifying the C2 vertebra (axis) poses an even greater difficulty due to its close anatomical connection with C1 (atlas) and its unique morphological characteristics, making accurate detection nearly impossible using conventional methods.
To address these limitations, this study proposes an automated system for measuring intervertebral distances and vertebra-to-neural foramen distances in cervical spine radiographs (CSRs). The system operates in two stages. First, it localizes cervical vertebrae and neural foramina by using YOLOv8 [15,16] to automatically identify regions of interest and by training two dedicated models for vertebra and foramen localization. Second, once the vertebrae and neural foramina have been localized, it computes intervertebral and vertebra-to-neural foramen distances. Beyond achieving high accuracy, the proposed approach markedly streamlines the localization and measurement workflow, reducing processing time and the need for manual annotation. In contrast to traditional annotation-heavy methods, the system enables automated, efficient CSR analysis and distance calculation, thereby improving clinicians’ annotation throughput and diagnostic efficiency and allowing greater focus on treatment planning. Overall, this study contributes to the advancement of automated cervical spine analysis, offering a reliable and practical tool to support clinical decision making.

2. Method

The flowchart of the overall system proposed in this study is shown in Figure 2. The cervical spine region was first located and cropped to ensure accurate analysis. The YOLOv8 model was then employed to simultaneously detect and label vertebrae from C2 to C7 and neural foramina, while also measuring the distance between each vertebra and its corresponding neural foramen. These measurements provided critical information for evaluating degenerative changes and potential nerve compression. The identified vertebrae served as references for further analysis, in which intervertebral distances were precisely calculated using image enhancement and coordinate transformation.

2.1. Image Preprocessing

In this study, efficient image preprocessing was critical to enhancing the overall system accuracy [17,18]. The primary goal of this stage was to accurately extract the cervical spine region from CSR images. To address the issue of grayscale non-uniformity commonly observed in X-ray imaging, various preprocessing methods were applied to improve the contrast between the vertebrae and the background. This step included image standardization, noise reduction, contrast enhancement, and image binarization. Together, these processes improved the visibility of cervical spine contours and supported more reliable downstream analysis.
To ensure data consistency and compatibility with the deep learning model, all X-ray images were resized to 512 × 512 pixels. In addition, this study applied a median filter for noise reduction, which is particularly effective against the salt-and-pepper noise commonly found in medical imaging [19,20,21]. The median filter is a widely used image processing technique that smooths the image while preserving important edge details, making it well suited for denoising cervical spine images without compromising anatomical structures. Unlike linear filters, it replaces each pixel with the median value of its neighboring pixels, thereby avoiding excessive blurring and maintaining the clarity of vertebral boundaries. This process effectively preserved the fine details of cervical joints and their edges, which are critical for accurate localization. The mathematical formulation of the median filter is presented in Equation (1), and its denoising effect is shown in Figure 3b.
$g(x, y) = \operatorname{median}\{\, f(i, j) \mid (i, j) \in S_{x, y} \,\}$ (1)
To further enhance the visibility of cervical vertebrae structures, this study integrated two contrast enhancement methods: Histogram Equalization (HE) and Contrast-Limited Adaptive Histogram Equalization (CLAHE) [22,23]. HE improves global contrast by redistributing grayscale values across the entire image histogram, while CLAHE enhances local contrast adaptively within small regions and prevents over-amplification of noise by limiting the enhancement. The combination of these methods effectively improved both global and local contrast, resulting in clearer delineation of vertebral boundaries and anatomical features. The enhancement results are shown in Figure 3c.
Following contrast enhancement, adaptive thresholding was applied to detect variations in pixel intensity, which supported cervical spine localization and ensured stable separation between the cervical spine and the background under varying exposure conditions. To further refine this separation, Otsu’s thresholding method [24] was employed to isolate the cervical spine from the background. This method, which is particularly effective for images with bimodal intensity distributions, selects the optimal threshold by minimizing the within-class variance (equivalently, maximizing the inter-class variance), as described in Equation (2). This step significantly improved segmentation accuracy and the visibility of cervical vertebrae, as shown in Figure 3d.
$\sigma_{\omega}^{2} = \omega_0 \sigma_0^{2} + \omega_1 \sigma_1^{2}$ (2)
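For reproducibility, the preprocessing chain described above can be sketched with OpenCV as follows; the 5 × 5 median window, the CLAHE clip limit and tile size, and the use of Otsu thresholding on the enhanced image are assumptions, since these parameters are not reported in the paper.

```python
import cv2

def preprocess_csr(path):
    """Resize, denoise, contrast-enhance, and binarize a cervical spine radiograph."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (512, 512))                  # standardize the input size

    denoised = cv2.medianBlur(img, 5)                  # median filter, Eq. (1); 5x5 window assumed

    equalized = cv2.equalizeHist(denoised)             # global histogram equalization (HE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(equalized)                  # local contrast enhancement (CLAHE)

    # Otsu's method picks the threshold that minimizes the within-class variance, Eq. (2)
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```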

2.2. Cervical Spine Localization

Following the image preprocessing step, the algorithm processed the binarized images by scanning each row to identify the one with the fewest white pixels, typically corresponding to the area of lowest pixel density. This row served as a key reference for determining the lateral boundaries of the cervical spine. To locate the central X-coordinate, the study detected the first and last transition points from black to white along the identified row. It then expanded leftward and rightward until the pixel values returned to black, thereby defining the full width of the cervical spine. To ensure complete coverage and avoid loss of anatomical information, a padding of 50 pixels was added to both sides. A similar approach was applied along the Y-axis: vertical transitions were analyzed at the leftmost and rightmost X-boundaries to determine the superior and inferior edges of the spine. Additional padding was added to the upper and lower margins to retain essential contextual information. This cropping step eliminated irrelevant information, ensured consistent extraction across all images, and enhanced the reliability of subsequent vertebrae recognition. The results of the cervical spine localization are shown in Figure 4.
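A minimal NumPy sketch of this row-scan cropping idea is given below; the exact transition and expansion rules used in the study may differ, and the fixed 50-pixel padding follows the description above.

```python
import numpy as np

def crop_cervical_spine(binary, pad_x=50, pad_y=50):
    """Crop the cervical spine region from a binarized radiograph (row-scan sketch)."""
    h, w = binary.shape
    white = binary > 0

    # Among rows containing spine pixels, take the sparsest one as the reference row.
    row_counts = white.sum(axis=1)
    candidate_rows = np.flatnonzero(row_counts)
    ref_row = candidate_rows[np.argmin(row_counts[candidate_rows])]

    # First and last white pixels on the reference row give the initial lateral bounds.
    cols = np.flatnonzero(white[ref_row])
    left, right = int(cols[0]), int(cols[-1])

    # Expand outward until the neighboring columns contain no white pixels at all.
    col_has_white = white.any(axis=0)
    while left > 0 and col_has_white[left - 1]:
        left -= 1
    while right < w - 1 and col_has_white[right + 1]:
        right += 1

    # Repeat along the Y-axis within the lateral bounds to find the top and bottom edges.
    rows = np.flatnonzero(white[:, left:right + 1].any(axis=1))
    top, bottom = int(rows[0]), int(rows[-1])

    # Add padding so no anatomical context is lost, then crop.
    x0, x1 = max(0, left - pad_x), min(w, right + 1 + pad_x)
    y0, y1 = max(0, top - pad_y), min(h, bottom + 1 + pad_y)
    return binary[y0:y1, x0:x1], (x0, y0, x1, y1)
```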

2.3. Vertebra and Neural Foramen Localization by YOLOv8s

To ensure clinical applicability, this study was conducted in collaboration with two board-certified neurosurgeons, each with over three years of clinical experience. The study was approved by the Institutional Review Board (IRB) under approval number 202401261B0. The dataset comprised 200 cervical spine X-ray images collected from Chang Gung Memorial Hospital during the study period, encompassing vertebrae from C2 to C7. All images were clinical studies from adults aged eighteen years or older, and the ratio of males to females was approximately three to one.
At this stage, a vertebra localization model was developed to accurately extract the region of interest (ROI) for subsequent analysis [25,26,27]. This study adopted YOLO, a deep learning-based object detection framework, to enable accurate and efficient real-time localization of cervical vertebrae [28]. Traditional cervical spine identification methods typically involved multiple image processing steps, rendering them unsuitable for real-time applications. In contrast, YOLO performed both object localization and classification simultaneously within a single inference pass, making it particularly well-suited for remote healthcare settings and clinical environments where rapid diagnostic support was essential [29,30].
After comprehensive evaluation, YOLOv8s was selected as the vertebra localization model in this study. Table 1 summarizes the hardware and software platforms used for training the deep learning model. A total of 200 CSR images were used to train the YOLOv8s model. To ensure representative sampling and reduce selection bias, the dataset was randomly split into 160 training, 20 validation, and 20 testing samples, as summarized in Table 2.
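Training and inference with YOLOv8s can be expressed with the Ultralytics API roughly as follows; the dataset YAML path, epoch count, and batch size are illustrative assumptions, as the paper does not report its training hyperparameters.

```python
from ultralytics import YOLO

# Vertebra localization model: fine-tune pretrained YOLOv8s weights on the CSR dataset.
model = YOLO("yolov8s.pt")
model.train(
    data="cervical_spine.yaml",   # hypothetical dataset YAML listing the train/val splits and classes C2-C7
    imgsz=512,                    # matches the 512 x 512 preprocessing size
    epochs=100,                   # assumed; not reported in the paper
    batch=16,                     # assumed; not reported in the paper
)

# Evaluate on the held-out 20-image test split, then run inference on one radiograph.
metrics = model.val(split="test")
results = model("example_csr.png")    # returns vertebra boxes with confidence scores
```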

2.4. Automated Measurement Distance

This study proposed an automated system for measuring intervertebral distances, with standardized outputs serving as valuable references for clinical diagnosis. To ensure accurate and reliable distance computation, the system integrated image enhancement, edge detection, and coordinate transformation methods.
To ensure high image quality and preserve the clarity and integrity of vertebral structures, multiple image enhancement techniques were applied, particularly because raw X-ray images often contained irrelevant regions such as soft tissues or imaging artifacts. First, contrast stretching was used to enhance fine details and improve the visibility of vertebral structures under low-contrast conditions. Next, a Gaussian blur filter was applied to smooth the image and reduce edge artifacts that could interfere with further analysis. The image was then binarized, effectively separating the vertebral region from the background and preserving the continuity of the skeletal structures. To further emphasize vertebral boundaries and eliminate small-scale noise, morphological operations were employed. Finally, to ensure that only the vertebral region was retained, the largest connected component was identified and extracted, removing residual non-target regions.
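The enhancement sequence can be sketched as follows; the percentile range for contrast stretching, the kernel sizes, and the use of Otsu thresholding for the binarization step are assumptions not specified in the paper.

```python
import cv2
import numpy as np

def enhance_vertebra_roi(roi):
    """Enhance a YOLO-cropped vertebra patch before landmark extraction."""
    # Contrast stretching: map the 2nd-98th percentile range onto 0-255 (percentiles assumed).
    lo, hi = np.percentile(roi, (2, 98))
    stretched = np.clip((roi.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1.0), 0, 255).astype(np.uint8)

    # Gaussian blur to suppress edge artifacts, then binarization (Otsu assumed).
    blurred = cv2.GaussianBlur(stretched, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening and closing to remove small noise and fill boundary gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)

    # Keep only the largest connected component, i.e., the vertebral body.
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    if n_labels > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        cleaned = np.where(labels == largest, 255, 0).astype(np.uint8)
    return cleaned
```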
After image enhancement, the study localized central points separately along the vertebral boundary and along the lateral boundary of the neural foramen. Using the vertebra in Figure 5b as an example, within the largest white contour the bottom-left and bottom-right extreme points were first identified, shown as green dots in Figure 5b. These two points were then used to compute the center along the x-axis. The intersection of this x-axis with the lower white boundary was taken as the lower reference point of the vertebra, shown as a red dot in Figure 5b. Applying the same procedure to the upper boundary yielded the upper reference point of the vertebra, indicated by a blue dot in Figure 5b.
Following central-point localization along the vertebral boundary and the lateral border of the neural foramen, the Euclidean distance formula was applied to compute the geometric distance between the two points, as shown in Equation (3). A schematic diagram illustrating the distance calculation is presented in Figure 6.
$d = \sqrt{(a_2 - a_1)^{2} + (b_2 - b_1)^{2}}$ (3)
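A simplified sketch of the reference-point extraction and the distance computation of Equation (3) is shown below; the five-pixel tolerance band used to locate the bottom extreme points is an assumption introduced for illustration only.

```python
import numpy as np

def lower_reference_point(mask, band=5):
    """Midpoint between the bottom-left and bottom-right extreme points of a
    binarized vertebra mask, dropped onto the lower white boundary (cf. Figure 5b)."""
    ys, xs = np.nonzero(mask)
    bottom = ys.max()
    near_bottom = ys >= bottom - band                   # tolerance band near the lower edge (assumed)
    left_x, right_x = xs[near_bottom].min(), xs[near_bottom].max()
    cx = (int(left_x) + int(right_x)) // 2              # center along the x-axis
    col = ys[xs == cx]
    cy = int(col.max()) if col.size else int(bottom)    # intersection with the lower white boundary
    return cx, cy

def euclidean_distance(p1, p2):
    """Pixel distance between two reference points, Eq. (3)."""
    (a1, b1), (a2, b2) = p1, p2
    return float(np.hypot(a2 - a1, b2 - b1))
```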

3. Results

For clarity and completeness, the performance evaluation was discussed in three distinct parts: cervical spine cropping, vertebra localization, and intervertebral distance measurement. Performance evaluation was conducted using Accuracy, Precision, Recall, and Mean Average Precision (mAP) metrics to ensure objectivity and consistency, with their definitions provided in Equations (4)–(8). In this context, true positives ($T_p$) and true negatives ($T_n$) represent correctly predicted positive and negative samples, respectively, while false positives ($F_p$) and false negatives ($F_n$) denote incorrectly predicted positive and negative samples.
$\mathrm{Accuracy} = \dfrac{T_p + T_n}{T_p + F_p + T_n + F_n}$ (4)
$\mathrm{Precision} = \dfrac{T_p}{T_p + F_p}$ (5)
$\mathrm{Recall} = \dfrac{T_p}{T_p + F_n}$ (6)
$AP = \displaystyle\int_0^1 \mathrm{Precision}(\mathrm{Recall}) \, d\,\mathrm{Recall}$ (7)
$mAP = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} AP_i$ (8)
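Equations (4)–(8) can be computed directly from detection counts and a sampled precision–recall curve, as in the following sketch; the trapezoidal approximation of the AP integral is an implementation choice, not necessarily the one used by the YOLOv8 toolchain.

```python
import numpy as np

def basic_metrics(tp, fp, fn, tn):
    """Accuracy, Precision, and Recall from detection counts, Eqs. (4)-(6)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

def average_precision(precisions, recalls):
    """Eq. (7): area under the precision-recall curve, here approximated
    with the trapezoidal rule over sampled (precision, recall) pairs."""
    order = np.argsort(recalls)
    p = np.asarray(precisions)[order]
    r = np.asarray(recalls)[order]
    return float(np.trapz(p, r))

# Eq. (8): mAP averages the per-class AP values, e.g. one AP per vertebra class C2-C7.
# map50 = float(np.mean([average_precision(p_c, r_c) for p_c, r_c in per_class_curves]))
```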

3.1. The Performance for Cervical Spine Localization

Compared with using the original input images, the accuracy of vertebral localization was greatly increased by cropping based on the cervical spine localization algorithm. As demonstrated in Table 3, the cropped images successfully eliminated extra background and noise, improving the model’s capacity to identify cervical spine structures. By increasing accuracy from 93.30% to 99.50%, this method highlighted the significance of cervical spine localization in enhancing model precision, especially for difficult vertebrae such as C7. Notably, the localization accuracy for the challenging C7 vertebra increased from 87.00% to 100.00%, a substantial improvement.

3.2. The Performance for Model Analysis

Through K-fold cross-validation, the proposed system achieved an average accuracy of 98.00% in vertebra localization, with recall and mAP50 reaching 97.46% and 98.60%, respectively, as shown in Table 4. In neural foramen detection, the model also attained excellent performance, achieving precision, recall, and mAP50 values all exceeding 95.50%, as presented in Table 5. These results confirmed the high reliability and detection accuracy of the model, further validating the system’s stability and generalization capability across different data subsets.
In addition, the vertebra localization performance of the proposed method was compared with existing approaches, as summarized in Table 6. The results indicated that the proposed system significantly outperformed both traditional methods [31,32] and more recent studies [33,34] in terms of localization accuracy. This advantage was particularly evident in challenging cases such as the C7 vertebra. Unlike previous methods that often struggled with complex vertebral structures, the proposed approach achieved 100.00% localization accuracy for C7, representing a notable improvement of at least 10.74% over existing methods. Overall, the proposed system maintained an average localization accuracy of 99.50% across all vertebrae, confirming its stable performance and high precision in both routine and complex localization scenarios.
Table 6. Comparative analysis of vertebra localization accuracy across different studies.
Vertebra    Method in [31]    Method in [32]    Method in [33]    Method in [34]    This Work
Overall     93.76%            89.00%            64.50%            91.63%            99.50%
C2          N/A               N/A               77.50%            91.70%            99.30%
C3          96.74%            95.00%            33.33%            92.20%            99.30%
C4          96.65%            97.50%            63.33%            92.30%            99.40%
C5          95.51%            95.00%            63.33%            91.60%            99.70%
C6          95.33%            97.50%            85.00%            91.70%            99.60%
C7          84.55%            60.00%            N/A               90.30%            100.00%
An example of the output generated by the proposed system is shown in Figure 7, which presents the automated localization and labeling of vertebrae and neural foramina. In the result, each vertebra from C2 to C7 is labeled along with its confidence score. For instance, the model assigned a confidence score of 0.91 to C2 and 0.87 to C7, providing clinicians with a visual reference and an indication of the model’s prediction certainty.

3.3. Measurement Distance Analysis

In the measurement distance analysis section, the distances computed by the proposed system were compared with those manually annotated by doctors. As shown in Figure 8 and Figure 9, a strong visual similarity in linear trends was observed between the two sets of measurements. To objectively validate this trend, five data cases were tested, and the Pearson product-moment correlation coefficient (PPMCC) was calculated. The results, presented in Table 7, showed a high degree of correlation, exceeding 90% for both intervertebral distances and vertebra-to-neural foramen distances. Notably, the correlation for intervertebral distance measurement reached as high as 97.5%, confirming the high reliability and accuracy of the proposed system’s distance computation.
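The PPMCC values in Table 7 correspond to the standard Pearson correlation between the two sets of measurements, which can be computed as follows; the example values in the comment are placeholders, not the study’s data.

```python
import numpy as np

def ppmcc(system_distances, doctor_distances):
    """Pearson product-moment correlation between automated and manual measurements."""
    return float(np.corrcoef(system_distances, doctor_distances)[0, 1])

# Hypothetical usage with placeholder pixel distances (not the study's actual data):
# ppmcc([31.2, 29.8, 28.5, 27.9, 30.1], [30.9, 29.5, 28.8, 27.5, 30.4])
```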

4. Discussion

This study proposed an efficient and highly accurate system for the automatic detection and localization of vertebrae and neural structures in CSR images based on the YOLOv8s model. The system successfully identified and labeled the C2 to C7 vertebrae and corresponding neural structures. The proposed method achieved an overall localization accuracy of 99.50%, markedly outperforming prior approaches. Notably, this study achieved a major breakthrough in the localization of the typically challenging C7 vertebra, attaining 100.00% accuracy.
Compared with existing methods, the approaches in [31,32] relied on preprocessing to enhance vertebral contours and then performed template-based shape matching, while [33] used YOLOv3 and [34] combined U-Net with Mask R-CNN for vertebra localization without image preprocessing. In contrast, this study integrated image preprocessing with targeted cervical region extraction to amplify vertebral and foraminal features and to remove non-target areas before model training, thereby mitigating background interference. The results substantiated that this strategy localized C2–C7 more precisely and efficiently than prior methods.
In addition to providing accurate localization of vertebrae and neural structures, the system also incorporated automated measurement of intervertebral distances and distances between vertebrae and adjacent neural foramina. The measured distances showed correlations above 90% with neurosurgeon annotations, with the correlation of intervertebral distance reaching 97.5%, underscoring the high reliability of the system’s distance computation. These measurements facilitated longitudinal tracking of intervertebral and vertebra-to-neural foramen spacing and served as valuable clinical indicators for assessing intervertebral disc degeneration, spinal stability, and potential nerve compression.
A current limitation is that pixel spacing was not considered, so the results are reported in pixels rather than physical units, precluding direct correspondence to clinical lengths. Future work will calibrate pixel measurements to physical units (millimeters and centimeters), further optimize the model to improve accuracy, and extend the system to identify age-related spinal conditions such as osteophytes and intervertebral disc narrowing.

5. Conclusions

This study presented an automated cervical vertebra localization and distance measurement system capable of accurately detecting and identifying each vertebra in CSR images. Through image preprocessing, the structural features of each vertebra were significantly enhanced. By integrating image enhancement methods with the YOLOv8s model, the system achieved highly accurate identification of vertebral and neural positions. The recognition results were subsequently overlaid onto the CSR images, along with the computed intervertebral distances and the distances between each vertebra and the adjacent neural structures, thereby providing comprehensive visual information for clinical evaluation. This efficient and accurate automated system not only reduced manual annotation time and labor costs but also demonstrated strong potential for clinical diagnostics and telemedicine applications. The main contributions of this study are summarized as follows:
1. Increase in cervical spine localization accuracy:
In the preprocessing step of this study, cervical spine localization was performed to extract the cervical spine region for subsequent image enhancement and recognition. This method effectively eliminated irrelevant background and noise. The experimental results confirmed that, compared to the baseline accuracy of 93.30% without preprocessing, the proposed method significantly improved the accuracy to 99.50%.
2. Highly accurate localization of C2 to C7 vertebrae:
By incorporating image preprocessing and enhancement methods, this study effectively accentuated the features of each vertebra, resulting in a substantial boost in localization performance. The proposed method achieved an outstanding overall vertebra localization accuracy of 99.50%, with even the anatomically challenging C2 and C7 vertebrae surpassing 99% accuracy. Notably, the accuracy for C7 localization improved by approximately 66.67% compared to existing methods, which reached only 60.00%, marking a significant advancement in the field.
3. Automated positioning, labeling, and measurement system:
The system proposed in this study was based on YOLOv8s and was capable of automatically detecting and localizing vertebrae and neural structures, as well as measuring the intervertebral distances and the distances between vertebrae and neural structures. These measurements provided critical data for assessing spinal stability, intervertebral disc degeneration, and nerve compression. The system ensured consistent and accurate localization and measurement, while significantly reducing the need for manual annotation and data processing time.
In addition, this study introduced a user-friendly interface designed for healthcare professionals to facilitate the intuitive and practical application of the system in clinical settings. It was expected that this work would contribute to cervical spine healthcare by providing clinicians with an auxiliary tool to enhance workflow efficiency and improve patient care, ultimately benefiting both medical personnel and patients.

Author Contributions

Conceptualization, T.-K.C. and S.-T.L.; methodology, Y.-Y.H., T.-K.C. and T.-Y.C.; software, C.-S.L. and S.-T.L.; validation, C.-S.L. and S.-T.L.; formal analysis, H.-K.W. and S.-H.T.; investigation, C.-S.L.; resources, H.-K.W. and S.-H.T.; data curation, H.-K.W. and S.-H.T.; writing—original draft preparation, Y.-Y.H., K.-C.L. and P.A.R.A.; writing—review and editing, Y.-Y.H. and P.A.R.A.; visualization, K.-C.L. and W.-C.T.; supervision, T.-Y.C. and W.-C.T.; project administration, K.-C.L. and W.-C.T.; funding acquisition, T.-K.C. and T.-Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by the National Science and Technology Council, Taiwan, under grant numbers 112-2410-H-197-002-MY2, 113-2314-B-182A-140, 113-2221-E-131-026, 114-2221-E-035-032, 114-2221-E-131-009, and 114-2314-B-182A-051. This work was also supported by the Feng Chia University Research Program under grant no. 24H00810.

Institutional Review Board Statement

Chang Gung Medical Foundation Institutional Review Board; IRB number: 202401261B0; Date of Approval: 1 September 2024; Protocol Title: Using Artificial Intelligence Image Analysis in Cervical Disease; Executing Institution: Chang Gung Medical Foundation Linkou Chang Gung Memorial Hospital; Duration of Approval: From 1 September 2024 to 31 August 2025. The Institutional Review Board (IRB) reviewed the study and determined that it qualified for expedited review, as it involves case research or cases treated or diagnosed by clinical routines. However, this does not include HIV-positive cases.

Informed Consent Statement

The Chang Gung Medical Foundation Institutional Review Board approves the waiver of the participants’ consent. The research does not adversely affect the rights and welfare of the subjects. The study uses de-identified or non-traceable data, records, documents, information, or specimens obtained from a legally established biological database, ensuring that individual identities cannot be identified.

Data Availability Statement

The datasets presented in this article are not readily available because they are part of an ongoing study and will be made available only after the completion of data collection and analysis. Requests to access the datasets should be directed to the corresponding authors at simonchi@mail.mcut.edu.tw or tsungychen@fcu.edu.tw.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tseng, W.C.; Liao, C.Y.; Chassagne, L.; Cagneau, B. An ink-insensitive deep learning model for improving the printing quality in extrusion-based bioprinting. Int. J. Bioprint. 2025, 11, 599–614. [Google Scholar] [CrossRef]
  2. Jose, R.; Thomas, A.; Guo, J.; Steinberg, R.; Toma, M. Evaluating machine learning models for prediction of coronary artery disease. Glob. Transl. Med. 2024, 3, 2669. [Google Scholar] [CrossRef]
  3. Zhang, F.; Zheng, L.; Chen, Y.; Lin, C.; Huang, L.; Bai, Y.; Luo, X. Fully Automatic Cervical Vertebrae Segmentation Via Enhanced U2-Net. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 2900–2904. [Google Scholar] [CrossRef]
  4. Cina, A.; Bassani, T.; Panico, M.; Luca, A.; Masharawi, Y.; Brayda-Bruno, M.; Galbusera, F. 2-step deep learning model for landmarks localization in spine radiographs. Sci. Rep. 2021, 11, 9482. [Google Scholar] [CrossRef] [PubMed]
  5. Hu, X.; Kenan, S.; Cheng, M.; Cai, W.; Huang, W.; Yan, W. 3D-Printed Patient-Customized Artificial Vertebral Body for Spinal Reconstruction after Total En Bloc Spondylectomy of Complex Multi-Level Spinal Tumors. Int. J. Bioprint. 2022, 8, 576. [Google Scholar] [CrossRef] [PubMed]
  6. Chang, C.Y.; Hsieh, M.H.; Hsu, S.M. Localization of Fresh and Old Fracture in Spine CT Images Using YOLOR. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics—Taiwan, Taipei, Taiwan, 6–8 July 2022; pp. 253–254. [Google Scholar] [CrossRef]
  7. Pham, D.L.; Xu, C.; Prince, J.L. Current Methods in Medical Image Segmentation. Annu. Rev. Biomed. Eng. 2000, 2, 315–337. [Google Scholar] [CrossRef]
  8. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J. Biomed. Health Inform. 2020, 24, 1837–1857. [Google Scholar] [CrossRef] [PubMed]
  9. Huang, C.H. A fast method for spine localization in x-ray images. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5091–5094. [Google Scholar] [CrossRef]
  10. Long, L.R.; Thoma, G.R. Use of shape models to search digitized spine X-rays. In Proceedings of the 13th IEEE Symposium on Computer-Based Medical Systems (CBMS 2000), Houston, TX, USA, 24 June 2000; pp. 255–260. [Google Scholar] [CrossRef]
  11. Shemesh, S.; Kimchi, G.; Yaniv, G.; Harel, R. MRI-based detection of cervical ossification of the posterior longitudinal ligament using a novel automated machine learning diagnostic tool. Neurosurg. Focus 2023, 54, E11. [Google Scholar] [CrossRef] [PubMed]
  12. Fujimori, T.; Le, H.; Hu, S.S.; Chin, C.; Pekmezci, M.; Schairer, W.; Tay, B.K.; Hamasaki, T.; Yoshikawa, H.; Iwasaki, M. Ossification of the posterior longitudinal ligament of the cervical spine in 3161 patients: A CT-based study. Spine 2015, 40, E394–E403. [Google Scholar] [CrossRef] [PubMed]
  13. Matsunaga, S.; Sakou, T. Ossification of the posterior longitudinal ligament of the cervical spine: Etiology and natural history. Spine 2012, 37, E309–E314. [Google Scholar] [CrossRef] [PubMed]
  14. Sun, B.; Xu, C.; Qi, M.; Shen, X.; Zhang, K.; Yuan, W.; Liu, Y. Predictive Effect of Intervertebral Foramen Width on Pain Relief After ACDF for the Treatment of Cervical Radiculopathy. Glob. Spine J. 2023, 13, 133–139. [Google Scholar] [CrossRef] [PubMed]
  15. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar] [CrossRef]
  16. Yaseen, M.; Ali, M.; Ali, S.; Hussain, A.; Athar, A.; Kim, H.C. Deep Learning Based Cervical Spine Bones Detection: A Case Study Using YOLO. In Proceedings of the 2024 26th International Conference on Advanced Communications Technology (ICACT), Pyeong Chang, Republic of Korea, 4–7 February 2024; pp. 1–5. [Google Scholar] [CrossRef]
  17. Saenpaen, J.; Arwatchananukul, S.; Aunsri, N. A Comparison of Image Enhancement Methods for Lumbar Spine X-ray Image. In Proceedings of the 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Chiang Rai, Thailand, 18–21 July 2018; pp. 798–801. [Google Scholar] [CrossRef]
  18. Ikhsan, I.A.M.; Hussain, A.; Zulkifley, M.A.; Tahir, N.M.; Mustapha, A. An analysis of x-ray image enhancement methods for vertebral bone segmentation. In Proceedings of the 2014 IEEE 10th International Colloquium on Signal Processing and Its Applications, Kuala Lumpur, Malaysia, 7–9 March 2014; pp. 208–211. [Google Scholar] [CrossRef]
  19. Malik, S.H.; Lone, T.A.; Quadri, S.M.K. Contrast enhancement and smoothing of CT images for diagnosis. In Proceedings of the 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 11–13 March 2015; pp. 2214–2219. Available online: https://ieeexplore.ieee.org/document/7100631 (accessed on 19 May 2024).
  20. Isnanto, R.R.; Windarto, Y.E.; Mangkuratmaja, M.V. Assessment on Image Quality Changes as a Result of Implementing Median Filtering, Wiener Filtering, Histogram Equalization, and Hybrid Methods on Noisy Images. In Proceedings of the 2020 7th International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE), Semarang, Indonesia, 24–25 September 2020; pp. 185–190. [Google Scholar] [CrossRef]
  21. Windyga, P.S. Fast impulsive noise removal. IEEE Trans. Image Process. 2001, 10, 173–179. [Google Scholar] [CrossRef]
  22. Serrano-Díaz, D.G.; Gómez, W.; Vera, A.; Leija, L. Contrast Enhancement of 3D X-ray Microtomography Using CLAHE for Trabecular Bone Segmentation. In Proceedings of the 2023 Global Medical Engineering Physics Exchanges/Pacific Health Care Engineering (GMEPE/PAHCE), Songdo, Republic of Korea, 27–31 March 2023; pp. 1–6. [Google Scholar] [CrossRef]
  23. Hummel, R.A. Histogram modification techniques. Comput. Graph. Image Process. 1975, 4, 209–224. [Google Scholar] [CrossRef]
  24. Badriyah, T.; Sakinah, N.; Syarif, I.; Syarif, D.R. Segmentation Stroke Objects based on CT Scan Image using Thresholding Method. In Proceedings of the 2019 First International Conference on Smart Technology & Urban Development (STUD), Chiang Mai, Thailand, 13–14 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  25. Benjelloun, M.; Mahmoudi, S. Spine Localization and Vertebral Mobility Analysis Using Faces Contours Detection. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 6557–6560. [Google Scholar] [CrossRef]
  26. Wang, Q.P.; Wang, T.P.; Zhang, K. Image edge detection based on the grey prediction model and discrete wavelet transform. In Proceedings of the 2011 IEEE International Conference on Grey Systems and Intelligent Services, Nanjing, China, 15–18 September 2011; pp. 617–621. [Google Scholar] [CrossRef]
  27. Qin, C.; Zhou, J.; Yao, D.; Zhuang, H.; Wang, H.; Chen, S.; Shi, Y.; Song, Z. Vertebrae Labeling via End-to-End Integral Regression Localization and Multi-Label Classification Network. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 2726–2736. [Google Scholar] [CrossRef] [PubMed]
  28. Wang, F.; Zheng, K.; Lu, L.; Xiao, J.; Wu, M.; Miao, S. Automatic Vertebra Localization and Identification in CT by Spine Rectification and Anatomically-constrained Optimization. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 5276–5284. [Google Scholar] [CrossRef]
  29. Sutradhar, D.; Fahad, N.M.; Khan Raiaan, M.A.; Jonkman, M.; Azam, S. Cervical spine fracture detection utilizing YOLOv8 and deep attention-based vertebrae classification ensuring XAI. Biomed. Signal Process. Control 2025, 101, 107228. [Google Scholar] [CrossRef]
  30. Yaseen, M.; Ali, M.; Ali, S.; Hussain, A.; Joo, M.I.; Kim, H.C. Cervical Spine Fracture Detection and Classification Using Two-Stage Deep Learning Methodology. IEEE Access 2024, 12, 72131–72142. [Google Scholar] [CrossRef]
  31. Mehmood, A.; Akram, M.U.; Tariq, A. Vertebra localization and centroid detection from cervical radiographs. In Proceedings of the 2017 International Conference on Communication, Computing and Digital Systems (C-CODE), Islamabad, Pakistan, 8–9 March 2017; pp. 287–292. [Google Scholar] [CrossRef]
  32. Larhmam, M.A.; Mahmoudi, S.; Benjelloun, M. Semi-automatic detection of cervical vertebrae in X-ray images using generalized hough transform. In Proceedings of the 2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey, 15–18 October 2012; pp. 396–401. [Google Scholar] [CrossRef]
  33. Jiang, F.; Abdulqader, A.A.; Yan, Y.; Cheng, F.; Xiang, T.; Yu, J.; Li, J.; Qiu, Y.; Chen, X. Deep learning based quantitative cervical vertebral maturation analysis. Head Face Med. 2025, 21, 20. [Google Scholar] [CrossRef] [PubMed]
  34. Chen, Y.; Mo, Y.; Readie, A.; Ligozio, G.; Mandal, I.; Jabbar, F.; Coroller, T.; Papież, B.W. VertXNet: An ensemble method for vertebral body segmentation and identification from cervical and lumbar spinal X-rays. Sci. Rep. 2024, 14, 3341. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Cervical spine radiograph with cervical vertebra labels.
Figure 2. Flow chart of this study.
Figure 3. Results of the cervical spine radiograph after different preprocessing steps. (a) Original image. (b) Median-filtered image. (c) Image after contrast enhancement. (d) Binarized image.
Figure 4. The result of X-ray cervical spine localization. (a) Original cervical spine radiograph. (b) Image after algorithm processing.
Figure 5. Vertebral image enhancement and central point localization. (a) Vertebrae after YOLO-based cropping. (b) Image after preprocessing and central point localization.
Figure 6. Examples of distance calculation. (a) The intervertebral distance. (b) The distance between a vertebra and the neural foramen.
Figure 7. An example of vertebral and neural foraminal localization in this study.
Figure 8. The result of distance measurement between each vertebra and the corresponding neural foramen. (a) This study. (b) Doctor.
Figure 9. The result of distance measurement for each vertebra. (a) This study. (b) Doctor.
Table 1. The hardware and software platforms in this study.
Hardware Platform      Version
CPU                    11th Gen Intel(R) Core(TM) i7-100H
GPU                    GeForce RTX 3070 Laptop GPU, 8 GB
DRAM                   32 GB DDR4, 3200 MHz
Software Platform      Version
Operating System       Windows 11 Home 64-bit
Python IDE             PyCharm 2024.1
Table 2. The training data of YOLOv8s.
         Train    Validation    Test
CSR      160      20            20
Table 3. Impact of cervical spine localization algorithm on vertebrae detection accuracy.
Vertebra    Without Cervical Spine Localization    This Work
Overall     93.30%                                 99.50%
C2          97.10%                                 99.30%
C3          91.40%                                 99.30%
C4          89.40%                                 99.40%
C5          100.00%                                99.70%
C6          95.00%                                 99.60%
C7          87.00%                                 100.00%
Table 4. The K-Fold validation results for vertebra localization.
Fold       Accuracy (%)    Recall (%)    mAP50 (%)    mAP50–95 (%)
Fold 1     99.10           97.70         99.30        80.80
Fold 2     97.20           96.40         98.80        81.10
Fold 3     97.70           96.50         98.10        80.10
Fold 4     96.40           97.10         97.30        80.30
Fold 5     99.60           99.60         99.50        81.90
Average    98.00           97.46         98.60        80.84
Table 5. The detection results for the neural foramen.
Class              Precision (%)    Recall (%)    mAP50 (%)
Neural Foraminal   97.35            95.70         97.70
Table 7. The PPMCCs of intervertebral distances and vertebra-to-neural foramen distances.
Case       PPMCC (Vertebra-to-Neural Foramen Distances)    PPMCC (Intervertebral Distances)
1          0.996                                           0.937
2          0.897                                           0.998
3          0.748                                           0.994
4          0.893                                           0.952
5          0.988                                           0.994
Average    0.904                                           0.975
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
