Search Results (28)

Search Parameters:
Keywords = LumbarNet

21 pages, 3456 KiB  
Article
Precision in 3D: A Fast and Accurate Algorithm for Reproducible Motoneuron Structure and Protein Expression Analysis
by Morgan Highlander, Shelby Ward, Bradley LeHoty, Teresa Garrett and Sherif Elbasiouny
Bioengineering 2025, 12(7), 761; https://doi.org/10.3390/bioengineering12070761 - 14 Jul 2025
Viewed by 242
Abstract
Structural analysis of motoneuron somas and their associated proteins via immunohistochemistry (IHC) remains tedious and subjective, requiring costly software or adapted 2D manual methods that lack reproducibility and analytical rigor. Yet, neurodegenerative disease and aging research demands precise structural comparisons to elucidate mechanisms driving neuronal degeneration. To address this need, we developed a novel algorithm that automates repetitive and subjective IHC analysis tasks, enabling thorough, objective, blinded, order-agnostic, and reproducible 3D batch analysis. With no manual tracing, the algorithm produces 3D Cartesian reconstructions of motoneuron somas from 60× IHC images of mouse lumbar spinal tissue. From these reconstructions, it measures 3D soma volume and efficiently quantitates net somatic protein expression and macro-cluster size. In this validation study, we applied the algorithm to assess soma size and C-bouton expression in various healthy control mice, comparing its measurements against manual measurements and across multiple algorithm users to confirm its accuracy and reproducibility. This novel, customizable tool enables efficient and high-fidelity 3D motoneuron analysis, replacing tedious, qualitative, cell-by-cell manual tuning with automatic threshold adaptation and quantified batch settings. For the first time, we attain reproducible results with quantifiable accuracy, exhaustive sampling, and a high degree of objectivity. Full article
(This article belongs to the Special Issue Data Modeling and Algorithms in Biomedical Applications)
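
As a rough illustration of the kind of batch measurement this abstract describes (not the authors' algorithm), the sketch below thresholds a 3D IHC stack, labels connected soma candidates, and reports per-object volume and summed fluorescence; the Otsu threshold, voxel spacing, and minimum-size filter are assumptions for the example.

```python
# Illustrative sketch only: per-soma volume and net fluorescence from a 3D stack.
# Assumptions: `stack` is a (z, y, x) float array from a 60x IHC acquisition, and
# the voxel spacing below is a placeholder, not the study's calibration.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def measure_somas(stack, voxel_um=(1.0, 0.2, 0.2), min_voxels=500):
    """Return a list of (volume_um3, net_intensity) per labeled 3D object."""
    mask = stack > threshold_otsu(stack)           # global threshold (stand-in for adaptive tuning)
    labels, n = ndimage.label(mask)                # 3D connected components
    voxel_vol = float(np.prod(voxel_um))           # um^3 per voxel
    results = []
    for idx in range(1, n + 1):
        obj = labels == idx
        if obj.sum() < min_voxels:                 # drop small debris
            continue
        results.append((obj.sum() * voxel_vol,     # soma volume
                        float(stack[obj].sum())))  # net protein signal inside the object
    return results

# Example with synthetic data
demo = np.random.rand(20, 128, 128).astype(np.float32)
demo[5:15, 40:70, 40:70] += 2.0                    # one bright "soma"
print(measure_somas(demo)[:1])
```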

17 pages, 3455 KiB  
Article
Segment Anything Model (SAM) and Medical SAM (MedSAM) for Lumbar Spine MRI
by Christian Chang, Hudson Law, Connor Poon, Sydney Yen, Kaustubh Lall, Armin Jamshidi, Vadim Malis, Dosik Hwang and Won C. Bae
Sensors 2025, 25(12), 3596; https://doi.org/10.3390/s25123596 - 7 Jun 2025
Viewed by 941
Abstract
Lumbar spine Magnetic Resonance Imaging (MRI) is commonly used for intervertebral disc (IVD) and vertebral body (VB) evaluation in low back pain. Segmentation of these tissues can provide useful quantitative information such as shape and volume. The objective of the study was to determine the performance of the Segment Anything Model (SAM) and medical SAM (MedSAM), two “zero-shot” deep learning models, in segmenting lumbar IVDs and VBs from MRI images and to compare them against the nnU-Net model. This cadaveric study used 82 donor spines. Manual segmentation was performed to serve as ground truth. Two readers processed the spine MRI using SAM and MedSAM by placing points or drawing bounding boxes around regions of interest (ROIs). The outputs were compared against the ground truths to determine the Dice score, sensitivity, and specificity. Qualitatively, results varied, but overall MedSAM produced more consistent results than SAM; neither matched the performance of nnU-Net. Mean Dice scores for MedSAM were 0.79 for IVDs and 0.88 for VBs, significantly higher (each p < 0.001) than those for SAM (0.64 for IVDs, 0.83 for VBs). Both were lower than those for nnU-Net (0.99 for IVDs and VBs). Sensitivity values also favored MedSAM. These results demonstrate the feasibility of “zero-shot” DL models for segmenting lumbar spine MRI. While their performance falls short of recent models, these zero-shot models offer key advantages: they need no training data and adapt more quickly to other anatomies and tasks. Validation of a generalizable segmentation model for lumbar spine MRI can lead to more precise diagnostics, follow-up, and enhanced back pain research, with potential cost savings from automated analyses, while supporting the broader use of AI and machine learning in healthcare. Full article
(This article belongs to the Section Sensing and Imaging)
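
For readers unfamiliar with the metrics reported here, a minimal NumPy sketch of the Dice score, sensitivity, and specificity computed from binary masks; the array names are placeholders and this is not the study's evaluation code.

```python
# Minimal sketch of Dice, sensitivity, and specificity for binary segmentation masks.
import numpy as np

def dice(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

def sensitivity_specificity(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn + eps), tn / (tn + fp + eps)

# Toy example: an IVD mask predicted by a model vs. manual ground truth
gt = np.zeros((64, 64), bool); gt[20:40, 10:50] = True
pred = np.zeros((64, 64), bool); pred[22:42, 12:52] = True
print(dice(pred, gt), *sensitivity_specificity(pred, gt))
```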

20 pages, 3238 KiB  
Article
Enhanced Disc Herniation Classification Using Grey Wolf Optimization Based on Hybrid Feature Extraction and Deep Learning Methods
by Yasemin Sarı and Nesrin Aydın Atasoy
Tomography 2025, 11(1), 1; https://doi.org/10.3390/tomography11010001 - 26 Dec 2024
Viewed by 1233
Abstract
Due to the increasing number of people working at computers in professional settings, the incidence of lumbar disc herniation is increasing. Background/Objectives: The early diagnosis and treatment of lumbar disc herniation is much more likely to yield favorable results, allowing the hernia to be treated before it develops further. The aim of this study was to classify lumbar disc herniations in a computer-aided, fully automated manner using magnetic resonance images (MRIs). Methods: This study presents a hybrid method integrating residual network (ResNet50), grey wolf optimization (GWO), and machine learning classifiers such as multi-layer perceptron (MLP) and support vector machine (SVM) to improve classification performance. The proposed approach begins with feature extraction using ResNet50, a deep convolutional neural network known for its robust feature representation capabilities. ResNet50’s residual connections allow for effective training and high-quality feature extraction from input images. Following feature extraction, the GWO algorithm, inspired by the social hierarchy and hunting behavior of grey wolves, is employed to optimize the feature set by selecting the most relevant features. Finally, the optimized feature set is fed into machine learning classifiers (MLP and SVM) for classification. The use of various activation functions (e.g., ReLU, identity, logistic, and tanh) in MLP and various kernel functions (e.g., linear, rbf, sigmoid, and polynomial) in SVM allows for a thorough evaluation of the classifiers’ performance. Results: The proposed methodology demonstrates significant improvements in metrics such as accuracy, precision, recall, and F1 score, outperforming traditional approaches in several cases. These results highlight the effectiveness of combining deep learning-based feature extraction with optimization and machine learning classifiers. Conclusions: Compared to other methods, such as capsule networks (CapsNet), EfficientNetB6, and DenseNet169, the proposed ResNet50-GWO-SVM approach achieved superior performance across all metrics, including accuracy, precision, recall, and F1 score, demonstrating its robustness and effectiveness in classification tasks. Full article
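
A condensed sketch of the general pipeline shape described above: deep features from a pretrained ResNet50 feeding a classical classifier. The grey wolf optimization step is not reproduced here; a simple univariate filter stands in for feature selection, and the data loading is left as a placeholder.

```python
# Sketch: ResNet50 features -> (simplified) feature selection -> SVM.
# The paper's GWO-based selection is replaced by SelectKBest for brevity.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()            # expose the 2048-d pooled features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch).numpy()     # shape (N, 2048)

# X = extract_features(list_of_mri_slices); y = herniation_labels  (placeholders)
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=256), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```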

14 pages, 6781 KiB  
Article
Identification of Vertebrae in CT Scans for Improved Clinical Outcomes Using Advanced Image Segmentation
by Sushmitha, M. Kanthi, Vishnumurthy Kedlaya K, Tejasvi Parupudi, Shyamasunder N. Bhat and Subramanya G. Nayak
Signals 2024, 5(4), 869-882; https://doi.org/10.3390/signals5040047 - 16 Dec 2024
Viewed by 1464
Abstract
This study proposes a comprehensive framework for the segmentation and identification of vertebrae in CT scans using a combination of deep learning and traditional machine learning techniques. The Res U-Net architecture is employed to achieve a high model accuracy of 93.62% on the VerSe’20 dataset, demonstrating effective performance in segmenting lumbar and thoracic vertebrae. Feature extraction is enhanced through the application of Otsu’s method, which effectively distinguishes the vertebrae from the surrounding tissue. The proposed method achieves a Dice Similarity Coefficient (DSC) of 87.10% ± 3.72%, showcasing its competitive performance against other segmentation techniques. By accurately extracting vertebral features, this framework assists medical professionals in precise preoperative planning, allowing for the identification and marking of critical anatomical features required during spinal fusion procedures. This integrated approach not only addresses the challenges of vertebrae segmentation but also offers a scalable and efficient solution for analyzing large-scale medical imaging datasets, with the potential to significantly improve clinical workflows and patient outcomes. Full article
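
Otsu's method, mentioned above as the feature-extraction aid, is a standard histogram-based threshold; a small sketch (with a synthetic image standing in for a CT slice) shows how it separates bright vertebral bone from surrounding tissue.

```python
# Sketch of Otsu thresholding on a CT-like slice (synthetic data used here).
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
slice_ct = rng.normal(100, 15, size=(256, 256))                 # soft-tissue-like intensities
slice_ct[96:160, 96:160] = rng.normal(400, 30, size=(64, 64))   # a bright "vertebra"

t = threshold_otsu(slice_ct)        # threshold maximizing between-class variance
bone_mask = slice_ct > t
print(f"Otsu threshold = {t:.1f}, bone fraction = {bone_mask.mean():.3f}")
```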

15 pages, 5620 KiB  
Article
Automated Vertebral Bone Quality Determination from T1-Weighted Lumbar Spine MRI Data Using a Hybrid Convolutional Neural Network–Transformer Neural Network
by Kristian Stojšić, Dina Miletić Rigo and Slaven Jurković
Appl. Sci. 2024, 14(22), 10343; https://doi.org/10.3390/app142210343 - 11 Nov 2024
Cited by 2 | Viewed by 1493
Abstract
Vertebral bone quality (VBQ) is a promising new method that can improve screening for osteoporosis. The drawback of the current method is that it requires manual determination of the regions of interest (ROIs) of vertebrae and cerebrospinal fluid (CSF) by a radiologist. In this work, an automatic method for determining the VBQ is proposed, in which the ROIs are obtained using a trained neural network model. A large, publicly available dataset of sagittal lumbar spine MRI images with ground truth segmentations was used to train a BRAU-Net++ hybrid CNN–transformer neural network. The performance of the trained model was evaluated using the Dice similarity coefficient (DSC), accuracy, precision, recall, and intersection-over-union (IoU) metrics. The trained model performed similarly to state-of-the-art lumbar spine segmentation models, with an average DSC value of 0.914 ± 0.007 for the vertebrae and 0.902 for the spinal canal. Four different methods of VBQ determination with automatic segmentation are presented and compared with one-way ANOVA. These methods use different algorithms to extract the CSF from the spinal canal segmentation using T1- and T2-weighted image data and apply erosion to the vertebral ROI to avoid a sharp change in signal intensity (SI) at the edge of the vertebral body. Full article
(This article belongs to the Special Issue Transformer Deep Learning Architectures: Advances and Applications)
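
A minimal sketch of how a VBQ-style score could be computed once vertebral and CSF ROIs are available from segmentation. It assumes the commonly used definition (median vertebral body signal intensity divided by CSF signal intensity) and uses placeholder arrays; it is not the paper's implementation, which compares four CSF-extraction variants.

```python
# Sketch: VBQ from a T1-weighted image plus vertebral and CSF masks.
# Assumes VBQ = median SI over the vertebral body ROIs / median SI of the CSF ROI.
import numpy as np
from scipy import ndimage

def vbq_score(t1_image, vertebra_masks, csf_mask, erode_iter=2):
    vertebral_medians = []
    for mask in vertebra_masks:
        # Erode the ROI so edge voxels with sharp SI changes are excluded
        core = ndimage.binary_erosion(mask, iterations=erode_iter)
        vertebral_medians.append(np.median(t1_image[core]))
    return float(np.median(vertebral_medians) / np.median(t1_image[csf_mask]))

# Toy example with synthetic intensities
img = np.random.normal(300, 20, (128, 128))
v1 = np.zeros_like(img, bool); v1[30:60, 40:80] = True
csf = np.zeros_like(img, bool); csf[70:90, 90:110] = True
img[csf] = np.random.normal(150, 10, img[csf].shape)
print(round(vbq_score(img, [v1], csf), 2))
```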

15 pages, 9671 KiB  
Article
Development of a Method for Estimating the Angle of Lumbar Spine X-ray Images Using Deep Learning with Pseudo X-ray Images Generated from Computed Tomography
by Ryuma Moriya, Takaaki Yoshimura, Minghui Tang, Shota Ichikawa and Hiroyuki Sugimori
Appl. Sci. 2024, 14(9), 3794; https://doi.org/10.3390/app14093794 - 29 Apr 2024
Cited by 2 | Viewed by 1633
Abstract
Background and Objectives: In lumbar spine radiography, the oblique view is frequently utilized to assess the presence of spondylolysis and the morphology of facet joints. It is crucial to instantly determine whether the oblique angle is appropriate for the evaluation and whether a retake is necessary after imaging. This study investigates the feasibility of using a convolutional neural network (CNN) to estimate the angle of lumbar oblique images. Since there are no existing lumbar oblique images with known angles, we aimed to generate synthetic lumbar X-ray images at arbitrary angles from computed tomography (CT) images and to estimate the angles of these images using a trained CNN. Methods: Synthetic lumbar spine X-ray images were created from CT images of 174 individuals by rotating the lumbar spine from 0° to 60° in 5° increments. A line connecting the center of the spinal canal and the spinous process was used as the baseline, and the shooting angle of each synthetic X-ray image was defined by how much it was tilted from this baseline. These images were divided into five subsets and used to train ResNet50, a CNN for image classification, with 5-fold cross-validation. The models were trained for angle estimation by regression and for image classification into 13 classes at 5° increments from 0° to 60°. For model evaluation, the mean squared error (MSE), root mean squared error (RMSE), and correlation coefficient (r) were calculated for the regression analysis, and the area under the curve (AUC) was calculated for the classification. Results: In the regression analysis for angles from 0° to 60°, the MSE was 14.833 degrees², the RMSE was 3.820 degrees, and r was 0.981. The average AUC for the 13-class classification was 0.953. Conclusion: The CNN developed in this study was able to estimate the angle of a lumbar oblique image with high accuracy, suggesting its usefulness. Full article
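
A compact sketch of the two model heads compared above, using torchvision's ResNet50: a single-output regression head for the angle and a 13-class head for the 5° bins. The training loop, data, and losses are omitted or only indicated, so this is an illustration of the setup rather than the study's code.

```python
# Sketch: ResNet50 with either a regression head (angle in degrees)
# or a 13-class head (0-60 degrees in 5-degree bins), as in the comparison above.
import torch
import torch.nn as nn
from torchvision import models

def build_angle_model(task="regression"):
    model = models.resnet50(weights=None)           # pretrained weights omitted for the sketch
    out_dim = 1 if task == "regression" else 13
    model.fc = nn.Linear(model.fc.in_features, out_dim)
    return model

reg_model = build_angle_model("regression")         # train with nn.MSELoss()
cls_model = build_angle_model("classification")     # train with nn.CrossEntropyLoss()

x = torch.randn(2, 3, 224, 224)                     # placeholder pseudo-X-ray batch
print(reg_model(x).shape, cls_model(x).shape)       # torch.Size([2, 1]) torch.Size([2, 13])
```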

15 pages, 3001 KiB  
Article
Development of End-to-End Artificial Intelligence Models for Surgical Planning in Transforaminal Lumbar Interbody Fusion
by Anh Tuan Bui, Hieu Le, Tung Thanh Hoang, Giam Minh Trinh, Hao-Chiang Shao, Pei-I Tsai, Kuan-Jen Chen, Kevin Li-Chun Hsieh, E-Wen Huang, Ching-Chi Hsu, Mathew Mathew, Ching-Yu Lee, Po-Yao Wang, Tsung-Jen Huang and Meng-Huang Wu
Bioengineering 2024, 11(2), 164; https://doi.org/10.3390/bioengineering11020164 - 8 Feb 2024
Cited by 3 | Viewed by 2850
Abstract
Transforaminal lumbar interbody fusion (TLIF) is a commonly used technique for treating lumbar degenerative diseases. In this study, we developed a fully computer-supported pipeline to predict both the cage height and the degree of lumbar lordosis subtraction from the pelvic incidence (PI-LL) after TLIF surgery, utilizing preoperative X-ray images. The automated pipeline comprised two primary stages. First, the pretrained BiLuNet deep learning model was employed to extract essential features from X-ray images. Subsequently, five machine learning algorithms were trained using a five-fold cross-validation technique on a dataset of 311 patients to identify the optimal models to predict interbody cage height and postoperative PI-LL. LASSO regression and support vector regression demonstrated superior performance in predicting interbody cage height and postoperative PI-LL, respectively. For cage height prediction, the root mean square error (RMSE) was calculated as 1.01, and the model achieved the highest accuracy at a height of 12 mm, with exact prediction achieved in 54.43% (43/79) of cases. In most of the remaining cases, the prediction error of the model was within 1 mm. Additionally, the model demonstrated satisfactory performance in predicting PI-LL, with an RMSE of 5.19 and an accuracy of 0.81 for PI-LL stratification. In conclusion, our results indicate that machine learning models can reliably predict interbody cage height and postoperative PI-LL. Full article
(This article belongs to the Special Issue Recent Advance of Machine Learning in Biomedical Image Analysis)
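
As a schematic of the second stage described above (classical regressors on image-derived features), here is a minimal scikit-learn sketch of 5-fold cross-validated LASSO and support vector regression with RMSE scoring; the feature matrix from the BiLuNet stage is replaced by random placeholders.

```python
# Sketch: 5-fold CV of LASSO (cage height) and SVR (postoperative PI-LL)
# on image-derived features. X and y below are random placeholders, not study data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(311, 32))           # e.g., landmark/shape features per patient
y_cage = rng.normal(12, 1.5, 311)        # cage height (mm), placeholder
y_pill = rng.normal(10, 6, 311)          # postoperative PI-LL (degrees), placeholder

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model, y in [("LASSO -> cage height", Lasso(alpha=0.1), y_cage),
                       ("SVR   -> PI-LL", SVR(kernel="rbf", C=10.0), y_pill)]:
    pipe = make_pipeline(StandardScaler(), model)
    rmse = -cross_val_score(pipe, X, y, cv=cv, scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {rmse.mean():.2f} +/- {rmse.std():.2f}")
```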

14 pages, 2303 KiB  
Article
Machine Learning Predicts Decompression Levels for Lumbar Spinal Stenosis Using Canal Radiomic Features from Computed Tomography Myelography
by Guoxin Fan, Dongdong Wang, Yufeng Li, Zhipeng Xu, Hong Wang, Huaqing Liu and Xiang Liao
Diagnostics 2024, 14(1), 53; https://doi.org/10.3390/diagnostics14010053 - 26 Dec 2023
Cited by 4 | Viewed by 2387
Abstract
Background: The accurate preoperative identification of decompression levels is crucial for the success of surgery in patients with multi-level lumbar spinal stenosis (LSS). The objective of this study was to develop machine learning (ML) classifiers that can predict decompression levels using computed tomography myelography (CTM) data from LSS patients. Methods: A total of 1095 lumbar levels from 219 patients were included in this study. The bony spinal canal in CTM images was manually delineated, and radiomic features were extracted. The extracted data were randomly divided into training and testing datasets (8:2). Six feature selection methods combined with 12 ML algorithms were employed, resulting in a total of 72 ML classifiers. The main evaluation indicator for all classifiers was the area under the receiver operating characteristic curve (ROC-AUC), with the precision–recall AUC (PR-AUC) serving as the secondary indicator. The prediction outcome of the ML classifiers was whether a given level required decompression. Results: The embedding linear support vector classifier (embeddingLSVC) was the optimal feature selection method. The feature importance analysis revealed the top 5 of the 15 radiomic predictors, which included 2 texture features, 2 first-order intensity features, and 1 shape feature. Except for the shape feature, these features might be discernible by eye but are hard to quantify. The top two ML classifiers were embeddingLSVC combined with a support vector machine (EmbeddingLSVC_SVM) and embeddingLSVC combined with gradient boosting (EmbeddingLSVC_GradientBoost). These classifiers achieved ROC-AUCs over 0.90 and PR-AUCs over 0.80 in independent testing among the 72 classifiers. Further comparisons indicated that EmbeddingLSVC_SVM appeared to be the optimal classifier, demonstrating superior discrimination ability, slight advantages in Brier scores on the calibration curve, and higher net benefit in the decision curve analysis. Conclusions: ML successfully extracted valuable and interpretable radiomic features from the spinal canal using CTM images and accurately predicted decompression levels for LSS patients. The EmbeddingLSVC_SVM classifier has the potential to assist surgical decision making in clinical practice, as it showed high discrimination, advantageous calibration, and competitive utility in selecting decompression levels in LSS patients using canal radiomic features from CTM. Full article
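
To make the "embedded linear SVC feature selection feeding an SVM" construction concrete, here is a scikit-learn sketch using SelectFromModel with an L1-penalized LinearSVC followed by an RBF SVM, evaluated with ROC-AUC and PR-AUC; the radiomic features and labels are synthetic stand-ins, not the study's data.

```python
# Sketch: embedded feature selection with an L1 LinearSVC, then an SVM classifier,
# scored by ROC-AUC and PR-AUC (average precision). Data below are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC, SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

X, y = make_classification(n_samples=1095, n_features=100, n_informative=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)),
    SVC(kernel="rbf", probability=True),
)
clf.fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]
print(f"ROC-AUC = {roc_auc_score(y_te, p):.3f}, PR-AUC = {average_precision_score(y_te, p):.3f}")
```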

13 pages, 1758 KiB  
Article
Automatic Segmentation and Quantification of Abdominal Aortic Calcification in Lateral Lumbar Radiographs Based on Deep-Learning-Based Algorithms
by Kexin Wang, Xiaoying Wang, Zuqiang Xi, Jialun Li, Xiaodong Zhang and Rui Wang
Bioengineering 2023, 10(10), 1164; https://doi.org/10.3390/bioengineering10101164 - 5 Oct 2023
Cited by 2 | Viewed by 1959
Abstract
To investigate the performance of deep-learning-based algorithms for the automatic segmentation and quantification of abdominal aortic calcification (AAC) in lateral lumbar radiographs, we retrospectively collected 1359 consecutive lateral lumbar radiographs. The data were randomly divided into model development and hold-out test datasets. The model development dataset was used to develop U-shaped fully convolutional network (U-Net) models to segment the landmarks of vertebrae T12–L5, the aorta, and the anterior and posterior aortic calcifications. The AAC lengths were then calculated, resulting in an automatic Kauppila score output. The vertebral levels, AAC scores, and AAC severity were obtained from clinical reports and analyzed by an experienced expert (reference standard) and by the model. Compared with the reference standard, the U-Net model demonstrated good performance in predicting the total AAC score in the hold-out test dataset, with a correlation coefficient of 0.97 (p < 0.001). The overall accuracy for the AAC severity was 0.77 for the model and 0.74 for the clinical report. Additionally, the Kendall coefficient of concordance of the total AAC score prediction was 0.89 between the model-predicted score and the reference standard, and 0.88 between the structured clinical report and the reference standard. In conclusion, the U-Net-based deep learning approach demonstrated a relatively high model performance in automatically segmenting and quantifying AAC. Full article
(This article belongs to the Section Biosignal Processing)
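
For context on the automated scoring output mentioned above: the Kauppila (AAC-24) score grades the anterior and posterior aortic walls in front of L1–L4 from 0 to 3 according to the calcified fraction of each segment. A hedged sketch of that final scoring step follows; the thresholds reflect the commonly cited scheme, and the segment-length inputs are assumed to come from the segmentation model.

```python
# Sketch of Kauppila-style AAC-24 scoring from per-segment calcified lengths.
# Inputs are assumed to be derived from segmentation masks (anterior/posterior
# calcification length vs. aortic wall length in front of each of L1-L4).
def segment_grade(calcified_len, wall_len):
    """0: none, 1: <=1/3 of the wall, 2: >1/3 to 2/3, 3: >2/3 (common Kauppila grading)."""
    if calcified_len <= 0:
        return 0
    frac = calcified_len / wall_len
    if frac <= 1 / 3:
        return 1
    if frac <= 2 / 3:
        return 2
    return 3

def kauppila_total(anterior, posterior, wall_lengths):
    """Sum of anterior + posterior grades over L1-L4 (maximum 24)."""
    return sum(segment_grade(a, w) + segment_grade(p, w)
               for a, p, w in zip(anterior, posterior, wall_lengths))

# Toy example: calcified lengths (mm) per level L1-L4 against 30 mm wall segments
print(kauppila_total(anterior=[0, 5, 12, 20], posterior=[0, 0, 8, 25], wall_lengths=[30, 30, 30, 30]))
```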

11 pages, 2454 KiB  
Article
Automated Detection and Measurement of Dural Sack Cross-Sectional Area in Lumbar Spine MRI Using Deep Learning
by Babak Saravi, Alisia Zink, Sara Ülkümen, Sebastien Couillard-Despres, Jakob Wollborn, Gernot Lang and Frank Hassel
Bioengineering 2023, 10(9), 1072; https://doi.org/10.3390/bioengineering10091072 - 10 Sep 2023
Cited by 6 | Viewed by 2105
Abstract
Lumbar spine magnetic resonance imaging (MRI) is a critical diagnostic tool for the assessment of various spinal pathologies, including degenerative disc disease, spinal stenosis, and spondylolisthesis. The accurate identification and quantification of the dural sack cross-sectional area are essential for the evaluation of these conditions. Current manual measurement methods are time-consuming and prone to inter-observer variability. Our study developed and validated deep learning models, specifically U-Net, Attention U-Net, and MultiResUNet, for the automated detection and measurement of the dural sack area in lumbar spine MRI, using a dataset of 515 patients with symptomatic back pain and externally validating the results based on 50 patient scans. The U-Net model achieved an accuracy of 0.9990 and 0.9987 on the initial and external validation datasets, respectively. The Attention U-Net model reported an accuracy of 0.9992 and 0.9989, while the MultiResUNet model displayed a remarkable accuracy of 0.9996 and 0.9995, respectively. All models showed promising precision, recall, and F1-score metrics, along with reduced mean absolute errors compared to the ground truth manual method. In conclusion, our study demonstrates the potential of these deep learning models for the automated detection and measurement of the dural sack cross-sectional area in lumbar spine MRI. The proposed models achieve high-performance metrics in both the initial and external validation datasets, indicating their potential utility as valuable clinical tools for the evaluation of lumbar spine pathologies. Future studies with larger sample sizes and multicenter data are warranted to validate the generalizability of the model further and to explore the potential integration of this approach into routine clinical practice. Full article
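
The measurement itself (cross-sectional area from a predicted binary mask) reduces to counting mask pixels and scaling by the pixel spacing; a small sketch follows, with the mask and spacing as placeholders rather than outputs of the paper's U-Net variants.

```python
# Sketch: dural sack cross-sectional area (mm^2) from a binary segmentation mask.
import numpy as np

def cross_sectional_area_mm2(mask, pixel_spacing_mm=(0.5, 0.5)):
    """Count foreground pixels and scale by the in-plane pixel area."""
    return float(mask.astype(bool).sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1])

# Toy example: an elliptical "dural sack" mask on an axial slice
yy, xx = np.mgrid[0:256, 0:256]
mask = ((yy - 128) / 20) ** 2 + ((xx - 128) / 12) ** 2 <= 1.0
print(f"{cross_sectional_area_mm2(mask):.1f} mm^2")
```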

18 pages, 10605 KiB  
Article
BUU-LSPINE: A Thai Open Lumbar Spine Dataset for Spondylolisthesis Detection
by Podchara Klinwichit, Watcharaphong Yookwan, Sornsupha Limchareon, Krisana Chinnasarn, Jun-Su Jang and Athita Onuean
Appl. Sci. 2023, 13(15), 8646; https://doi.org/10.3390/app13158646 - 27 Jul 2023
Cited by 22 | Viewed by 13098
Abstract
(1) Background: Spondylolisthesis, a common disease among older individuals, involves the displacement of vertebrae. The condition may gradually manifest with age, allowing for potential prevention through research on predictive algorithms. However, one key issue that hinders research on spondylolisthesis prediction algorithms is the lack of publicly available spondylolisthesis datasets. (2) Purpose: This paper introduces BUU-LSPINE, a new dataset for the lumbar spine. It includes 3600 patients’ plain film images annotated with vertebral position, spondylolisthesis diagnosis, and lumbosacral transitional vertebrae (LSTV) ground truth. (3) Methods: We established an annotation pipeline to create the BUU-LSPINE dataset and evaluated it in three experiments as follows: (1) lumbar vertebrae detection, (2) vertebral corner point extraction, and (3) spondylolisthesis prediction. (4) Results: Lumbar vertebrae detection achieved the highest precision rates of 81.93% on the AP view and 83.45% on the LA view using YOLOv5; vertebral corner point extraction achieved the lowest average error distance of 4.63 mm on the AP view using ResNet152V2 and 4.91 mm on the LA view using DenseNet201. Spondylolisthesis prediction reached the highest accuracy of 95.14% on the AP view and 92.26% on the LA view of a testing set using a Support Vector Machine (SVM). (5) Discussion: The results of the three experiments highlight the potential of BUU-LSPINE for developing and evaluating algorithms for lumbar vertebrae detection and spondylolisthesis prediction. These steps are crucial in advancing the creation of a clinical decision support system (CDSS). Additionally, the findings demonstrate the impact of LSTV conditions on lumbar detection algorithms. Full article
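
As an illustration of why the corner-point experiment matters for the spondylolisthesis task, the sketch below computes a Meyerding-style percentage slip from corner points of two adjacent vertebrae on a lateral view; the point layout and the 5% cut-off are assumptions for the example, not the dataset's annotation protocol.

```python
# Sketch: percentage slip of an upper vertebra over the one below, from lateral-view
# corner points (x increases anteriorly). Layout and 5% cut-off are illustrative only.
def percent_slip(upper_posterior_inferior, lower_posterior_superior, lower_anterior_superior):
    """Anterior displacement of the upper vertebra's posterior-inferior corner relative to
    the posterior-superior corner of the vertebra below, as a % of the lower endplate width."""
    displacement = upper_posterior_inferior[0] - lower_posterior_superior[0]
    endplate_width = abs(lower_anterior_superior[0] - lower_posterior_superior[0])
    return 100.0 * displacement / endplate_width

slip = percent_slip(upper_posterior_inferior=(12.0, 80.0),
                    lower_posterior_superior=(10.0, 82.0),
                    lower_anterior_superior=(45.0, 81.0))
print(f"slip = {slip:.1f}% -> {'listhesis suspected' if slip > 5 else 'within normal range'}")
```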

17 pages, 2061 KiB  
Article
NAMSTCD: A Novel Augmented Model for Spinal Cord Segmentation and Tumor Classification Using Deep Nets
by Ricky Mohanty, Sarah Allabun, Sandeep Singh Solanki, Subhendu Kumar Pani, Mohammed S. Alqahtani, Mohamed Abbas and Ben Othman Soufiene
Diagnostics 2023, 13(8), 1417; https://doi.org/10.3390/diagnostics13081417 - 14 Apr 2023
Cited by 5 | Viewed by 2938
Abstract
Spinal cord segmentation is the process of identifying and delineating the boundaries of the spinal cord in medical images such as magnetic resonance imaging (MRI) or computed tomography (CT) scans. This process is important for many medical applications, including the diagnosis, treatment planning, and monitoring of spinal cord injuries and diseases. The segmentation process involves using image processing techniques to identify the spinal cord in the medical image and differentiate it from other structures, such as the vertebrae, cerebrospinal fluid, and tumors. There are several approaches to spinal cord segmentation, including manual segmentation by a trained expert, semi-automated segmentation using software tools that require some user input, and fully automated segmentation using deep learning algorithms. Researchers have proposed a wide range of system models for segmentation and tumor classification in spinal cord scans, but the majority of these models are designed for a specific segment of the spine. As a result, their performance is limited when applied to the entire spinal cord, which limits their deployment scalability. This paper proposes a novel augmented model for spinal cord segmentation and tumor classification using deep nets to overcome this limitation. The model initially segments all five spinal cord regions and stores them as separate datasets. These datasets are manually tagged with cancer status and stage based on observations from multiple radiologist experts. Multiple Mask Regional Convolutional Neural Networks (MRCNNs) were trained on the various datasets for region segmentation. The results of these segmentations were combined using a combination of VGGNet 19, YoLo V2, ResNet 101, and GoogLeNet models. These models were selected via performance validation on each segment. It was observed that VGGNet-19 was capable of classifying the thoracic and cervical regions, while YoLo V2 was able to efficiently classify the lumbar region, ResNet 101 exhibited better accuracy for sacral-region classification, and GoogLeNet was able to classify the coccygeal region with high accuracy. Due to the use of specialized CNN models for the different spinal cord segments, the proposed model achieved 14.5% better segmentation efficiency, 98.9% tumor classification accuracy, and 15.6% higher speed when averaged over the entire dataset and compared with various state-of-the-art models. This improved performance makes the model suitable for various clinical deployments. Moreover, the performance was consistent across multiple tumor types and spinal cord regions, which makes the model highly scalable for a wide variety of spinal cord tumor classification scenarios. Full article
(This article belongs to the Special Issue Diagnosis of Brain Tumors)

20 pages, 9805 KiB  
Article
Research on Automatic Classification and Detection of Mutton Multi-Parts Based on Swin-Transformer
by Shida Zhao, Zongchun Bai, Shucai Wang and Yue Gu
Foods 2023, 12(8), 1642; https://doi.org/10.3390/foods12081642 - 14 Apr 2023
Cited by 6 | Viewed by 2133
Abstract
In order to realize the real-time classification and detection of mutton multi-part, this paper proposes a mutton multi-part classification and detection method based on the Swin-Transformer. First, image augmentation techniques are adopted to increase the sample size of the sheep thoracic vertebrae and scapulae to overcome the problems of long-tailed distribution and non-equilibrium of the dataset. Then, the performances of three structural variants of the Swin-Transformer (Swin-T, Swin-B, and Swin-S) are compared through transfer learning, and the optimal model is obtained. On this basis, the robustness, generalization, and anti-occlusion abilities of the model are tested and analyzed using the significant multiscale features of the lumbar vertebrae and thoracic vertebrae, by simulating different lighting environments and occlusion scenarios, respectively. Furthermore, the model is compared with five methods commonly used in object detection tasks, namely Sparser-CNN, YoloV5, RetinaNet, CenterNet, and HRNet, and its real-time performance is tested under the following pixel resolutions: 576 × 576, 672 × 672, and 768 × 768. The results show that the proposed method achieves a mean average precision (mAP) of 0.943, while the mAP for the robustness, generalization, and anti-occlusion tests are 0.913, 0.857, and 0.845, respectively. Moreover, the model outperforms the five aforementioned methods, with mAP values that are higher by 0.009, 0.027, 0.041, 0.050, and 0.113, respectively. The average processing time of a single image with this model is 0.25 s, which meets the production line requirements. In summary, this study presents an efficient and intelligent mutton multi-part classification and detection method, which can provide technical support for the automatic sorting of mutton as well as for the processing of other livestock meat. Full article
(This article belongs to the Special Issue Machine Vision Applications in Food)
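
A brief sketch of the transfer-learning comparison described above, instantiating the three Swin variants with a classification head via the timm library; the specific model names and the 8-class head are assumptions for the example, and the detection components of the paper are not reproduced.

```python
# Sketch: comparing Swin-T / Swin-S / Swin-B classification heads with timm.
# The 8 output classes stand in for the mutton part categories.
import timm
import torch

VARIANTS = {
    "Swin-T": "swin_tiny_patch4_window7_224",
    "Swin-S": "swin_small_patch4_window7_224",
    "Swin-B": "swin_base_patch4_window7_224",
}

def build(variant, num_classes=8):
    # pretrained=False keeps the sketch offline; the paper fine-tunes pretrained weights
    return timm.create_model(VARIANTS[variant], pretrained=False, num_classes=num_classes)

x = torch.randn(1, 3, 224, 224)
for name in VARIANTS:
    model = build(name).eval()
    with torch.no_grad():
        print(name, tuple(model(x).shape))   # (1, 8) logits per variant
```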

58 pages, 1476 KiB  
Review
Pituitary Apoplexy in Patients with Pituitary Neuroendocrine Tumors (PitNET)
by Ana-Maria Gheorghe, Alexandra Ioana Trandafir, Nina Ionovici, Mara Carsote, Claudiu Nistor, Florina Ligia Popa and Mihaela Stanciu
Biomedicines 2023, 11(3), 680; https://doi.org/10.3390/biomedicines11030680 - 23 Feb 2023
Cited by 7 | Viewed by 4877
Abstract
Various complications of pituitary neuroendocrine tumors (PitNET) are reported, and an intratumor hemorrhage or infarct underlying pituitary apoplexy (PA) represents an uncommon, yet potentially life-threatening, feature; thus, early recognition and prompt intervention are important. Our purpose is to provide an overview of PA from clinical presentation to management and outcome. This is a narrative review of English-language, PubMed-based original articles from 2012 to 2022 concerning PA, excluding pregnancy- and COVID-19-associated PA and non-spontaneous PA (prior specific therapy for PitNET). We identified 194 original papers including 1452 patients with PA (926 males, 525 females, and one transgender male; a male-to-female ratio of 1.76; mean age at PA diagnosis of 50.52 years, the youngest being 9 and the oldest 85). Clinical presentation included severe headache in the majority of cases (with some registered exceptions); a neuro-ophthalmic panel with nausea and vomiting, meningism, and cerebral ischemia; decreased visual acuity, up to complete blindness in two cases; visual field defects such as hemianopia; cranial nerve palsies manifesting as diplopia in the majority, followed by ptosis and ophthalmoplegia (the most frequently affected cranial nerve was the oculomotor nerve and, rarely, the abducens and trochlear); and proptosis (N = 2 cases). The main risk factors are high blood pressure, followed by diabetes mellitus. Qualitative analysis also pointed out infections, trauma, hematologic conditions (thrombocytopenia, polycythemia), Takotsubo cardiomyopathy, and T3 thyrotoxicosis. Iatrogenic elements may be classified into three main categories: medication, diagnostic tests and techniques, and surgical procedures. The first group is dominated by anticoagulant and antiplatelet drugs; additionally, at a low level of statistical evidence, we mention androgen deprivation therapy for prostate cancer, chemotherapy, thyroxine therapy, oral contraceptives, and phosphodiesterase 5 inhibitors. The second category includes the dexamethasone suppression test, clomiphene use, combined endocrine stimulation tests, and a regadenoson myocardial perfusion scan. The third category involves major surgery, laparoscopic surgery, coronary artery bypass surgery, mitral valvuloplasty, endonasal surgery, and lumbar fusion surgery in a prone position. PA in PitNETs still represents a challenging condition requiring a multidisciplinary team from first presentation to short- and long-term management. Controversies involve the specific panel of risk factors and adequate protocols with regard to neurosurgical decisions and their timing versus a conservative approach. The present decade-based analysis, to our knowledge the largest so far on published cases, confirms the lack of a unanimous approach and criteria for intervention, a large panel of circumstantial events and potential triggers with different levels of statistical significance, in addition to a heterogeneous clinical picture (if any, as seen in subacute PA) and a spectrum of evolution that varies from spontaneous remission and control of PitNET-associated hormonal excess to exitus. Awareness is mandatory. A total of 25 cohorts with more than 10 PA cases each have been published so far, with the largest enrolling around 100 patients. Further studies are necessary. Full article

15 pages, 3554 KiB  
Article
Deep Learning-Based Approaches for Classifying Foraminal Stenosis Using Cervical Spine Radiographs
by Jiho Park, Jaejun Yang, Sehan Park and Jihie Kim
Electronics 2023, 12(1), 195; https://doi.org/10.3390/electronics12010195 - 31 Dec 2022
Cited by 7 | Viewed by 4435
Abstract
Various disease detection models based on deep learning algorithms using medical images (MRI, CT, and X-ray) have been actively explored in medicine and computer vision. For diseases related to the spine, primarily MRI-based or CT-based studies have been conducted, but most of these studies have focused on the lumbar spine rather than the cervical spine. Foraminal stenosis offers important clues in diagnosing cervical radiculopathy, and it is usually detected based on MRI data because it is difficult even for experts to diagnose using only an X-ray examination. However, MRI examinations are expensive, placing a potential burden on patients. Therefore, this paper proposes a novel model for diagnosing foraminal stenosis using only X-ray images. In addition, we propose methods suitable for cervical spine X-ray images to improve the performance of the proposed classification model. First, the proposed model adopts data preprocessing and augmentation methods, including Histogram Equalization, Flip, and Spatial Transformer Networks. Second, we apply fine-tuned transfer learning using a pre-trained ResNet50 with cervical spine X-ray images. Compared to the basic ResNet50 model, the proposed method improves the performance of foraminal stenosis diagnosis by approximately 5.3–6.9%, 5.2–6.5%, 5.4–9.2%, and 0.8–4.3% in accuracy, F1 score, specificity, and sensitivity, respectively. We expect that the proposed model can contribute towards reducing the cost of expensive examinations by detecting foraminal stenosis using X-ray images only. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering, Volume II)
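
A condensed sketch of the preprocessing-plus-fine-tuning recipe outlined above: histogram equalization and horizontal flips on the radiographs, then a pretrained ResNet50 with a two-class head. The spatial transformer network component is omitted, and the transforms shown are generic stand-ins rather than the paper's exact pipeline.

```python
# Sketch: histogram equalization + flip augmentation feeding a fine-tuned ResNet50
# with a binary (stenosis / no stenosis) head. The STN module from the paper is omitted.
import torch.nn as nn
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel; replicate to RGB
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomEqualize(p=1.0),              # histogram equalization
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)      # foraminal stenosis: present / absent

# Typical fine-tuning setup: nn.CrossEntropyLoss() with a small learning rate,
# e.g. torch.optim.Adam(model.parameters(), lr=1e-4), over a labeled X-ray dataset.
```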