Search Results (36)

Search Parameters:
Journal = Tomography
Section = Artificial Intelligence in Medical Imaging

19 pages, 3835 KiB  
Article
Structured Transformation of Unstructured Prostate MRI Reports Using Large Language Models
by Luca Di Palma, Fatemeh Darvizeh, Marco Alì and Deborah Fazzini
Tomography 2025, 11(6), 69; https://doi.org/10.3390/tomography11060069 - 17 Jun 2025
Viewed by 536
Abstract
Objectives: To assess the ability of high-performing open-weight large language models (LLMs) to extract key radiological features from prostate MRI reports. Methods: Five LLMs (Llama3.3, DeepSeek-R1-Llama3.3, Phi4, Gemma-2, and Qwen2.5-14B) were used to analyze free-text MRI reports retrieved from clinical practice. Each LLM processed reports three times using specialized prompts to extract (1) dimensions, (2) volume and PSA density, and (3) lesion characteristics. An experienced radiologist manually annotated the dataset, defining entities (Exam) and sub-entities (Lesion, Dimension). Feature- and physician-level performance were then assessed. Results: 250 MRI exams reported by 7 radiologists were analyzed by the LLMs. Feature-level performance showed that DeepSeek-R1-Llama3.3 exhibited the highest average score (98.6% ± 2.1%), followed by Phi4 (98.1% ± 2.2%), Llama3.3 (98.0% ± 3.0%), Qwen2.5 (97.5% ± 3.9%), and Gemma2 (96.0% ± 3.4%). All models excelled at extracting PSA density (100%) and volume (≥98.4%), while lesion extraction showed greater variability (88.4–94.0%). LLM performance varied among radiologists: Physician B’s reports yielded the highest mean score (99.9% ± 0.2%), while Physician C’s yielded the lowest (94.4% ± 2.3%). Conclusions: LLMs showed promising results in automated feature extraction from radiology reports, with DeepSeek-R1-Llama3.3 achieving the highest overall score. These models can improve clinical workflows by structuring unstructured medical text. However, a preliminary analysis of reporting styles is necessary to identify potential challenges and optimize prompt design to better align with individual physician reporting styles. This approach can further enhance the robustness and adaptability of LLM-driven clinical data extraction.
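As a concrete illustration of the extraction setup this abstract describes, the sketch below prompts an open-weight LLM for JSON-structured prostate MRI features. The endpoint URL, model name, and field schema are hypothetical stand-ins, not the authors' pipeline.

```python
# Minimal sketch: structured extraction from a free-text report via an
# OpenAI-compatible local inference server (endpoint/model are hypothetical).
import json
import requests

PROMPT = """Extract the following fields from the prostate MRI report below
and answer with JSON only: prostate_dimensions_mm (list of 3 numbers),
volume_ml (number), psa_density (number), lesions (list of objects with
pirads_score and diameter_mm).

Report:
{report}"""

def extract_features(report_text: str) -> dict:
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # hypothetical local server
        json={
            "model": "qwen2.5-14b-instruct",          # any open-weight model
            "messages": [{"role": "user", "content": PROMPT.format(report=report_text)}],
            "temperature": 0,                          # deterministic extraction
        },
        timeout=120,
    )
    reply = resp.json()["choices"][0]["message"]["content"]
    return json.loads(reply)  # fails loudly if the model strays from JSON
```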

45 pages, 2926 KiB  
Review
Medical Image Segmentation: A Comprehensive Review of Deep Learning-Based Methods
by Yuxiao Gao, Yang Jiang, Yanhong Peng, Fujiang Yuan, Xinyue Zhang and Jianfeng Wang
Tomography 2025, 11(5), 52; https://doi.org/10.3390/tomography11050052 - 30 Apr 2025
Cited by 8 | Viewed by 7649
Abstract
Medical image segmentation is a critical application of computer vision in the analysis of medical images. Its primary objective is to isolate regions of interest from the background, thereby assisting clinicians in accurately identifying lesions and determining their sizes, locations, and relationships with surrounding tissues. However, compared to natural images, medical images present unique challenges, such as low resolution, poor contrast, inconsistency, and scattered target regions. Furthermore, the accuracy and stability of segmentation results are subject to more stringent requirements. In recent years, with the widespread application of Convolutional Neural Networks (CNNs) in computer vision, deep learning-based methods for medical image segmentation have become a focal point of research. This paper categorizes, reviews, and summarizes the current representative methods and the state of research in the field of medical image segmentation. A comparative analysis of relevant experiments is presented, along with an introduction to commonly used public datasets, performance evaluation metrics, and loss functions in medical image segmentation. Finally, potential future research directions and development trends in this field are discussed.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
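Among the loss functions such reviews survey, the soft Dice loss is a representative example; a minimal sketch follows, with conventional shape and smoothing choices that are assumptions rather than details from the paper.

```python
# Minimal sketch of the soft Dice loss for binary segmentation.
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """pred: sigmoid probabilities (N, 1, H, W); target: binary masks, same shape."""
    pred = pred.flatten(1)                         # (N, H*W)
    target = target.flatten(1)
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()                         # minimize 1 - Dice to maximize overlap
```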

21 pages, 3563 KiB  
Article
Performance Evaluation of Image Segmentation Using Dual-Energy Spectral CT Images with Deep Learning Image Reconstruction: A Phantom Study
by Haoyan Li, Zhenpeng Chen, Shuaiyi Gao, Jiaqi Hu, Zhihao Yang, Yun Peng and Jihang Sun
Tomography 2025, 11(5), 51; https://doi.org/10.3390/tomography11050051 - 27 Apr 2025
Viewed by 795
Abstract
Objectives: To evaluate medical image segmentation performance on monochromatic images at various energy levels. Methods: The low-density module (25 mm in diameter, 6 Hounsfield Units (HU) of density difference from the background) from the ACR464 phantom was scanned at both 10 mGy and 5 mGy dose levels. Virtual monoenergetic images (VMIs) at energy levels of 40, 50, 60, 68, 74, and 100 keV were generated. The images at 10 mGy reconstructed with 50% adaptive statistical iterative reconstruction veo (ASIR-V50%) were used to train an image segmentation model based on U-Net. The evaluation set used 5 mGy VMIs reconstructed with various algorithms: FBP, ASIR-V50%, ASIR-V100%, and deep learning image reconstruction (DLIR) at low (DLIR-L), medium (DLIR-M), and high (DLIR-H) strength levels. U-Net was employed as a tool to compare algorithm performance. Image noise and segmentation metrics, such as the DICE coefficient, intersection over union (IOU), sensitivity, and Hausdorff distance, were calculated to assess both image quality and segmentation performance. Results: DLIR-M and DLIR-H consistently achieved lower image noise and better segmentation performance, with the best results observed at 60 keV, and DLIR-H had the lowest image noise across all energy levels. The performance metrics, including IOU, DICE, and sensitivity, ranked the energy levels in descending order as 60 keV, 68 keV, 50 keV, 74 keV, 40 keV, and 100 keV. Specifically, at 60 keV, the average IOU values for each reconstruction method were 0.60 for FBP, 0.67 for ASIR-V50%, 0.68 for ASIR-V100%, 0.72 for DLIR-L, 0.75 for DLIR-M, and 0.75 for DLIR-H. The average DICE values were 0.75, 0.80, 0.82, 0.83, 0.85, and 0.86, and the sensitivity values were 0.93, 0.91, 0.96, 0.95, 0.98, and 0.98, respectively. Conclusions: For low-density, non-enhancing objects under a low dose, the 60 keV VMIs performed better in automatic segmentation. The DLIR-M and DLIR-H algorithms delivered the best results, with DLIR-H providing the lowest image noise and highest sensitivity.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
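For reference, the segmentation metrics named in this abstract (DICE, IOU, sensitivity, Hausdorff distance) can be computed for binary masks roughly as follows; this is a generic sketch, not the authors' evaluation code.

```python
# Generic sketch: common segmentation metrics for non-empty binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Hausdorff over all foreground voxels; boundary extraction is also common.
    p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
    hd = max(directed_hausdorff(p_pts, g_pts)[0], directed_hausdorff(g_pts, p_pts)[0])
    return {
        "dice": 2 * tp / (pred.sum() + gt.sum()),
        "iou": tp / union,
        "sensitivity": tp / (tp + fn),
        "hausdorff": hd,
    }
```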

16 pages, 11837 KiB  
Article
Deep Learning-Driven Abbreviated Shoulder MRI Protocols: Diagnostic Accuracy in Clinical Practice
by Giovanni Foti, Flavio Spoto, Thomas Mignolli, Alessandro Spezia, Luigi Romano, Guglielmo Manenti, Nicolò Cardobi and Paolo Avanzi
Tomography 2025, 11(4), 48; https://doi.org/10.3390/tomography11040048 - 17 Apr 2025
Viewed by 950
Abstract
Background: Deep learning (DL) reconstruction techniques have shown promise in reducing MRI acquisition times while maintaining image quality. However, the impact of different acceleration factors on diagnostic accuracy in shoulder MRI remains unexplored in clinical practice. Purpose: The purpose of this study was to evaluate the diagnostic accuracy of 2-fold and 4-fold DL-accelerated shoulder MRI protocols compared to standard protocols in clinical practice. Materials and Methods: In this prospective single-center study, 88 consecutive patients (49 males, 39 females; mean age, 51 years) underwent shoulder MRI examinations using standard, 2-fold (DL2), and 4-fold (DL4) accelerated protocols between June 2023 and January 2024. Four independent radiologists (experience range: 4–25 years) evaluated the presence of bone marrow edema (BME), rotator cuff tears, and labral lesions. Sensitivity, specificity, and interobserver agreement were calculated. Diagnostic confidence was assessed using a 4-point scale. The impact of reader experience was analyzed by stratifying the radiologists into groups with ≤10 and >10 years of experience. Results: Both accelerated protocols demonstrated high diagnostic accuracy. For BME detection, DL2 and DL4 achieved 100% sensitivity and specificity. In rotator cuff evaluation, DL2 showed a sensitivity of 98–100% and specificity of 99–100%, while DL4 maintained a sensitivity of 95–98% and specificity of 99–100%. Labral tear detection showed perfect sensitivity (100%) with DL2 and slightly lower sensitivity (89–100%) with DL4. Interobserver agreement was excellent across the protocols (Kendall’s W = 0.92–0.98). Reader experience did not significantly impact diagnostic performance. The area under the ROC curve was 0.94 for DL2 and 0.90 for DL4 (p = 0.32). Clinical Implications: The implementation of DL-accelerated protocols, particularly DL2, could improve workflow efficiency by reducing acquisition times by 50% while maintaining diagnostic reliability. This could increase patient throughput and accessibility to MRI examinations without compromising diagnostic quality. Conclusions: DL-accelerated shoulder MRI protocols demonstrate high diagnostic accuracy, with DL2 showing performance nearly identical to that of the standard protocol. While DL4 maintains acceptable diagnostic accuracy, it shows a slight sensitivity reduction for subtle pathologies, particularly among less experienced readers. The DL2 protocol represents an optimal balance between acquisition time reduction and diagnostic confidence.
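A minimal sketch of Kendall's W, the interobserver-agreement statistic reported above, computed from a raters-by-cases score matrix; the toy scores are invented, and the tie correction is omitted for brevity.

```python
# Minimal sketch: Kendall's coefficient of concordance W (no tie correction).
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores: np.ndarray) -> float:
    """scores: (m raters, n cases); returns W in [0, 1]."""
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)  # rank cases within each rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Toy example: four readers scoring confidence for five cases on a 4-point scale.
scores = np.array([[4, 3, 4, 2, 1],
                   [4, 3, 3, 2, 1],
                   [4, 2, 4, 2, 1],
                   [3, 3, 4, 2, 1]])
print(kendalls_w(scores))  # close to 1 indicates strong agreement
```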

13 pages, 1951 KiB  
Article
Use of Open-Source Large Language Models for Automatic Synthesis of the Entire Imaging Medical Records of Patients: A Feasibility Study
by Fabio Mattiussi, Francesco Magoga, Simone Schiaffino, Vittorio Ferrari, Ermidio Rezzonico, Filippo Del Grande and Stefania Rizzo
Tomography 2025, 11(4), 47; https://doi.org/10.3390/tomography11040047 - 16 Apr 2025
Viewed by 1000
Abstract
Background/Objectives: Reviewing the entire history of imaging exams in a single patient’s records is an essential step in clinical practice, but it is time- and resource-consuming, with potential negative effects on workflow and on the quality of medical decisions. The main objective of this study was to evaluate the applicability of three open-source large language models (LLMs) for the automatic generation of concise summaries of patients’ imaging records. Secondary objectives were to assess correlations among the LLMs and to evaluate the length reduction provided by each model. Methods: Three state-of-the-art open-source large language models were selected: Llama 3.2 11B, Mistral 7B, and Falcon 7B. Each model was given a set of radiology reports. The summaries produced by the models were evaluated by two experienced radiologists and one experienced clinical physician using standardized metrics. Results: A variable number of radiological reports (n = 12–56) from four patients were selected and evaluated. The summaries generated by the three LLMs showed a good level of accuracy compared with the information contained in the original reports, with positive ratings on both clinical relevance and ease of reference. According to the experts’ evaluations, the use of LLM-generated summaries could help reduce the time spent reviewing previous imaging examinations while preserving the quality of clinical data. Conclusions: Our results suggest that LLMs are able to generate summaries of patients’ imaging histories, and these summaries could improve radiology workflow by making it easier to manage large volumes of reports.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
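A minimal sketch of the summarization task, assuming some callable `llm` that wraps a local open-source model; the prompt wording and the `llm` interface are hypothetical, not the study's setup.

```python
# Minimal sketch: order a patient's reports chronologically and request a
# concise imaging history from a local LLM (the `llm` callable is a stand-in).
from datetime import date

def summarize_history(reports: list[tuple[date, str]], llm) -> str:
    reports = sorted(reports, key=lambda r: r[0])    # oldest exam first
    body = "\n\n".join(f"[{d.isoformat()}] {text}" for d, text in reports)
    prompt = (
        "Summarize this patient's imaging history in under 200 words, "
        "keeping dates, modalities, and key findings:\n\n" + body
    )
    return llm(prompt)
```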

12 pages, 3361 KiB  
Article
Deep Learning-Based Tumor Segmentation of Murine Magnetic Resonance Images of Prostate Cancer Patient-Derived Xenografts
by Satvik Nayak, Henry Salkever, Ernesto Diaz, Avantika Sinha, Nikhil Deveshwar, Madeline Hess, Matthew Gibbons, Sule Sahin, Abhejit Rajagopal, Peder E. Z. Larson and Renuka Sriram
Tomography 2025, 11(3), 21; https://doi.org/10.3390/tomography11030021 - 22 Feb 2025
Viewed by 1110
Abstract
Background/Objective: Longitudinal in vivo studies of murine xenograft models are widely utilized in oncology to study cancer biology and develop therapies. Magnetic resonance imaging (MRI) of these tumors is an invaluable tool for monitoring tumor growth and characterizing the tumors. Methods: In this work, a pipeline for automating the segmentation of xenografts in mouse models was developed. T2-weighted (T2-wt) MRI images from mice implanted with six different prostate cancer patient-derived xenografts (PDXs) in the kidneys, liver, and tibia were used. The segmentation pipeline included a slice classifier to identify the slices that contained tumors, followed by training and validation of several U-Net-based segmentation architectures. Multiple combinations of algorithm and training images from different sites were evaluated for inference quality. Results and Conclusions: The slice classifier network achieved 90% accuracy in identifying slices containing tumors. Among the segmentation architectures tested, the dense residual recurrent U-Net achieved the highest performance on kidney tumors. When evaluated across the kidneys, tibia, and liver, this architecture performed best when trained on all data rather than on data from a single site (and inferring on multi-site tumor images), achieving a Dice score of 0.924 across the test set.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
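The two-stage pipeline described above might look roughly like this at inference time; `classifier` and `segmenter` stand in for the trained models, and the thresholding scheme is an assumption.

```python
# Sketch: a slice classifier gates which slices reach a U-Net-style segmenter.
import torch

@torch.no_grad()
def segment_volume(volume: torch.Tensor, classifier, segmenter, thresh: float = 0.5):
    """volume: (S, 1, H, W) stack of T2-weighted slices; returns (S, 1, H, W) masks."""
    masks = torch.zeros_like(volume)
    has_tumor = torch.sigmoid(classifier(volume)).squeeze(1) > thresh  # (S,) gate
    if has_tumor.any():
        # Segment only the slices flagged as containing tumor.
        masks[has_tumor] = (torch.sigmoid(segmenter(volume[has_tumor])) > thresh).float()
    return masks
```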

18 pages, 2544 KiB  
Article
Graph Neural Network Learning on the Pediatric Structural Connectome
by Anand Srinivasan, Rajikha Raja, John O. Glass, Melissa M. Hudson, Noah D. Sabin, Kevin R. Krull and Wilburn E. Reddick
Tomography 2025, 11(2), 14; https://doi.org/10.3390/tomography11020014 - 29 Jan 2025
Viewed by 1180
Abstract
Purpose: Sex classification is a major benchmark of previous work in learning on the structural connectome, a naturally occurring brain graph that has proven useful for studying cognitive function and impairment. While graph neural networks (GNNs), specifically graph convolutional networks (GCNs), have lately gained popularity for their effectiveness in learning on graph data, achieving strong performance in adult sex classification tasks, their application to pediatric populations remains unexplored. We seek to characterize the capacity of GNN models to learn connectomic patterns on pediatric data through an exploration of training techniques and architectural design choices. Methods: Two datasets were utilized: an adult BRIGHT dataset (N = 147 Hodgkin’s lymphoma survivors and N = 162 age-similar controls) and a pediatric Human Connectome Project in Development (HCP-D) dataset (N = 135 healthy subjects). Two GNN models (a simple GCN and a residual GCN), a deep neural network (multi-layer perceptron), and two standard machine learning models (random forest and support vector machine) were trained. Architecture exploration experiments evaluated the impact of network depth, pooling techniques, and skip connections on the ability of GNN models to capture connectomic patterns. Models were assessed across a range of metrics, including accuracy, AUC score, and adversarial robustness. Results: GNNs outperformed the other models across both populations. Notably, adult GNN models achieved 85.1% accuracy in sex classification on unseen adult participants, consistent with prior studies. Extending the adult models to the pediatric dataset and training on the smaller pediatric dataset alone both yielded sub-optimal performance. Using adult data to augment pediatric models, the best GNN achieved comparable accuracy across unseen pediatric (83.0%) and adult (81.3%) participants. Adversarial sensitivity experiments showed that the simple GCN remained the most robust to perturbations, followed by the multi-layer perceptron and the residual GCN. Conclusions: These findings underscore the potential of GNNs in advancing our understanding of sex-specific neurological development and disorders and highlight the importance of data augmentation in overcoming the challenges associated with small pediatric datasets. Further, they highlight relevant tradeoffs in the design landscape of connectomic GNNs: for example, while the simpler GNN model tested exhibits marginally worse accuracy and AUC scores than the more complex residual GNN, it demonstrates a higher degree of adversarial robustness.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
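A minimal sketch of one graph-convolution layer of the kind applied to structural connectomes (nodes as brain regions, a weighted adjacency from fiber connectivity); this is not the authors' architecture, and the dimensions are hypothetical.

```python
# Minimal sketch: a single GCN layer with symmetric adjacency normalization.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """x: (nodes, in_dim) node features; adj: (nodes, nodes) weighted adjacency."""
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)       # degree^(-1/2), assumes no isolated nodes
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(self.linear(a_norm @ x))    # aggregate neighbors, then transform
```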

12 pages, 6373 KiB  
Article
Impact of Deep Learning 3D CT Super-Resolution on AI-Based Pulmonary Nodule Characterization
by Dongok Kim, Chulkyun Ahn and Jong Hyo Kim
Tomography 2025, 11(2), 13; https://doi.org/10.3390/tomography11020013 - 27 Jan 2025
Viewed by 1500
Abstract
Background/Objectives: Correct pulmonary nodule volumetry and categorization are paramount for accurate diagnosis in lung cancer screening programs. CT scanners with slice thicknesses of multiple millimetres are still common worldwide, and slice thickness has an adverse effect on the accuracy of pulmonary nodule volumetry. Methods: We propose a deep learning-based super-resolution technique to generate thin-slice CT images from thick-slice CT images. Lung nodule volumetry and categorization accuracy were analyzed using commercially available AI-based lung cancer screening software. Results: The accuracy of pulmonary nodule categorization increased from 72.7% to 94.5% when thick-slice CT images were converted to generated thin-slice CT images. Conclusions: Applying super-resolution-based slice generation to thick-slice CT images prior to automatic nodule evaluation significantly increases the accuracy of pulmonary nodule volumetry and the corresponding pulmonary nodule category.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
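A small numerical illustration of why slice thickness degrades volumetry: voxelizing a 4 mm-radius sphere at 5 mm versus 1 mm slices and estimating volume as voxel count times voxel volume. All numbers are hypothetical.

```python
# Toy demonstration: thick slices quantize the z-extent of a nodule, so the
# volume estimate shifts with table position; thin slices are far more stable.
import numpy as np

def sphere_volume_estimate(radius_mm=4.0, pixel_mm=0.7, slice_mm=5.0, z_offset_mm=0.0):
    """Voxelize a sphere and estimate volume as voxel count * voxel volume."""
    z = np.arange(-radius_mm, radius_mm, slice_mm) + slice_mm / 2 + z_offset_mm
    xy = np.arange(-radius_mm, radius_mm, pixel_mm) + pixel_mm / 2
    zz, yy, xx = np.meshgrid(z, xy, xy, indexing="ij")
    inside = xx**2 + yy**2 + zz**2 <= radius_mm**2
    return inside.sum() * pixel_mm * pixel_mm * slice_mm

print(f"analytic volume: {4 / 3 * np.pi * 4.0**3:.0f} mm^3")
for off in (0.0, 1.0, 2.0):  # simulate different table positions
    thick = sphere_volume_estimate(slice_mm=5.0, z_offset_mm=off)
    thin = sphere_volume_estimate(slice_mm=1.0, z_offset_mm=off)
    print(f"offset {off} mm: 5 mm slices -> {thick:.0f} mm^3, 1 mm slices -> {thin:.0f} mm^3")
```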

15 pages, 2408 KiB  
Article
Dual-Stage AI Model for Enhanced CT Imaging: Precision Segmentation of Kidney and Tumors
by Nalan Karunanayake, Lin Lu, Hao Yang, Pengfei Geng, Oguz Akin, Helena Furberg, Lawrence H. Schwartz and Binsheng Zhao
Tomography 2025, 11(1), 3; https://doi.org/10.3390/tomography11010003 - 3 Jan 2025
Cited by 1 | Viewed by 2004
Abstract
Objectives: Accurate kidney and tumor segmentation of computed tomography (CT) scans is vital for diagnosis and treatment, but manual methods are time-consuming and inconsistent, highlighting the value of AI automation. This study develops a fully automated AI model using vision transformers (ViTs) and convolutional neural networks (CNNs) to detect and segment kidneys and kidney tumors in contrast-enhanced CT (CECT) scans, with a focus on improving sensitivity for small, indistinct tumors. Methods: The segmentation framework employs a ViT-based model for the kidney organ, followed by a 3D UNet model with enhanced connections and attention mechanisms for tumor detection and segmentation. Two CECT datasets were used: a public dataset (KiTS23: 489 scans) and a private institutional dataset (Private: 592 scans). The AI model was trained on 389 public scans, with validation performed on the remaining 100 scans and external validation performed on all 592 private scans. Tumors were categorized by TNM staging as small (≤4 cm) (KiTS23: 54%, Private: 41%), medium (>4 cm to ≤7 cm) (KiTS23: 24%, Private: 35%), and large (>7 cm) (KiTS23: 22%, Private: 24%) for detailed evaluation. Results: Kidney and kidney tumor segmentations were evaluated against manual annotations as the reference standard. The model achieved a Dice score of 0.97 ± 0.02 for kidney organ segmentation. For tumor detection and segmentation on the KiTS23 dataset, the sensitivities and average false-positive rates per patient were as follows: 0.90 and 0.23 for small tumors, 1.0 and 0.08 for medium tumors, and 0.96 and 0.04 for large tumors. The corresponding Dice scores were 0.84 ± 0.11, 0.89 ± 0.07, and 0.91 ± 0.06, respectively. External validation on the private data confirmed the model’s effectiveness, achieving the following sensitivities and average false-positive rates per patient: 0.89 and 0.15 for small tumors, 0.99 and 0.03 for medium tumors, and 1.0 and 0.01 for large tumors. The corresponding Dice scores were 0.84 ± 0.08, 0.89 ± 0.08, and 0.92 ± 0.06. Conclusions: The proposed model demonstrates consistent and robust performance in segmenting kidneys and kidney tumors of various sizes, with effective generalization to unseen data. This underscores the model’s significant potential for clinical integration, offering enhanced diagnostic precision and reliability in radiological assessments.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
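Lesion-level sensitivity and false positives per patient, as reported above, can be scored by matching connected components between predicted and reference masks; a generic sketch, not the study's code.

```python
# Generic sketch: lesion-level detection scoring from binary 3D masks.
import numpy as np
from scipy import ndimage

def detection_stats(pred_mask: np.ndarray, gt_mask: np.ndarray):
    pred_mask, gt_mask = pred_mask.astype(bool), gt_mask.astype(bool)
    pred_lbl, n_pred = ndimage.label(pred_mask)   # connected components
    gt_lbl, n_gt = ndimage.label(gt_mask)
    detected = sum(
        1 for i in range(1, n_gt + 1)
        if (pred_mask & (gt_lbl == i)).any()      # any overlap counts as a hit
    )
    false_pos = sum(
        1 for j in range(1, n_pred + 1)
        if not (gt_mask & (pred_lbl == j)).any()  # predicted component with no GT overlap
    )
    sensitivity = detected / n_gt if n_gt else 1.0
    return sensitivity, false_pos                 # false_pos is per-patient here
```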

16 pages, 2495 KiB  
Article
Evaluating Medical Image Segmentation Models Using Augmentation
by Mattin Sayed, Sari Saba-Sadiya, Benedikt Wichtlhuber, Julia Dietz, Matthias Neitzel, Leopold Keller, Gemma Roig and Andreas M. Bucher
Tomography 2024, 10(12), 2128-2143; https://doi.org/10.3390/tomography10120150 - 23 Dec 2024
Viewed by 2102
Abstract
Background: Medical image segmentation is an essential step in both clinical and research applications, and automated segmentation models—such as TotalSegmentator—have become ubiquitous. However, robust methods for validating the accuracy of these models remain limited, and manual inspection is often necessary before the segmentation masks produced by these models can be used. Methods: To address this gap, we have developed a novel validation framework for segmentation models, leveraging data augmentation to assess model consistency. We produced segmentation masks for both the original and augmented scans, and we calculated the alignment metrics between these segmentation masks. Results: Our results demonstrate a strong correlation between the segmentation quality of the original scan and the average alignment between the masks of the original and augmented CT scans. These results were further validated by supporting metrics, including the coefficient of variance and the average symmetric surface distance, indicating that agreement with augmented-scan segmentation masks is a valid proxy for segmentation quality. Conclusions: Overall, our framework offers a pipeline for evaluating segmentation performance without relying on manually labeled ground truth data, establishing a foundation for future advancements in automated medical image analysis.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
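A minimal sketch of the consistency check the framework is built on, assuming a left-right flip as the augmentation and a `model` callable standing in for the segmentation model.

```python
# Minimal sketch: segment the original and an augmented copy, undo the
# augmentation on the second mask, and use Dice agreement as a quality proxy.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def consistency_score(scan: np.ndarray, model) -> float:
    mask = model(scan)
    aug_mask = model(np.flip(scan, axis=-1))  # augmentation: left-right flip
    aug_mask = np.flip(aug_mask, axis=-1)     # map the mask back to original space
    return dice(mask, aug_mask)               # low agreement flags cases for manual review
```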

18 pages, 3821 KiB  
Article
A Hybrid CNN-Transformer Model for Predicting N Staging and Survival in Non-Small Cell Lung Cancer Patients Based on CT-Scan
by Lingfei Wang, Chenghao Zhang and Jin Li
Tomography 2024, 10(10), 1676-1693; https://doi.org/10.3390/tomography10100123 - 10 Oct 2024
Cited by 5 | Viewed by 4716
Abstract
Accurate assessment of N staging in patients with non-small cell lung cancer (NSCLC) is critical for the development of effective treatment plans, the optimization of therapeutic strategies, and the enhancement of patient survival rates. This study proposes a hybrid model based on 3D convolutional neural networks (CNNs) and transformers for predicting the N stage and survival rates of NSCLC patients using the NSCLC Radiogenomics and NSCLC-Radiomics datasets. The model achieved accuracies of 0.805, 0.828, and 0.819 for the training, validation, and testing sets, respectively. By leveraging the strengths of CNNs in local feature extraction and the superior performance of transformers in global information modeling, the model significantly enhances predictive accuracy and efficacy. A comparative analysis with traditional CNN and transformer architectures demonstrates that the CNN-transformer hybrid model outperforms both in N-staging prediction. Furthermore, this study extracts the one-year survival rate as a feature and employs the Lasso–Cox model for survival predictions at various time intervals (1, 3, 5, and 7 years), with all survival prediction p-values below 0.05, illustrating the time-dependent nature of survival analysis. The application of time-dependent ROC curves further validates the model’s accuracy and reliability for survival prediction. Overall, this research provides innovative methodologies and new insights for the early diagnosis and prognostic evaluation of NSCLC.
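For the survival arm, an L1-penalized Cox model ("Lasso–Cox") can be fit with the lifelines library roughly as below; the feature names, penalty strength, and toy data are assumptions, not values from the study.

```python
# Sketch: L1-penalized Cox regression on imaging-derived features (toy data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "feat_1": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],  # hypothetical imaging feature
    "feat_2": [1.1, 0.3, 0.6, 0.2, 1.4, 0.5],
    "time_months": [12, 34, 7, 58, 22, 41],    # follow-up time
    "event": [1, 0, 1, 0, 1, 0],               # 1 = event observed, 0 = censored
})

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)  # pure L1 penalty -> Lasso-Cox
cph.fit(df, duration_col="time_months", event_col="event")
print(cph.summary[["coef", "p"]])               # coefficients and p-values
```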

13 pages, 2490 KiB  
Article
A Joint Classification Method for COVID-19 Lesions Based on Deep Learning and Radiomics
by Guoxiang Ma, Kai Wang, Ting Zeng, Bin Sun and Liping Yang
Tomography 2024, 10(9), 1488-1500; https://doi.org/10.3390/tomography10090109 - 5 Sep 2024
Viewed by 1515
Abstract
Pneumonia caused by the novel coronavirus is an acute respiratory infectious disease. Its rapid spread over a short period of time has brought great challenges to global public health. Deep learning and radiomics methods can effectively distinguish subtypes of lung disease, provide better prognostic accuracy, and assist clinicians in adjusting clinical management in a timely manner. The main goal of this study was to verify the performance of deep learning and radiomics methods in the classification of COVID-19 lesions and to reveal the image characteristics of COVID-19 lung disease. An MFPN neural network model was proposed to extract the depth features of lesions, and six machine-learning methods were used to compare the classification performance of deep features, key radiomics features, and combined features for COVID-19 lung lesions. The results show that, in the COVID-19 image classification task, the method combining radiomics and deep features achieves good classification results and has potential clinical application value.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
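The combined-feature experiment can be sketched as concatenating radiomics and deep feature vectors before a classical classifier; the arrays below are random stand-ins for the study's features.

```python
# Sketch: joint classification on concatenated radiomics + deep features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(100, 30))   # e.g., shape/texture features per lesion
deep = rng.normal(size=(100, 128))       # e.g., CNN embedding per lesion
labels = rng.integers(0, 2, size=100)    # lesion subtype (toy labels)

combined = np.hstack([radiomics, deep])  # the joint feature representation
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, combined, labels, cv=5).mean())  # CV accuracy
```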

25 pages, 2865 KiB  
Review
Machine Learning and Deep Learning Approaches in Lifespan Brain Age Prediction: A Comprehensive Review
by Yutong Wu, Hongjian Gao, Chen Zhang, Xiangge Ma, Xinyu Zhu, Shuicai Wu and Lan Lin
Tomography 2024, 10(8), 1238-1262; https://doi.org/10.3390/tomography10080093 - 12 Aug 2024
Cited by 6 | Viewed by 4137
Abstract
The concept of ‘brain age’, derived from neuroimaging data, serves as a crucial biomarker reflecting cognitive vitality and neurodegenerative trajectories. In the past decade, the integration of machine learning (ML) and deep learning (DL) has transformed the field, providing advanced models for brain age estimation. However, achieving precise brain age prediction across all ages remains a significant analytical challenge. This comprehensive review scrutinizes advancements in ML- and DL-based brain age prediction, analyzing 52 peer-reviewed studies from 2020 to 2024. It assesses various model architectures, highlighting their effectiveness and nuances in lifespan brain age studies. A comparison of ML and DL reveals their respective strengths in forecasting and their methodological limitations. Finally, key findings from the reviewed articles are summarized, and a number of major issues related to ML/DL-based lifespan brain age prediction are discussed. Through this study, we aim to synthesize the current state of brain age prediction, emphasizing both advancements and persistent challenges, to guide future research and technological development and to improve early intervention strategies for neurodegenerative diseases.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
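The core workflow the review surveys reduces to regressing chronological age on imaging features and reading the residual as a brain-age gap; a toy sketch with random stand-in features follows.

```python
# Toy sketch: brain-age regression and the brain-age gap biomarker.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 50))      # e.g., regional volumes/thicknesses
age = rng.uniform(8, 90, size=200)         # chronological age (toy values)

X_tr, X_te, y_tr, y_te = train_test_split(features, age, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
brain_age_gap = model.predict(X_te) - y_te  # positive gap: "older-looking" brain
print(f"MAE: {np.abs(brain_age_gap).mean():.1f} years")
```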

17 pages, 4737 KiB  
Article
Novel Deep CNNs Explore Regions, Boundaries, and Residual Learning for COVID-19 Infection Analysis in Lung CT
by Bader Khalid Alshemaimri
Tomography 2024, 10(8), 1205-1221; https://doi.org/10.3390/tomography10080091 - 3 Aug 2024
Cited by 1 | Viewed by 1394
Abstract
COVID-19 poses a global health crisis, necessitating precise diagnostic methods for timely containment. However, accurately delineating COVID-19-affected regions in lung CT scans is challenging due to contrast variations and significant texture diversity. In this regard, this study introduces a novel two-stage classification and segmentation CNN approach for COVID-19 lung radiological pattern analysis. A novel Residual-BRNet is developed to integrate boundary and regional operations with residual learning, capturing key COVID-19 radiological homogeneous regions, texture variations, and structural contrast patterns in the classification stage. Subsequently, infectious CT images undergo lesion segmentation using the newly proposed RESeg segmentation CNN in the second stage. The RESeg leverages both average- and max-pooling implementations to simultaneously learn region homogeneity and boundary-related patterns. Furthermore, novel pixel attention (PA) blocks are integrated into RESeg to effectively address mildly COVID-19-infected regions. The evaluation of the proposed Residual-BRNet CNN in the classification stage demonstrates promising performance, achieving an accuracy of 97.97%, an F1-score of 98.01%, a sensitivity of 98.42%, and an MCC of 96.81%. Meanwhile, PA-RESeg in the segmentation phase achieves optimal segmentation performance, with an IoU score of 98.43% and a Dice similarity score of 95.96% for the lesion region. The framework’s effectiveness in detecting and segmenting COVID-19 lesions highlights its potential for clinical applications.
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
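A pixel-attention block of the general kind PA-RESeg integrates might take this minimal form; the exact design is the authors', so this single-convolution gating is an assumption.

```python
# Minimal sketch: per-pixel attention gating of a feature map.
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # per-pixel score from all channels
            nn.Sigmoid(),                           # attention weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.attn(x)                     # re-weight pixels, e.g., faint lesions
```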

12 pages, 1596 KiB  
Review
Residual Lung Abnormalities in Survivors of Severe or Critical COVID-19 at One-Year Follow-Up Computed Tomography: A Narrative Review Comparing the European and East Asian Experiences
by Andrea Borghesi, Pietro Ciolli, Elisabetta Antonelli, Alessandro Monti, Alessandra Scrimieri, Marco Ravanelli, Roberto Maroldi and Davide Farina
Tomography 2024, 10(1), 25-36; https://doi.org/10.3390/tomography10010003 - 30 Dec 2023
Viewed by 1882
Abstract
The literature reports a significant difference in the medical impact of the coronavirus disease (COVID-19) pandemic between European and East Asian countries; specifically, the mortality rate of COVID-19 in Europe was significantly higher than that in East Asia. Considering such a difference, our narrative review aimed to compare the prevalence and characteristics of residual lung abnormalities at one-year follow-up computed tomography (CT) after severe or critical COVID-19 in survivors from European and East Asian countries. A literature search was performed to identify articles focusing on the prevalence and characteristics of CT lung abnormalities in survivors of severe or critical COVID-19. Database analysis identified 16 research articles, 9 from Europe and 7 from East Asia (all from China). Our analysis found a higher prevalence of CT lung abnormalities in European than in Chinese studies (82% vs. 52%). While the most prevalent lung abnormalities in Chinese studies were ground-glass opacities (35%), the most prevalent lung abnormalities in European studies were linear (59%) and reticular opacities (55%), followed by bronchiectasis (46%). Although our findings require confirmation, the higher prevalence and severity of lung abnormalities in European than in Chinese survivors of COVID-19 may reflect greater architectural distortion due to more severe lung damage.
(This article belongs to the Special Issue The Challenge of Advanced Medical Imaging Data Analysis in COVID-19)
