Search Results (49)

Search Parameters:
Keywords = RSNA

14 pages, 2110 KB  
Article
NGBoost Classifier Using Deep Features for Pneumonia Chest X-Ray Classification
by Nagashree Satish Chandra, Shyla Raj and B. S. Mahanand
Appl. Sci. 2025, 15(17), 9821; https://doi.org/10.3390/app15179821 - 8 Sep 2025
Viewed by 854
Abstract
Pneumonia remains a major global health concern, leading to significant mortality and morbidity. The identification of pneumonia on chest X-rays can be difficult due to its similarity to other lung disorders. In this paper, the Natural Gradient Boost (NGBoost) classifier is applied to deep features obtained from the ResNet50 model to classify chest X-ray images as normal or pneumonia-affected. The NGBoost classifier, a probabilistic machine learning model, is used in this study to evaluate the discriminative power of handcrafted features such as Haar, shape, and texture features, and of deep features obtained from convolutional neural network models such as ResNet50, DenseNet121, and VGG16. The dataset used in this study is obtained from the RSNA pneumonia challenge and consists of 26,684 chest X-ray images. The experimental results show that the NGBoost classifier obtained an accuracy of 0.98 using deep features extracted from the ResNet50 model. From this analysis, it is found that deep features play an important role in pneumonia chest X-ray classification. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
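The two-stage pipeline this abstract describes (a frozen CNN as feature extractor, a probabilistic classifier on top) can be sketched as follows. This is an illustrative sketch, not the authors' code: random Gaussian blobs stand in for ResNet50 deep features, and a tiny NumPy logistic regression stands in for NGBoost's probabilistic output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "deep features": in the paper these come from ResNet50; here two
# Gaussian blobs imitate normal vs. pneumonia-affected feature vectors.
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(2.0, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

# Minimal probabilistic classifier (logistic regression via gradient descent);
# like NGBoost, it returns class probabilities rather than hard labels.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    z = np.clip(X @ w + b, -30, 30)        # clip logits to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))           # predicted P(pneumonia)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

proba = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
accuracy = np.mean((proba > 0.5) == y)
```

In the actual study, the `ngboost` package's classifier would replace the hand-rolled model and additionally fit a full predictive distribution over the labels.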

15 pages, 618 KB  
Article
Artificial Intelligence for Individualized Radiological Dialogue: The Impact of RadioBot on Precision-Driven Medical Practices
by Amato Infante, Alessandro Perna, Sabrina Chiloiro, Giammaria Marziali, Matia Martucci, Luigi Demarchis, Biagio Merlino, Luigi Natale and Simona Gaudino
J. Pers. Med. 2025, 15(8), 363; https://doi.org/10.3390/jpm15080363 - 8 Aug 2025
Viewed by 669
Abstract
Background/Objectives: Radiology often presents communication challenges due to its technical complexity, particularly for patients, trainees, and non-specialist clinicians. This study aims to evaluate the effectiveness of RadioBot, an AI-powered chatbot developed on the Botpress platform, in enhancing radiological communication through natural language processing (NLP). Methods: RadioBot was designed to provide context-sensitive responses based on guidelines from the American College of Radiology (ACR) and the Radiological Society of North America (RSNA). It addresses queries related to imaging indications, contraindications, preparation, and post-procedural care. A structured evaluation was conducted with twelve participants—patients, residents, and radiologists—who assessed the chatbot using a standardized quality and satisfaction scale. Results: The chatbot received high satisfaction scores, particularly from patients (mean = 4.425) and residents (mean = 4.250), while radiologists provided more critical feedback (mean = 3.775). Users appreciated the system’s clarity, accessibility, and its role in reducing informational bottlenecks. The perceived usefulness of the chatbot inversely correlated with the user’s level of expertise, serving as an educational tool for novices and a time-saving reference for experts. Conclusions: RadioBot demonstrates strong potential in improving radiological communication and supporting clinical workflows, especially with patients where it plays an important role in personalized medicine by framing radiology data within each individual’s cognitive and emotional context, which improves understanding and reduces associated diagnostic anxiety. Despite limitations such as occasional contextual incoherence and limited multimodal capabilities, the system effectively disseminates radiological knowledge. 
Future developments should focus on enhancing personalization based on user specialization and exploring alternative platforms to optimize performance and user experience. Full article

21 pages, 4793 KB  
Article
Deep Learning for Glioblastoma Multiforme Detection from MRI: A Statistical Analysis for Demographic Bias
by Kebin Contreras, Julio Gutierrez-Rengifo, Oscar Casanova-Carvajal, Angel Luis Alvarez, Patricia E. Vélez-Varela and Ana Lorena Urbano-Bojorge
Appl. Sci. 2025, 15(11), 6274; https://doi.org/10.3390/app15116274 - 3 Jun 2025
Cited by 2 | Viewed by 1450
Abstract
Glioblastoma, IDH-wildtype (GBM), is the most aggressive and complex brain tumour classified by the World Health Organization (WHO), characterised by high mortality rates and diagnostic limitations inherent to invasive conventional procedures. Early detection is essential for improving patient outcomes, underscoring the need for non-invasive diagnostic tools. This study presents a convolutional neural network (CNN) specifically optimised for GBM detection from T1-weighted magnetic resonance imaging (MRI), with systematic evaluations of layer depth, activation functions, and hyperparameters. The model was trained on the RSNA-MICCAI data set and externally validated on the Erasmus Glioma Database (EGD), which includes gliomas of various grades and preserves cranial structures, unlike the skull-stripped RSNA-MICCAI images. This morphological discrepancy demonstrates the generalisation capacity of the model across anatomical and acquisition differences, achieving an F1-score of 0.88. Furthermore, statistical tests, such as Shapiro–Wilk, Mann–Whitney U, and Chi-square, confirmed the absence of demographic bias in model predictions, based on p-values, confidence intervals, and statistical power analyses supporting its demographic fairness. The proposed model achieved an area under the curve–receiver operating characteristic (AUC-ROC) of 0.63 on the RSNA-MICCAI test set, surpassing all prior results submitted to the BraTS 2021 challenge, and establishing a reliable and generalisable approach for non-invasive GBM detection. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
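The demographic-bias checks mentioned here can be illustrated with a Pearson chi-square statistic over a prediction-by-group contingency table. The counts below are invented for illustration; the paper additionally uses Shapiro–Wilk and Mann–Whitney U tests.

```python
import numpy as np

def chi2_stat(observed):
    # Pearson chi-square statistic for a contingency table; in practice
    # scipy.stats.chi2_contingency would also return the p-value.
    observed = np.asarray(observed, dtype=float)
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row @ col / observed.sum()   # counts expected under independence
    return ((observed - expected) ** 2 / expected).sum()

# Rows = sex (F, M); columns = model prediction (GBM, no GBM). Counts invented.
fair = chi2_stat([[30, 70], [60, 140]])     # same prediction rate in both groups
biased = chi2_stat([[70, 30], [60, 140]])   # model flags one group far more often
```

A statistic near zero (as for `fair`) is consistent with demographic fairness; a large value (as for `biased`) flags a dependence between group membership and the model's output.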

19 pages, 2258 KB  
Article
A Multidimensional Particle Swarm Optimization-Based Algorithm for Brain MRI Tumor Segmentation
by Zsombor Boga, Csanád Sándor and Péter Kovács
Sensors 2025, 25(9), 2800; https://doi.org/10.3390/s25092800 - 29 Apr 2025
Cited by 4 | Viewed by 1591
Abstract
Particle Swarm Optimization (PSO) has been extensively applied to optimization tasks in various domains, including image segmentation. In this work, we present a clustering-based segmentation algorithm that employs a multidimensional variant of PSO. Unlike conventional methods that require a predefined number of segments, our approach automatically selects an optimal segmentation granularity based on specified similarity criteria. This strategy effectively isolates brain tumors by incorporating both grayscale intensity and spatial information across multiple MRI modalities, allowing the method to be reliably tuned using a limited amount of training data. We further demonstrate how integrating these initial segmentations with a random forest classifier (RFC) enhances segmentation precision. Using MRI data from the RSNA-ASNR-MICCAI brain tumor segmentation (BraTS) challenge, our method achieves robust results with reduced reliance on extensive labeled datasets, offering a more efficient path toward accurate, clinically relevant tumor segmentation. Full article
(This article belongs to the Special Issue Sensors and Machine-Learning Based Signal Processing)
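A minimal one-dimensional PSO loop, reduced to its core update rule, looks like the following. The quadratic objective is a stand-in for the segmentation-similarity criterion the paper optimizes, and all parameter values (inertia 0.7, acceleration 1.5) are illustrative, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Toy objective standing in for a segmentation-similarity criterion;
    # the optimum is at x = 3.
    return (x - 3.0) ** 2

n_particles, n_iters = 20, 200
pos = rng.uniform(-10, 10, n_particles)
vel = np.zeros(n_particles)
pbest = pos.copy()                              # per-particle best position
gbest = pbest[np.argmin(fitness(pbest))]        # swarm-wide best position

for _ in range(n_iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    # Velocity update: inertia + pull toward personal best + pull toward global best.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    better = fitness(pos) < fitness(pbest)
    pbest = np.where(better, pos, pbest)
    gbest = pbest[np.argmin(fitness(pbest))]
```

The multidimensional variant in the paper additionally lets particles move across search spaces of different dimensionality, which is how the number of segments is selected automatically.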

17 pages, 5725 KB  
Article
Classification of the ICU Admission for COVID-19 Patients with Transfer Learning Models Using Chest X-Ray Images
by Yun-Chi Lin and Yu-Hua Dean Fang
Diagnostics 2025, 15(7), 845; https://doi.org/10.3390/diagnostics15070845 - 26 Mar 2025
Cited by 2 | Viewed by 1333
Abstract
Objectives: Predicting intensive care unit (ICU) admissions during pandemic outbreaks such as COVID-19 can assist clinicians in early intervention and the better allocation of medical resources. Artificial intelligence (AI) tools are promising for this task, but their development can be hindered by the limited availability of training data. This study aims to explore model development strategies in data-limited scenarios, specifically in detecting the need for ICU admission using chest X-rays of COVID-19 patients by leveraging transfer learning and data extension to improve model performance. Methods: We explored convolutional neural networks (CNNs) pre-trained on either natural images or chest X-rays, fine-tuning them on a relatively limited dataset (COVID-19-NY-SBU, n = 899) of lung-segmented X-ray images for ICU admission classification. To further address data scarcity, we introduced a dataset extension strategy that integrates an additional dataset (MIDRC-RICORD-1c, n = 417) with different but clinically relevant labels. Results: The TorchX-SBU-RSNA and ELIXR-SBU-RSNA models, leveraging X-ray-pre-trained models with our training data extension approach, enhanced ICU admission classification performance from a baseline AUC of 0.66 (56% sensitivity and 68% specificity) to AUCs of 0.77–0.78 (58–62% sensitivity and 78–80% specificity). The gradient-weighted class activation mapping (Grad-CAM) analysis demonstrated that the TorchX-SBU-RSNA model focused more precisely on the relevant lung regions and reduced the distractions from non-relevant areas compared to the natural image-pre-trained model without data expansion. Conclusions: This study demonstrates the benefits of medical image-specific pre-training and strategic dataset expansion in enhancing the model performance of imaging AI models. Moreover, this approach demonstrates the potential of using diverse but limited data sources to alleviate the limitations of model development for medical imaging AI. 
The developed AI models and training strategies may facilitate more effective and efficient patient management and resource allocation in future outbreaks of infectious respiratory diseases. Full article
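The AUC values quoted above can be computed without plotting an ROC curve, via the rank-sum identity: AUC equals the probability that a randomly chosen positive case scores above a randomly chosen negative one. A NumPy sketch (ties between scores are not handled):

```python
import numpy as np

def roc_auc(scores, labels):
    # AUC via the Mann-Whitney U identity:
    # AUC = P(score of random positive > score of random negative).
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

auc = roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])
```

Here one positive (0.35) is outranked by one negative (0.4), so three of the four positive-negative pairs are ordered correctly and the AUC is 0.75.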

23 pages, 89226 KB  
Article
Improving Vertebral Fracture Detection in C-Spine CT Images Using Bayesian Probability-Based Ensemble Learning
by Abhishek Kumar Pandey, Kedarnath Senapati, Ioannis K. Argyros and G. P. Pateel
Algorithms 2025, 18(4), 181; https://doi.org/10.3390/a18040181 - 21 Mar 2025
Cited by 1 | Viewed by 1144
Abstract
Vertebral fracture (VF) may induce spinal cord injury that can lead to serious consequences, eventually paralyzing all or part of the body depending on the location and severity of the injury. Diagnosis of VFs is crucial at the initial stage, which may be challenging because of the subtle features, noise, and homogeneity present in computed tomography (CT) images. In this study, Wide ResNet-40, DenseNet-121, and EfficientNet-B7 are chosen, fine-tuned, and used as base models, and a Bayesian probabilistic ensemble learning method is proposed for fracture detection in cervical spine CT images. The proposed method considers the base models' predictive uncertainty and combines their predictions to improve overall performance significantly. The method assigns weights to the base learners based on their performance and their confidence in each prediction. To increase the robustness of the proposed model, custom data augmentation techniques are applied in the preprocessing step. This work utilizes 15,123 CT images from the RSNA-2022 C-spine fracture detection challenge and demonstrates superior performance compared to the individual base learners and other existing conventional ensemble methods. The proposed model also outperforms the best state-of-the-art (SOTA) model by 1.62%, 0.51%, and 1.29% in terms of accuracy, specificity, and sensitivity, respectively; furthermore, the AUC score of the best SOTA model lags by 5%. The overall accuracy, specificity, sensitivity, and F1-score of the proposed model are 94.62%, 93.51%, 95.29%, and 93.16%, respectively. Full article
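The confidence-weighted combination described here can be sketched in a few lines. The weighting rule below (validation accuracy divided by predictive entropy) is a hypothetical choice for illustration, not the paper's exact Bayesian scheme, and all numbers are invented.

```python
import numpy as np

# Per-model class probabilities for one CT image (3 base models x 2 classes:
# fracture vs. no fracture). Values invented for illustration.
probs = np.array([[0.80, 0.20],    # e.g. Wide ResNet-40
                  [0.65, 0.35],    # e.g. DenseNet-121
                  [0.90, 0.10]])   # e.g. EfficientNet-B7

# Weight each model by its validation accuracy and by its confidence
# (low predictive entropy = high confidence), then renormalize.
val_acc = np.array([0.92, 0.90, 0.94])          # hypothetical validation scores
entropy = -(probs * np.log(probs)).sum(axis=1)  # per-model predictive entropy
weights = val_acc / entropy
weights /= weights.sum()

ensemble = weights @ probs            # weighted average of class probabilities
prediction = int(np.argmax(ensemble))
```

Because the weights sum to one and each model's probabilities sum to one, the ensemble output is again a valid probability distribution over the two classes.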

14 pages, 10065 KB  
Article
Automatic Evaluation of Bone Age Using Hand Radiographs and Pancorporal Radiographs in Adolescent Idiopathic Scoliosis
by Ifrah Andleeb, Bilal Zahid Hussain, Julie Joncas, Soraya Barchi, Marjolaine Roy-Beaudry, Stefan Parent, Guy Grimard, Hubert Labelle and Luc Duong
Diagnostics 2025, 15(4), 452; https://doi.org/10.3390/diagnostics15040452 - 13 Feb 2025
Cited by 1 | Viewed by 5418
Abstract
Background/Objectives: Adolescent idiopathic scoliosis (AIS) is a complex, three-dimensional spinal deformity that requires monitoring of skeletal maturity for effective management. Accurate bone age assessment is important for evaluating developmental progress in AIS. Traditional methods rely on ossification center observations, but recent advances in deep learning (DL) might pave the way for automatic grading of bone age. Methods: The goal of this research is to propose a new deep neural network (DNN) and evaluate class activation maps for bone age assessment in AIS using hand radiographs. We developed a custom neural network based on DenseNet201 and trained it on the RSNA Bone Age dataset. Results: The model achieves an average mean absolute error (MAE) of 4.87 months on a clinical testing dataset of more than 250 AIS patients. To enhance transparency and trust, we introduced Score-CAM, an explainability tool that reveals the regions of interest contributing to accurate bone age predictions. We compared our model with the BoneXpert system, demonstrating similar performance, which signifies the potential of our approach to reduce inter-rater variability and expedite clinical decision-making. Conclusions: This study outlines the role of deep learning in improving the precision and efficiency of bone age assessment, particularly for AIS patients. Future work involves the detection of other regions of interest and the integration of other ossification centers. Full article
(This article belongs to the Section Medical Imaging and Theranostics)

14 pages, 3305 KB  
Article
Pneumonia Disease Detection Using Chest X-Rays and Machine Learning
by Cathryn Usman, Saeed Ur Rehman, Anwar Ali, Adil Mehmood Khan and Baseer Ahmad
Algorithms 2025, 18(2), 82; https://doi.org/10.3390/a18020082 - 3 Feb 2025
Cited by 6 | Viewed by 5991
Abstract
Pneumonia is a deadly disease affecting millions worldwide, caused by microorganisms and environmental factors. It leads to lung fluid build-up, making breathing difficult, and is a leading cause of death. Early detection and treatment are crucial for preventing severe outcomes. Chest X-rays are commonly used for diagnosis due to their accessibility and low cost; however, detecting pneumonia from X-rays is challenging. Automated methods are needed, and machine learning can solve complex computer vision problems in medical imaging. This research develops a robust machine learning model for the early detection of pneumonia using chest X-rays, leveraging advanced image processing techniques and deep learning algorithms that accurately identify pneumonia patterns, enabling prompt diagnosis and treatment. The research develops both a CNN model from the ground up and a pretrained ResNet-50 model. This study uses the original RSNA pneumonia detection challenge dataset, comprising 26,684 chest X-ray images collected from unique patients (56% male, 44% female), to build a machine learning model for the early detection of pneumonia. The data comprise pneumonia (31.6%) and non-pneumonia (68.8%) cases, providing an effective foundation for model training and evaluation. A reduced version of the dataset was used to examine the impact of data size, and both versions were tested with and without augmentation. The models were compared with existing works and with one another, and the impact of augmentation and dataset size on performance was examined. The best overall accuracy was achieved by the CNN model trained from scratch without augmentation: an accuracy of 0.79, a precision of 0.76, a recall of 0.73, and an F1 score of 0.74. However, the pretrained model, despite lower overall accuracy, was found to be more generalizable. Full article
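The precision, recall, and F1 figures reported above follow directly from confusion-matrix counts; a small self-contained implementation:

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    # Binary classification metrics from confusion-matrix counts
    # (label 1 = pneumonia, label 0 = non-pneumonia).
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy labels: 2 true positives, 1 false positive, 1 false negative.
p, r, f1 = precision_recall_f1([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

With these counts, precision, recall, and F1 all equal 2/3; F1 is the harmonic mean of the first two, so it is pulled toward whichever is lower.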

15 pages, 10087 KB  
Article
BAE-ViT: An Efficient Multimodal Vision Transformer for Bone Age Estimation
by Jinnian Zhang, Weijie Chen, Tanmayee Joshi, Xiaomin Zhang, Po-Ling Loh, Varun Jog, Richard J. Bruce, John W. Garrett and Alan B. McMillan
Tomography 2024, 10(12), 2058-2072; https://doi.org/10.3390/tomography10120146 - 13 Dec 2024
Cited by 3 | Viewed by 3680
Abstract
This research introduces BAE-ViT, a specialized vision transformer model developed for bone age estimation (BAE). This model is designed to efficiently merge image and sex data, a capability not present in traditional convolutional neural networks (CNNs). BAE-ViT employs a novel data fusion method to facilitate detailed interactions between visual and non-visual data by tokenizing non-visual information and concatenating all tokens (visual or non-visual) as the input to the model. The model underwent training on a large-scale dataset from the 2017 RSNA Pediatric Bone Age Machine Learning Challenge, where it exhibited commendable performance, particularly excelling in handling image distortions compared to existing models. The effectiveness of BAE-ViT was further affirmed through statistical analysis, demonstrating a strong correlation with the actual ground-truth labels. This study contributes to the field by showcasing the potential of vision transformers as a viable option for integrating multimodal data in medical imaging applications, specifically emphasizing their capacity to incorporate non-visual elements like sex information into the framework. This tokenization method not only demonstrates superior performance in this specific task but also offers a versatile framework for integrating multimodal data in medical imaging applications. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
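The tokenization-and-concatenation idea is easy to show with array shapes alone. The sketch below uses a standard ViT configuration (196 patch tokens of dimension 768) and an invented two-row sex-embedding table; the coding of sex and the embedding values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_patches, dim = 4, 196, 768

# Visual tokens: one embedding per 16x16 image patch, as in a standard ViT.
patch_tokens = rng.normal(size=(batch, n_patches, dim))

# Non-visual token: sex encoded via a small embedding table, one row per value.
sex = np.array([0, 1, 1, 0])                  # hypothetical coding: 0 = F, 1 = M
sex_embedding = rng.normal(size=(2, dim))     # learned table (random here)
sex_tokens = sex_embedding[sex][:, None, :]   # shape (batch, 1, dim)

# BAE-ViT-style fusion: concatenate along the token axis, so self-attention
# can mix visual and non-visual information at every transformer layer.
tokens = np.concatenate([sex_tokens, patch_tokens], axis=1)
```

The key contrast with CNN pipelines is that the non-visual token participates in every attention layer rather than being appended to a final feature vector.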

20 pages, 6541 KB  
Article
CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound
by Yi-Ming Wang, Chi-Yuan Wang, Kuo-Ying Liu, Yung-Hui Huang, Tai-Been Chen, Kon-Ning Chiu, Chih-Yu Liang and Nan-Han Lu
Tomography 2024, 10(12), 2038-2057; https://doi.org/10.3390/tomography10120145 - 12 Dec 2024
Cited by 6 | Viewed by 2941
Abstract
Background/Objectives: Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities have limitations in regard to their diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures. Materials and Methods: Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address imbalances in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. The performance of the models was evaluated using metrics such as accuracy and the Kappa score. Results: The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging. Conclusions: This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis. Full article

13 pages, 5146 KB  
Article
Tracking the Rareness of Diseases: Improving Long-Tail Medical Detection with a Calibrated Diffusion Model
by Tianjiao Zhang, Chaofan Ma and Yanfeng Wang
Electronics 2024, 13(23), 4693; https://doi.org/10.3390/electronics13234693 - 27 Nov 2024
Viewed by 1067
Abstract
Motivation: Chest X-ray (CXR) is a routine diagnostic X-ray examination for checking and screening various diseases. Automatically localizing and classifying diseases from CXR as a detection task is of much significance for subsequent diagnosis and treatment. Due to the fact that samples of some diseases are difficult to acquire, CXR detection datasets often present a long-tail distribution over different diseases. Objective: The detection performance of tail classes is very poor due to the limited number and diversity of samples in the training dataset and should be improved. Method: In this paper, motivated by a correspondence-based tracking system, we build a pipeline named RaTrack, leveraging a diffusion model to alleviate the tail class degradation problem by aligning the generation process of the tail to the head class. Then, the samples of rare classes are generated to extend the number and diversity of rare samples. In addition, we propose a filtering strategy to control the quality of the generated samples. Results: Extensive experiments on public datasets, Vindr-CXR and RSNA, demonstrate the effectiveness of the proposed method, especially for rare diseases. Full article
(This article belongs to the Special Issue Advances in Visual Tracking: Emerging Techniques and Applications)
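A simpler cousin of this rebalancing idea, inverse-frequency oversampling, shows why tail classes need extra exposure during training. The paper instead synthesizes new tail-class images with a diffusion model, but the balancing arithmetic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Long-tailed label distribution: class 0 is a head class, class 2 a rare one.
labels = np.array([0] * 900 + [1] * 90 + [2] * 10)

# Draw training examples with probability inversely proportional to class
# frequency, so tail classes are seen as often as head classes per epoch.
counts = np.bincount(labels)
weights = 1.0 / counts[labels]
weights /= weights.sum()
sample = rng.choice(len(labels), size=3000, replace=True, p=weights)
sampled_counts = np.bincount(labels[sample], minlength=3)
```

After reweighting, each class is drawn roughly 1000 times out of 3000; the limitation (which motivates generative approaches like the paper's) is that the 10 rare images are merely repeated, not diversified.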

12 pages, 503 KB  
Article
A Critical Examination of Academic Hospital Practices—Paving the Way for Standardized Structured Reports in Neuroimaging
by Ashwag Rafea Alruwaili, Abdullah Abu Jamea, Reema N. Alayed, Alhatoun Y. Alebrah, Reem Y. Alshowaiman, Loulwah A. Almugbel, Ataf G. Heikal, Ahad S. Alkhanbashi and Anwar A. Maflahi
J. Clin. Med. 2024, 13(15), 4334; https://doi.org/10.3390/jcm13154334 - 25 Jul 2024
Cited by 1 | Viewed by 1563
Abstract
Background/Objectives: Imaging studies are often an integral part of patient evaluation and serve as the primary means of communication between radiologists and referring physicians. This study aimed to evaluate brain Magnetic Resonance Imaging (MRI) reports and to determine whether these reports follow a standardized or narrative format. Methods: A series of 466 anonymized MRI reports from an academic hospital were downloaded from the Picture Archiving and Communication System (PACS) in portable document format (PDF) for the period between August 2017 and March 2018. Two hundred brain MRI reports, written by four radiologists, were compared to a structured report template from the Radiological Society of North America (RSNA) and were included, whereas MR-modified techniques, such as MRI orbits and MR venography reports, were excluded (n = 266). All statistical analyses were conducted using Statistical Package for the Social Sciences (SPSS) statistical software (version 16.4.1, MedCalc Software). Results: None of the included studies used the RSNA template for structured reports (SRs). The highest number of brain-reported pathologies was for vascular disease (24%), while the lowest was for infections (3.5%) and motor dysfunction (5.5%). Radiologists specified the Technique (n = 170, 85%), Clinical Information (n = 187, 93.5%), and Impression (n = 197, 98.5%) in almost all reports. However, information in the Findings section was often missing. As hypothesized, radiologists with less experience showed a greater commitment to reporting additional elements than those with more experience. Conclusions: The SR template for medical imaging has been accessible online for over a decade. However, many hospitals and radiologists still use the free-text style for reporting. Our study was conducted in an academic hospital with a fellowship program, and we found that structured reporting had not yet been implemented. 
As the health system transitions towards teleservices and teleradiology, more efforts need to be put into advocating standardized reporting in medical imaging. Full article
(This article belongs to the Section Nuclear Medicine & Radiology)
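A structured report is, at minimum, a fixed set of required sections. A toy completeness check over the four sections the study audited (the section names follow the abstract; the report content is invented):

```python
# Section headings follow the abstract; the report text itself is invented.
REQUIRED_SECTIONS = ["Clinical Information", "Technique", "Findings", "Impression"]

def missing_sections(report: dict) -> list:
    # Return the required sections that are absent or left empty.
    return [s for s in REQUIRED_SECTIONS if not report.get(s, "").strip()]

report = {
    "Clinical Information": "Headache; rule out mass lesion.",
    "Technique": "Brain MRI with and without contrast.",
    "Findings": "",   # left empty -- the section most often incomplete in the study
    "Impression": "No acute intracranial abnormality.",
}
gaps = missing_sections(report)
```

Even this trivial check captures the study's central point: a template makes omissions machine-detectable, which free-text narrative reports do not.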

13 pages, 1064 KB  
Article
The Total Denervation of the Ischemic Kidney Induces Differential Responses in Sodium Transporters’ Expression in the Contralateral Kidney in Goldblatt Rats
by Caroline G. Shimoura, Tales L. Oliveira, Gisele S. Lincevicius, Renato O. Crajoinas, Elizabeth B. Oliveira-Sales, Vanessa A. Varela, Guiomar N. Gomes, Cassia T. Bergamaschi and Ruy R. Campos
Int. J. Mol. Sci. 2024, 25(13), 6962; https://doi.org/10.3390/ijms25136962 - 26 Jun 2024
Cited by 4 | Viewed by 1981
Abstract
The Goldblatt model of hypertension (2K-1C) in rats is characterized by increased renal sympathetic nerve activity (rSNA). We investigated the effects of unilateral renal denervation of the clipped kidney (DNX) on sodium transporters of the unclipped kidneys and on the cardiovascular, autonomic, and renal functions in 2K-1C and control (CTR) rats. The mean arterial pressure (MAP) and rSNA were evaluated in the experimental groups. Kidney function and NHE3, NCC, ENaCβ, and ENaCγ protein expressions were assessed. The glomerular filtration rate (GFR) and renal plasma flow were not changed by DNX, but the urinary (CTR: 0.0042 ± 0.001; 2K-1C: 0.014 ± 0.003; DNX: 0.005 ± 0.0013 mL/min/g renal tissue) and filtration fractions (CTR: 0.29 ± 0.02; 2K-1C: 0.51 ± 0.06; DNX: 0.28 ± 0.04 mL/min/g renal tissue) were normalized. The Na+/H+ exchanger (NHE3) was reduced in 2K-1C, and DNX normalized NHE3 (CTR: 100 ± 6; 2K-1C: 44 ± 14; DNX: 84 ± 13%). Conversely, the Na+/Cl cotransporter (NCC) was increased in 2K-1C and was reduced by DNX (CTR: 94 ± 6; 2K-1C: 144 ± 8; DNX: 60 ± 15%). In conclusion, DNX in Goldblatt rats reduced blood pressure and proteinuria independently of GFR, with distinct regulation of NHE3 and NCC in the unclipped kidneys. Full article
(This article belongs to the Section Molecular Pathology, Diagnostics, and Therapeutics)

13 pages, 1594 KB  
Article
The Effects of Volatile Anesthetics on Renal Sympathetic and Phrenic Nerve Activity during Acute Intermittent Hypoxia in Rats
by Josip Krnić, Katarina Madirazza, Renata Pecotić, Benjamin Benzon, Mladen Carev and Zoran Đogaš
Biomedicines 2024, 12(4), 910; https://doi.org/10.3390/biomedicines12040910 - 19 Apr 2024
Viewed by 2219
Abstract
Coordinated activation of the sympathetic and respiratory nervous systems is crucial in responses to noxious stimuli such as intermittent hypoxia. Acute intermittent hypoxia (AIH) is a valuable model for studying obstructive sleep apnea (OSA) pathophysiology, and stimulation of breathing during AIH is known to elicit long-term changes in respiratory and sympathetic functions. The aim of this study was to record renal sympathetic nerve activity (RSNA) and phrenic nerve activity (PNA) during the AIH protocol in rats exposed to monoanesthesia with sevoflurane or isoflurane. Adult male Sprague-Dawley rats (n = 24; weight: 280–360 g) were selected and randomly divided into three groups: two experimental groups (sevoflurane group, n = 6; isoflurane group, n = 6) and a control group (urethane group, n = 12). The AIH protocol was identical in all studied groups and consisted of delivering five 3-min hypoxic episodes (fraction of inspired oxygen, FiO2 = 0.09), separated by 3-min recovery intervals at FiO2 = 0.5. The volatile anesthetics isoflurane and sevoflurane blunted the RSNA response to AIH in comparison to urethane anesthesia. Additionally, the PNA response to acute intermittent hypoxia was preserved, indicating that the respiratory system response might be more robust than the sympathetic system response during exposure to acute intermittent hypoxia. Full article
(This article belongs to the Section Drug Discovery, Development and Delivery)

6 pages, 993 KB  
Proceeding Paper
Classification of Breast Cancer Using Radiological Society of North America Data by EfficientNet
by Hoang Nhut Huynh, Ngoc An Dang Nguyen, Anh Tu Tran, Van Chinh Nguyen and Trung Nghia Tran
Eng. Proc. 2023, 55(1), 6; https://doi.org/10.3390/engproc2023055006 - 27 Nov 2023
Cited by 2 | Viewed by 1880
Abstract
Breast cancer is a common cancer that affects women all over the world. Detection at an early stage is therefore crucial for reducing the mortality linked to this disease. Mammography is the primary screening method for breast cancer. However, it has drawbacks, including high rates of false-positive and false-negative results, inter-observer variability, and limited sensitivity with dense breast tissue. To address these problems, mammography images from the Radiological Society of North America (RSNA) database were analyzed and classified using deep learning models. This database contains processed and raw RSNA images with annotated malignancies and clinical data. Using deep learning models based on convolutional neural network (CNN) architectures such as the visual geometry group (VGG) network, GoogLeNet, EfficientNet, and residual networks, mammograms were classified into cancer or non-cancer categories. In this study, a novel architecture was proposed that combines CNNs with attention mechanisms to extract and highlight the relevant features. A dataset of 8000 patients with 47,000 images was used to train and assess the model via 5-fold cross-validation. The results outperformed prior methods using the same database, reaching an average accuracy of 95%. The results showed that mammography combined with deep learning methods considerably improves breast cancer detection and diagnosis. Full article
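The 5-fold cross-validation used here partitions the data into five disjoint test folds, training on the remaining four each time. A minimal NumPy version of the splitting logic (not the authors' code; in practice scikit-learn's `KFold` would typically be used):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    # Shuffle once, split into k disjoint test folds, and yield
    # (train, test) index pairs -- the scheme behind k-fold CV.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(kfold_indices(100, k=5))
```

Every sample lands in exactly one test fold, so the five test-fold accuracies average into an estimate that uses the whole dataset for evaluation without ever testing on training data.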
