Artificial Intelligence in Medical Image Processing and Segmentation, 2nd Edition

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (31 January 2025) | Viewed by 22841

Special Issue Editors


Dr. Paolo Zaffino
Guest Editor
Department of Experimental and Clinical Medicine, Magna Graecia University, 88100 Catanzaro, Italy
Interests: medical image processing; radiotherapy; image-guided surgery; artificial intelligence

Prof. Dr. Maria Francesca Spadea
Guest Editor
Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), D-76131 Karlsruhe, Germany
Interests: radiation therapy; biomedical imaging; 3D image processing; biomedical engineering

Special Issue Information

Dear Colleagues,

In recent years, Artificial Intelligence (AI) has revolutionized the field of medical image processing. Image segmentation, in particular, is the task that has benefited most from this innovation.

This progress has greatly advanced the translation of AI algorithms from the laboratory into real clinical practice, especially for computer-aided diagnosis and image-guided surgery.

As a result, the first medical devices that rely on AI algorithms to diagnose or treat patients have recently been introduced to the market.

We are pleased to invite you to submit your work to this Special Issue, which will focus on the cutting-edge developments of AI applied to the medical image field.

The journal will accept contributions (both original articles and reviews) centered mainly on the following topics:

    Medical image segmentation;

    AI-based medical image registration;

    Medical image recognition;

    Patient/treatment stratification based on AI image processing;

    Synthetic medical image generation;

    Image-guided surgery/radiotherapy based on AI;

    Radiomics;

    Explainable AI in medicine.

Dr. Paolo Zaffino
Prof. Dr. Maria Francesca Spadea
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image processing
  • image segmentation
  • computer-aided diagnosis
  • image-guided surgery
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research

24 pages, 2991 KiB  
Article
Automatic Blob Detection Method for Cancerous Lesions in Unsupervised Breast Histology Images
by Vincent Majanga, Ernest Mnkandla, Zenghui Wang and Donatien Koulla Moulla
Bioengineering 2025, 12(4), 364; https://doi.org/10.3390/bioengineering12040364 - 31 Mar 2025
Viewed by 257
Abstract
The early detection of cancerous lesions is a challenging task, given the complexity of cancer biology and the variability in tissue characteristics, which render medical image analysis tedious and time-consuming. Conventional computer-aided diagnosis (CAD) and detection methods have historically relied heavily on the visual inspection of medical images, which is ineffective, particularly for large and visible cancerous lesions in such images. Additionally, conventional methods face challenges in analyzing objects in large images due to overlapping/intersecting objects and the inability to resolve their image boundaries/edges. Nevertheless, the early detection of breast cancer lesions is a key determinant for diagnosis and treatment. In this study, we present a deep learning-based technique for breast cancer lesion detection, namely blob detection, which automatically detects hidden and inaccessible cancerous lesions in unsupervised human breast histology images. Initially, this approach prepares and pre-processes data through various augmentation methods to increase the dataset size. Secondly, a stain normalization technique is applied to the augmented images to separate nucleus features from tissue structures. Thirdly, morphology operation techniques, namely erosion, dilation, opening, and a distance transform, are used to enhance the images by highlighting foreground and background pixels while removing overlapping regions from the highlighted nucleus objects in the image. Subsequently, image segmentation is handled via the connected components method, which groups highlighted pixel components with similar intensity values and assigns them to their relevant labeled components (binary masks). These binary masks are then used in the active contours method for further segmentation by highlighting the boundaries/edges of ROIs. Finally, a deep learning recurrent neural network (RNN) model automatically detects and extracts cancerous lesions and their edges from the histology images via the blob detection method. The proposed approach utilizes the capabilities of both the connected components method and the active contours method to resolve the limitations of blob detection. The method was evaluated on 27,249 augmented, unsupervised (unlabeled) human breast cancer histology images, achieving an F1 score of 98.82%. Full article
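
As a rough illustration of the classical pre-processing stages this abstract names (morphological operations, a distance transform, and connected components), the following Python sketch uses OpenCV; it is not the authors' code, and the file name, kernel size, and thresholds are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the classical pre-processing
# stages named in the abstract: morphology, a distance transform, and connected
# components. The file name, kernel size, and thresholds are hypothetical.
import cv2
import numpy as np

img = cv2.imread("histology_patch.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)  # erosion then dilation removes specks
sure_bg = cv2.dilate(opened, kernel, iterations=2)  # expanded mask marks sure background

# The distance transform helps separate overlapping nuclei: high values are sure foreground.
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)

# Connected components group neighbouring foreground pixels into labelled binary masks.
n_labels, labels = cv2.connectedComponents(sure_fg)
print(f"{n_labels - 1} candidate nucleus blobs detected")
```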

12 pages, 1955 KiB  
Article
Automated Assessment of the Pulmonary Artery-to-Ascending Aorta Ratio in Fetal Cardiac Ultrasound Screening Using Artificial Intelligence
by Rina Aoyama, Masaaki Komatsu, Naoaki Harada, Reina Komatsu, Akira Sakai, Katsuji Takeda, Naoki Teraya, Ken Asada, Syuzo Kaneko, Kazuki Iwamoto, Ryu Matsuoka, Akihiko Sekizawa and Ryuji Hamamoto
Bioengineering 2024, 11(12), 1256; https://doi.org/10.3390/bioengineering11121256 - 12 Dec 2024
Viewed by 1223
Abstract
The three-vessel view (3VV) is a standardized transverse scanning plane used in fetal cardiac ultrasound screening to measure the absolute and relative diameters of the pulmonary artery (PA), ascending aorta (Ao), and superior vena cava, as required. The PA/Ao ratio is used to support the diagnosis of congenital heart disease (CHD). However, vascular diameters are measured manually by examiners, which causes intra- and interobserver variability in clinical practice. In the present study, we aimed to develop an artificial intelligence-based method for the standardized and quantitative evaluation of 3VV. In total, 315 cases and 20 examiners were included in this study. We used the object-detection software YOLOv7 for the automated extraction of 3VV images and compared three segmentation algorithms: DeepLabv3+, UNet3+, and SegFormer. Using the PA/Ao ratios based on vascular segmentation, YOLOv7 plus UNet3+ yielded the most appropriate classification for normal fetuses and those with CHD. Furthermore, YOLOv7 plus UNet3+ achieved an arithmetic mean value of 0.883 for the area under the receiver operating characteristic curve, which was higher than 0.749 for residents and 0.808 for fellows. Our automated method may support unskilled examiners in performing quantitative and objective assessments of 3VV images during fetal cardiac ultrasound screening. Full article
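
A minimal sketch of how a PA/Ao ratio could be derived from two binary vessel masks such as those produced by the segmentation stage; the mask files and pixel spacing are hypothetical placeholders, and the effective-diameter definition is one reasonable choice rather than the paper's exact measurement.

```python
# Sketch: derive a PA/Ao ratio from two binary segmentation masks.
# Mask files and pixel spacing are hypothetical.
import numpy as np

def effective_diameter_mm(mask: np.ndarray, pixel_spacing_mm: float) -> float:
    """Diameter of a circle with the same area as the segmented vessel."""
    area_mm2 = mask.sum() * pixel_spacing_mm ** 2
    return 2.0 * np.sqrt(area_mm2 / np.pi)

pa_mask = np.load("pa_mask.npy")  # hypothetical binary mask of the pulmonary artery
ao_mask = np.load("ao_mask.npy")  # hypothetical binary mask of the ascending aorta
spacing = 0.1                     # hypothetical pixel spacing in mm

pa_ao_ratio = effective_diameter_mm(pa_mask, spacing) / effective_diameter_mm(ao_mask, spacing)
print(f"PA/Ao ratio: {pa_ao_ratio:.2f}")
```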

29 pages, 4974 KiB  
Article
Depth-Aware Networks for Multi-Organ Lesion Detection in Chest CT Scans
by Han Zhang and Albert C. S. Chung
Bioengineering 2024, 11(10), 998; https://doi.org/10.3390/bioengineering11100998 - 3 Oct 2024
Viewed by 1288
Abstract
The capabilities of computed tomography (CT) scans in detecting lesions have increased remarkably in the past decades. In this paper, we propose a multi-organ lesion detection (MOLD) approach to better address real-life chest-related clinical needs. MOLD is a challenging task, especially within a large, high-resolution image volume, due to various types of background information interference and large differences in lesion sizes. Furthermore, the appearance similarity between lesions and other normal tissues demands more discriminative features. In order to overcome these challenges, we introduce depth-aware (DA) and skipped-layer hierarchical training (SHT) mechanisms with the novel Dense 3D context enhanced (Dense 3DCE) lesion detection model. The Dense 3DCE framework comprehensively considers shallow-, medium-, and deep-level features together. In addition, equipped with our SHT scheme, the backpropagation process can be supervised under precise control, while the DA scheme effectively incorporates depth domain knowledge into the model. Extensive experiments have been carried out on the publicly available, widely used DeepLesion dataset, and the results prove the effectiveness of our DA-SHT Dense 3DCE network in the MOLD task. Full article
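
The sketch below illustrates, in PyTorch, the general idea of densely fusing shallow, medium, and deep 3D features; it is a schematic toy model, not the published Dense 3DCE, DA, or SHT implementation.

```python
# Schematic sketch of dense multi-level 3D feature fusion (shallow, medium, and
# deep features considered together); not the published Dense 3DCE architecture.
import torch
import torch.nn as nn

class DenseFusion3D(nn.Module):
    def __init__(self, in_ch: int = 1, ch: int = 16):
        super().__init__()
        self.shallow = nn.Conv3d(in_ch, ch, 3, padding=1)
        self.medium = nn.Conv3d(ch, ch, 3, padding=1)
        self.deep = nn.Conv3d(2 * ch, ch, 3, padding=1)
        self.head = nn.Conv3d(3 * ch, 1, 1)  # fuse all three levels densely

    def forward(self, x):
        f1 = torch.relu(self.shallow(x))
        f2 = torch.relu(self.medium(f1))
        f3 = torch.relu(self.deep(torch.cat([f1, f2], dim=1)))  # dense skip connection
        return self.head(torch.cat([f1, f2, f3], dim=1))

x = torch.randn(1, 1, 8, 64, 64)  # (batch, channel, slices, height, width)
print(DenseFusion3D()(x).shape)
```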

19 pages, 2463 KiB  
Article
AI-Powered Telemedicine for Automatic Scoring of Neuromuscular Examinations
by Quentin Lesport, Davis Palmie, Gülşen Öztosun, Henry J. Kaminski and Marc Garbey
Bioengineering 2024, 11(9), 942; https://doi.org/10.3390/bioengineering11090942 - 20 Sep 2024
Viewed by 1588
Abstract
Telemedicine is now being used more frequently to evaluate patients with myasthenia gravis (MG). Assessing this condition involves clinical outcome measures, such as the standardized MG-ADL scale or the more complex MG-CE score obtained during clinical exams. However, human subjectivity limits the reliability of these examinations. We propose a set of AI-powered digital tools to improve scoring efficiency and quality using computer vision, deep learning, and natural language processing. This paper focuses on automatically segmenting a standard telemedicine video into clips corresponding to the items of the MG-CE assessment. This AI-powered solution offers a quantitative assessment of neurological deficits, improving upon subjective evaluations prone to examiner variability. It has the potential to enhance efficiency, increase patient participation in MG clinical trials, and generalize to various other neurological diseases. Full article
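
A minimal sketch of the clip-extraction step, assuming exam-item boundaries have already been predicted by an upstream model; the timestamps, labels, and file names are hypothetical.

```python
# Sketch: cut a telemedicine recording into per-item clips with OpenCV.
# Boundary timestamps, labels, and file names are hypothetical.
import cv2

boundaries = [(0.0, 12.5, "ptosis"), (12.5, 30.0, "diplopia")]  # hypothetical (start_s, end_s, label)

cap = cv2.VideoCapture("mg_exam.mp4")  # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

for start, end, label in boundaries:
    out = cv2.VideoWriter(f"{label}.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(start * fps))  # seek to the clip start
    for _ in range(int((end - start) * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)
    out.release()
cap.release()
```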

21 pages, 3057 KiB  
Article
Automated Multi-Class Facial Syndrome Classification Using Transfer Learning Techniques
by Fayroz F. Sherif, Nahed Tawfik, Doaa Mousa, Mohamed S. Abdallah and Young-Im Cho
Bioengineering 2024, 11(8), 827; https://doi.org/10.3390/bioengineering11080827 - 13 Aug 2024
Cited by 1 | Viewed by 2320
Abstract
Genetic disorders affect over 6% of the global population and pose substantial obstacles to healthcare systems. Early identification of these rare facial genetic disorders is essential for managing related medical complexities and health issues. Existing screening techniques are widely considered inadequate, often leading to a diagnosis several years after birth. This study evaluated the efficacy of deep learning-based classifier models for accurately recognizing dysmorphic characteristics from facial photos. It proposes a multi-class facial syndrome classification framework that encompasses a unique combination of diseases not previously examined together. The study focused on distinguishing between individuals with four specific genetic disorders (Down syndrome, Noonan syndrome, Turner syndrome, and Williams syndrome) and healthy controls. We investigated how well fine-tuning several well-known pre-trained convolutional neural network (CNN) models—VGG16, ResNet-50, ResNet152, and VGG-Face—worked for the multi-class facial syndrome classification task. We obtained the most encouraging results by fine-tuning the VGG-Face model. The fine-tuned VGG-Face model not only demonstrated the best performance in this study but also outperformed other state-of-the-art pre-trained CNN models on the multi-class facial syndrome classification task. The fine-tuned model achieved both an accuracy and an F1 score of 90%, indicating significant progress in accurately detecting the specified genetic disorders. Full article
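
A generic transfer-learning sketch of the fine-tuning pattern described here, using torchvision; VGG-Face weights are not distributed with torchvision, so an ImageNet-pretrained VGG16 stands in, and the five output classes mirror the study's setup (four syndromes plus healthy controls).

```python
# Sketch: fine-tune a pre-trained CNN for 5-class facial syndrome classification.
# VGG16/ImageNet stands in for VGG-Face, whose weights torchvision does not ship.
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False           # freeze the convolutional backbone
model.classifier[6] = nn.Linear(4096, 5)  # replace the head for 5 classes
# ...then train model.classifier on the facial-image dataset as usual.
```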

17 pages, 5578 KiB  
Article
Interactive Cascaded Network for Prostate Cancer Segmentation from Multimodality MRI with Automated Quality Assessment
by Weixuan Kou, Cristian Rey, Harry Marshall and Bernard Chiu
Bioengineering 2024, 11(8), 796; https://doi.org/10.3390/bioengineering11080796 - 6 Aug 2024
Viewed by 1468
Abstract
The accurate segmentation of prostate cancer (PCa) from multiparametric MRI is crucial in clinical practice for guiding biopsy and treatment planning. Existing automated methods often lack the necessary accuracy and robustness in localizing PCa, whereas interactive segmentation methods, although more accurate, require user intervention on each input image, thereby limiting the cost-effectiveness of the segmentation workflow. Our framework addresses the limitations of current methods by combining a coarse segmentation network, a rejection network, and an interactive deep network known as the Segment Anything Model (SAM). The coarse segmentation network automatically generates initial segmentation results, which are evaluated by the rejection network to estimate their quality. Low-quality results are flagged for user interaction, with the user providing a region of interest (ROI) enclosing the lesions, whereas for high-quality results, ROIs are cropped automatically from the coarse segmentation. Both manually and automatically defined ROIs are fed into SAM to produce the final fine segmentation. This approach significantly reduces the annotation burden and achieves substantial improvements by flagging approximately 20% of the images with the lowest quality scores for manual annotation. With only half of the images manually annotated, the final segmentation accuracy is statistically indistinguishable from that achieved using full manual annotation. Although this paper focuses on prostate lesion segmentation from multimodality MRI, the framework can be adapted to other medical image segmentation applications to improve segmentation efficiency while maintaining high accuracy standards. Full article
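
The gating logic could look roughly like the following sketch. The coarse segmentation network, rejection network, manual-ROI prompt, and checkpoint path are hypothetical stand-ins; the box-prompt calls follow Meta's segment-anything package, though the exact setup here is illustrative.

```python
# Sketch of the quality-gated SAM workflow; coarse_net, rejection_net, and
# ask_user_for_roi are hypothetical stand-ins for the paper's components.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical checkpoint path
predictor = SamPredictor(sam)

def segment(image: np.ndarray, quality_threshold: float = 0.5) -> np.ndarray:
    coarse_mask = coarse_net(image)            # hypothetical coarse segmentation network
    score = rejection_net(image, coarse_mask)  # hypothetical quality estimate in [0, 1]
    if score < quality_threshold:
        box = ask_user_for_roi(image)          # hypothetical manual ROI (x0, y0, x1, y1)
    else:
        ys, xs = np.nonzero(coarse_mask)       # automatic ROI from the coarse mask
        box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
    predictor.set_image(image)
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    return masks[0]  # final fine segmentation from SAM
```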

16 pages, 6140 KiB  
Article
An Interpretable System for Screening the Severity Level of Retinopathy in Premature Infants Using Deep Learning
by Wenhan Yang, Hao Zhou, Yun Zhang, Limei Sun, Li Huang, Songshan Li, Xiaoling Luo, Yili Jin, Wei Sun, Wenjia Yan, Jing Li, Jianxiang Deng, Zhi Xie, Yao He and Xiaoyan Ding
Bioengineering 2024, 11(8), 792; https://doi.org/10.3390/bioengineering11080792 - 5 Aug 2024
Viewed by 1512
Abstract
Accurate evaluation of retinopathy of prematurity (ROP) severity is vital for screening and proper treatment. Current deep-learning-based automated AI systems for assessing ROP severity do not follow clinical guidelines and are opaque. The aim of this study is to develop an interpretable AI system that mimics the clinical screening process to determine the ROP severity level. A total of 6100 RetCam III wide-field digital retinal images were collected from Guangdong Women and Children Hospital at Panyu (PY) and Zhongshan Ophthalmic Center (ZOC). A total of 3330 images of 520 pediatric patients from PY were annotated to train an object detection model to detect lesion type and location. A total of 2770 images of 81 pediatric patients from ZOC were annotated for stage, zone, and the presence of plus disease. ROP severity is determined by integrating stage, zone, and the presence of plus disease according to clinical guidelines; an interpretable AI system was therefore developed that derives the stage from the lesion type, the zone from the lesion location, and the presence of plus disease from a plus disease classification model. The ROP severity was then calculated and compared with the assessment of a human expert. Our method achieved an area under the curve (AUC) of 0.95 (95% confidence interval [CI] 0.90–0.98) in assessing the severity level of ROP. Compared with clinical doctors, our method achieved the highest F1 score, 0.76, in assessing the severity level of ROP. In conclusion, we developed an interpretable AI system for assessing the severity level of ROP that shows significant potential for use in clinical practice for ROP severity screening. Full article
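
A deliberately simplified, hypothetical sketch of the guideline-style integration step; the actual mapping used in the paper follows clinical ROP guidelines and is more detailed than this illustration.

```python
# Hypothetical, simplified sketch of combining stage, zone, and plus disease
# into a severity level; the paper's guideline-based mapping is more detailed.
def rop_severity(stage: int, zone: int, plus_disease: bool) -> str:
    """Combine stage, zone, and plus disease into a coarse severity level."""
    if plus_disease or (zone == 1 and stage >= 3):
        return "treatment-requiring"
    if zone <= 2 and stage >= 2:
        return "pre-threshold"
    if stage >= 1:
        return "mild"
    return "no ROP"

print(rop_severity(stage=3, zone=1, plus_disease=False))  # treatment-requiring
```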

12 pages, 2799 KiB  
Article
Development of the AI Pipeline for Corneal Opacity Detection
by Kenji Yoshitsugu, Eisuke Shimizu, Hiroki Nishimura, Rohan Khemlani, Shintaro Nakayama and Tadamasa Takemura
Bioengineering 2024, 11(3), 273; https://doi.org/10.3390/bioengineering11030273 - 12 Mar 2024
Cited by 3 | Viewed by 2197
Abstract
Ophthalmological services face global inadequacies, especially in low- and middle-income countries, which are marked by a shortage of practitioners and equipment. This study employed a portable slit lamp microscope with video capabilities and cloud storage for a more equitable global distribution of diagnostic resources. To enhance accessibility and quality of care, this study targets corneal opacity, a global cause of blindness. This study has two purposes. The first is to detect corneal opacity from videos in which the anterior segment of the eye is captured. The other is to develop an AI pipeline for detecting corneal opacities. First, we extracted image frames from videos and processed them using a convolutional neural network (CNN) model. Second, we manually annotated the images to extract only the corneal margins, adjusted the contrast with CLAHE, and processed them using the CNN model. Finally, we performed semantic segmentation of the cornea using the annotated data. The results showed an accuracy of 0.8 for image frames and 0.96 for corneal margins. Dice and IoU scores of 0.94 were achieved for semantic segmentation of the corneal margins. Although corneal opacity detection from video frames seemed challenging in the early stages of this study, manual annotation, corneal extraction, and CLAHE contrast adjustment significantly improved accuracy. Incorporating manual annotation into the AI pipeline, through semantic segmentation, enabled high accuracy in detecting corneal opacity. Full article
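
The CLAHE contrast adjustment mentioned here is a standard OpenCV operation; the sketch below shows the idea, with an illustrative clip limit, tile size, and file name.

```python
# Sketch of the CLAHE contrast step; clip limit, tile size, and file names
# are illustrative choices, not the paper's exact settings.
import cv2

cornea = cv2.imread("cornea_margin.png", cv2.IMREAD_GRAYSCALE)  # hypothetical cropped cornea
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(cornea)  # locally equalized contrast before CNN inference
cv2.imwrite("cornea_margin_clahe.png", enhanced)
```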

17 pages, 3108 KiB  
Article
Deep Learning for Delineation of the Spinal Canal in Whole-Body Diffusion-Weighted Imaging: Normalising Inter- and Intra-Patient Intensity Signal in Multi-Centre Datasets
by Antonio Candito, Richard Holbrey, Ana Ribeiro, Christina Messiou, Nina Tunariu, Dow-Mu Koh and Matthew D. Blackledge
Bioengineering 2024, 11(2), 130; https://doi.org/10.3390/bioengineering11020130 - 29 Jan 2024
Viewed by 1841
Abstract
Background: Whole-Body Diffusion-Weighted Imaging (WBDWI) is an established technique for staging and evaluating treatment response in patients with multiple myeloma (MM) and advanced prostate cancer (APC). However, WBDWI scans show inter- and intra-patient intensity signal variability. This variability poses challenges in accurately quantifying bone disease, tracking changes over follow-up scans, and developing automated tools for bone lesion delineation. Here, we propose a novel automated pipeline for inter-station, inter-scan image signal standardisation on WBDWI that utilizes robust segmentation of the spinal canal through deep learning. Methods: We trained and validated a supervised 2D U-Net model to automatically delineate the spinal canal (both the spinal cord and surrounding cerebrospinal fluid, CSF) in an initial cohort of 40 patients who underwent WBDWI for treatment response evaluation (80 scans in total). Expert-validated contours were used as the target standard. The algorithm was further semi-quantitatively validated on four additional datasets (three internal, one external, 207 scans total) by comparing the distributions of average apparent diffusion coefficient (ADC) and volume of the spinal cord derived from a two-component Gaussian mixture model of segmented regions. Our pipeline subsequently standardises WBDWI signal intensity through two stages: (i) normalisation of signal between imaging stations within each patient through histogram equalisation of slices acquired on either side of the station gap, and (ii) inter-scan normalisation through histogram equalisation of the signal derived within segmented spinal canal regions. This approach was semi-quantitatively validated in all scans available to the study (N = 287). Results: The test Dice score, precision, and recall of the spinal canal segmentation model were all above 0.87 when compared to manual delineation. The average ADC for the spinal cord (1.7 × 10⁻³ mm²/s) showed no significant difference from the manual contours. Furthermore, no significant differences were found between the average ADC values of the spinal cord across the additional four datasets. The signal-normalised, high-b-value images were visualised using a fixed contrast window level and demonstrated qualitatively better signal homogeneity across scans than scans that were not signal-normalised. Conclusion: Our proposed intensity signal WBDWI normalisation pipeline successfully harmonises intensity values across multi-centre cohorts. The computational time required is less than 10 s, preserving contrast-to-noise and signal-to-noise ratios in axial diffusion-weighted images. Importantly, no changes to the clinical MRI protocol are expected, and there is no need for additional reference MRI data or follow-up scans. Full article
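
A simplified sketch of inter-scan normalisation keyed to the segmented spinal canal: each scan is remapped so that its canal intensities match a reference scan's histogram. This illustrates the idea only; the paper's pipeline performs histogram equalisation across station gaps and within segmented canal regions, and the function inputs here are hypothetical.

```python
# Simplified sketch: rescale a scan so intensities inside its segmented spinal
# canal match a reference scan's canal histogram. `scan`, boolean `canal_mask`,
# and the reference pair are hypothetical inputs.
import numpy as np
from skimage.exposure import match_histograms

def normalise_scan(scan, canal_mask, ref_scan, ref_mask):
    """Map scan intensities so canal voxels match the reference canal histogram."""
    matched_canal = match_histograms(scan[canal_mask], ref_scan[ref_mask])
    # Fit a linear mapping from original to matched canal values and apply it
    # to the whole volume.
    slope, intercept = np.polyfit(scan[canal_mask], matched_canal, deg=1)
    return slope * scan + intercept
```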

12 pages, 2785 KiB  
Article
Using AI Segmentation Models to Improve Foreign Body Detection and Triage from Ultrasound Images
by Lawrence Holland, Sofia I. Hernandez Torres and Eric J. Snider
Bioengineering 2024, 11(2), 128; https://doi.org/10.3390/bioengineering11020128 - 29 Jan 2024
Cited by 2 | Viewed by 2308
Abstract
Medical imaging can be a critical tool for triaging casualties in trauma situations. In remote or military medicine scenarios, triage is essential for identifying how to use limited resources or prioritize evacuation for the most serious cases. Ultrasound imaging, while portable and often available near the point of injury, can only be used for triage if images are properly acquired, interpreted, and objectively scored for triage. Here, we detail how AI segmentation models can be used for improving image interpretation and objective triage evaluation for a medical application focused on foreign bodies embedded in tissues at variable distances from critical neurovascular features. Ultrasound images previously collected in a tissue phantom with or without neurovascular features were labeled with ground truth masks. These image sets were used to train two different segmentation AI frameworks: YOLOv7 and U-Net segmentation models. Overall, both approaches were successful in identifying shrapnel in the image set, with U-Net outperforming YOLOv7 for single-class segmentation. Both segmentation models were also evaluated with a more complex image set containing shrapnel, artery, vein, and nerve features. YOLOv7 obtained higher precision scores across multiple classes, whereas U-Net achieved higher recall scores. Using each AI model, a triage distance metric was adapted to measure the proximity of shrapnel to the nearest neurovascular feature, with U-Net more closely mirroring the triage distances measured from ground truth labels. Overall, the segmentation AI models were successful in detecting shrapnel in ultrasound images and could allow for improved injury triage in emergency medicine scenarios. Full article
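
A sketch of a triage-distance metric of the kind described, computed with a Euclidean distance transform; the masks and pixel spacing are hypothetical inputs.

```python
# Sketch: minimum distance from any shrapnel pixel to the nearest neurovascular
# pixel via a Euclidean distance transform. Masks and spacing are hypothetical.
import numpy as np
from scipy.ndimage import distance_transform_edt

def triage_distance_mm(shrapnel_mask, neurovascular_mask, pixel_spacing_mm):
    """Minimum shrapnel-to-neurovascular distance in millimetres."""
    # Distance from every pixel to the nearest neurovascular (True) pixel.
    dist_to_nv = distance_transform_edt(~neurovascular_mask.astype(bool))
    return float(dist_to_nv[shrapnel_mask.astype(bool)].min() * pixel_spacing_mm)
```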

18 pages, 2992 KiB  
Article
Automatic Detection and Classification of Hypertensive Retinopathy with Improved Convolution Neural Network and Improved SVM
by Usharani Bhimavarapu, Nalini Chintalapudi and Gopi Battineni
Bioengineering 2024, 11(1), 56; https://doi.org/10.3390/bioengineering11010056 - 5 Jan 2024
Cited by 7 | Viewed by 2894
Abstract
Hypertensive retinopathy (HR) results from microvascular retinal changes triggered by hypertension, a leading cause of preventable blindness worldwide. It is therefore necessary to develop an automated system for HR detection and evaluation using retinal images. We aimed to propose an automated approach to identify and categorize the various degrees of HR severity. A new network called the spatial convolution module (SCM) combines cross-channel and spatial information, and its convolution operations extract helpful features. The model is evaluated using the publicly accessible ODIR, INSPIREVR, and VICAVR datasets. We applied augmentation to artificially enlarge the dataset of 1200 fundus images. The HR severity levels of normal, mild, moderate, severe, and malignant are classified in less time than with existing models because, in the proposed model, the convolutional layers run only once on the input fundus images, which reduces the processing time for detecting abnormalities in the vascular structure. According to the findings, the improved SVM had the highest detection and classification accuracy rate in the vessel classification, with an accuracy of 98.99%, completing the task in 160.4 s. Ten-fold classification achieved the highest accuracy of 98.99%, i.e., 0.27 percentage points higher than the five-fold classification accuracy, and the improved KNN classifier achieved an accuracy of 98.72%. When computational efficiency is a priority, the proposed model's ability to quickly recognize the different HR severity levels is significant. Full article
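
A generic sketch of the CNN-features-plus-SVM pattern the abstract describes, using scikit-learn; the feature loader is a hypothetical placeholder, and the paper's "improved SVM" includes modifications not reproduced here.

```python
# Sketch of classifying CNN-extracted features with an SVM and ten-fold
# cross-validation; load_fundus_features is a hypothetical placeholder.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# X: CNN feature vectors extracted from fundus images; y: HR severity labels
# (normal/mild/moderate/severe/malignant). Both are hypothetical placeholders.
X, y = load_fundus_features()  # hypothetical loader

clf = SVC(kernel="rbf", C=10.0)             # baseline SVM classifier
scores = cross_val_score(clf, X, y, cv=10)  # ten-fold cross-validation
print(f"10-fold accuracy: {scores.mean():.4f}")
```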

16 pages, 3084 KiB  
Article
COVID-19 Detection via Ultra-Low-Dose X-ray Images Enabled by Deep Learning
by Isah Salim Ahmad, Na Li, Tangsheng Wang, Xuan Liu, Jingjing Dai, Yinping Chan, Haoyang Liu, Junming Zhu, Weibin Kong, Zefeng Lu, Yaoqin Xie and Xiaokun Liang
Bioengineering 2023, 10(11), 1314; https://doi.org/10.3390/bioengineering10111314 - 14 Nov 2023
Cited by 6 | Viewed by 2266
Abstract
The detection of Coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, ULTRA-X-COVID, a deep neural network specifically designed for the automatic detection of COVID-19 infections using ultra-low-dose X-ray images, is presented. The study included a multinational, multicenter dataset consisting of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. To evaluate the effectiveness of the model, metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score were utilized. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956–0.983), an accuracy of 94.3%, a specificity of 88.9%, and an F1 score of 99.0%. Notably, the ULTRA-X-COVID model demonstrated performance comparable to conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for diagnosing various other diseases. Full article
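
The reported metrics can be computed with scikit-learn, as in this minimal sketch; the labels and probabilities are hypothetical stand-ins for model outputs.

```python
# Sketch of the reported evaluation metrics with scikit-learn; y_true and
# y_prob are hypothetical stand-ins for ground truth and model outputs.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1])            # hypothetical ground-truth labels
y_prob = np.array([0.1, 0.9, 0.8, 0.3, 0.7])  # hypothetical predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:", roc_auc_score(y_true, y_prob))
print("Accuracy:", accuracy_score(y_true, y_pred))
print("Specificity:", tn / (tn + fp))
print("F1:", f1_score(y_true, y_pred))
```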
