
Artificial Intelligence in Biomedical Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Biomedical Engineering".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 11655

Special Issue Editors


Guest Editor
Institute of Science and Technology for Brain-Inspired Intelligence and the MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
Interests: machine learning; medical imaging; computer vision

Guest Editor
Division of Cardiovascular Medicine, Department of Medicine, Stanford University, Mailcode 5687, 453 Quarry Road, Palo Alto, CA 94304, USA
Interests: machine learning; medical imaging; cardiovascular medicine

Guest Editor
Department of Computer Science, Johns Hopkins University, Baltimore, MD 21231, USA
Interests: computer-aided detection and diagnosis; computer vision; medical image analysis; abdominal imaging; cancer detection; self-supervised learning

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has seen a dramatic resurgence in the past few years. With powerful computational resources and large datasets, AI can analyze, featurize, predict, and generate data, offering immense potential to benefit many fields, including biomedical image processing.

On the other hand, challenges also emerge in applying AI, and deep learning in particular, to biomedical image processing. First, the availability of biomedical image datasets is usually limited by the need for laborious manual labeling as well as by privacy and regulatory requirements. It is also hard to acquire data across different protocols, machines, and facilities, which is critical for verifying the generalizability of AI algorithms. Second, current AI is prone to small data perturbations (e.g., misclassifying a panda as a gibbon after imperceptible noise is added); addressing this is necessary and significant, especially in medicine and healthcare, to reduce misdiagnosis and mistreatment. Last but not least, most AI models are still considered black boxes and are hard to interpret, which largely hinders their clinical use.
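The fragility to small perturbations can be illustrated with a minimal sketch. This is a toy linear model with made-up weights and inputs, not any specific medical AI system; it only shows the FGSM-style mechanism by which a tiny, sign-aligned input change flips a prediction.

```python
# Illustrative only: a tiny linear classifier and an FGSM-style
# perturbation. Weights and inputs are invented for the example.

def predict(w, x, b=0.0):
    """Class score of a linear model: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """Move each feature by eps in the direction that lowers the score.

    For a linear model, the gradient of the score w.r.t. x is w itself,
    so the worst-case sign perturbation is -eps * sign(w_i).
    """
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]
x = [1.0, 1.0, 1.0]              # correctly classified: positive score
x_adv = fgsm_perturb(w, x, eps=0.4)

print(predict(w, x))             # positive score (original class)
print(predict(w, x_adv))         # small change, score flips sign
```

The per-feature change is at most 0.4, yet the class score crosses zero; with high-dimensional images the same effect is achieved with perturbations far below human perception.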

This Special Issue focuses on the subject of artificial intelligence and its application in biomedical engineering, with special attention to medical image processing. We invite authors who are interested in AI algorithms from both theoretical and practical perspectives and their application in biomedical imaging, including but not limited to data acquisition, image reconstruction, image analysis and understanding, and computer-aided diagnosis.

Dr. Hongming Shan
Dr. Ruibin Feng
Dr. Zongwei Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • computer vision
  • deep learning
  • image analysis
  • image reconstruction
  • image segmentation
  • image registration
  • computer-aided diagnosis
  • visualization in biomedical imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

14 pages, 3138 KiB  
Article
Synthetic Medical Imaging Generation with Generative Adversarial Networks for Plain Radiographs
by John R. McNulty, Lee Kho, Alexandria L. Case, David Slater, Joshua M. Abzug and Sybil A. Russell
Appl. Sci. 2024, 14(15), 6831; https://doi.org/10.3390/app14156831 - 5 Aug 2024
Viewed by 729
Abstract
In medical imaging, access to data is commonly limited due to patient privacy restrictions, and it can be difficult to acquire enough data in the case of rare diseases. The purpose of this investigation was to develop a reusable open-source synthetic image-generation pipeline, the GAN Image Synthesis Tool (GIST), that is easy to use as well as easy to deploy. The pipeline helps to improve and standardize AI algorithms in the digital health space by generating high quality synthetic image data that is not linked to specific patients. Its image generation capabilities include the ability to generate imaging of pathologies or injuries with low incidence rates. This improvement of digital health AI algorithms could improve diagnostic accuracy, aid in patient care, decrease medicolegal claims, and ultimately decrease the overall cost of healthcare. The pipeline builds on existing Generative Adversarial Networks (GANs) algorithms, and preprocessing and evaluation steps were included for completeness. For this work, we focused on ensuring the pipeline supports radiography, with a focus on synthetic knee and elbow X-ray images. In designing the pipeline, we evaluated the performance of current GAN architectures, studying the performance on available X-ray data. We show that the pipeline is capable of generating high-quality and clinically relevant images based on a lay person’s evaluation and the Fréchet Inception Distance (FID) metric. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
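The Fréchet Inception Distance (FID) used to score the synthetic radiographs can be sketched in a simplified setting. Real FID compares Inception-v3 feature statistics with full covariance matrices; the version below assumes diagonal covariances (so the matrix square root reduces to an element-wise square root), and the numbers are invented toy values, not Inception activations.

```python
import math

# Hedged sketch of FID for two diagonal-covariance Gaussians:
# FID = ||mu1 - mu2||^2 + sum_i (v1_i + v2_i - 2*sqrt(v1_i * v2_i))

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between diagonal-covariance Gaussians."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical statistics give FID 0; a shifted mean gives a positive FID.
print(fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]))   # 0.0
print(fid_diagonal([0, 0], [1, 1], [1, 0], [1, 1]))   # 1.0
```

Lower FID means the synthetic image statistics are closer to the real ones, which is how the pipeline's output quality is quantified alongside human evaluation.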

14 pages, 1060 KiB  
Article
Practical Medical Image Generation with Provable Privacy Protection Based on Denoising Diffusion Probabilistic Models for High-Resolution Volumetric Images
by Hisaichi Shibata, Shouhei Hanaoka, Takahiro Nakao, Tomohiro Kikuchi, Yuta Nakamura, Yukihiro Nomura, Takeharu Yoshikawa and Osamu Abe
Appl. Sci. 2024, 14(8), 3489; https://doi.org/10.3390/app14083489 - 20 Apr 2024
Cited by 1 | Viewed by 950
Abstract
Local differential privacy algorithms combined with deep generative models can enhance secure medical image sharing among researchers in the public domain without central administrators; however, such methods have been limited to generating low-resolution images, which are insufficient for diagnosis by medical doctors. To enhance the performance of deep generative models so that they can generate high-resolution medical images, we propose a large-scale diffusion model that can, for the first time, unconditionally generate high-resolution (256×256×256) volumetric medical images (head magnetic resonance images). This diffusion model has 19 billion parameters, but to make it easy to train, we temporally divided the model into 200 submodels, each of which has 95 million parameters. Moreover, on the basis of this new diffusion model, we propose another formulation of image anonymization with which the processed images can satisfy provable Gaussian local differential privacy and with which we can generate images semantically different from the original image but belonging to the same class. We believe that the formulation of this new diffusion model and the implementation of local differential privacy algorithms combined with diffusion models can contribute to the secure sharing of practical images upstream of data processing. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
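The Gaussian local differential privacy guarantee mentioned above rests on the classic Gaussian mechanism, sketched below. The calibration formula is the standard one for (ε, δ)-differential privacy with ε < 1, not necessarily the authors' exact construction, and the parameter values are arbitrary examples.

```python
import math
import random

# Hedged sketch of the Gaussian mechanism: release a value plus
# N(0, sigma^2) noise, with sigma calibrated to (eps, delta).

def gaussian_sigma(sensitivity, eps, delta):
    """Standard noise scale for the Gaussian mechanism (eps < 1)."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

def privatize(value, sensitivity, eps, delta, rng):
    """Release a noisy version of value satisfying (eps, delta)-DP."""
    sigma = gaussian_sigma(sensitivity, eps, delta)
    return value + rng.gauss(0.0, sigma)

rng = random.Random(0)
sigma = gaussian_sigma(sensitivity=1.0, eps=0.5, delta=1e-5)
print(sigma)                          # well above the sensitivity:
print(privatize(10.0, 1.0, 0.5, 1e-5, rng))  # strong noise at eps=0.5
```

The same trade-off drives the paper's design: stronger privacy (smaller ε) means more noise, which is why pairing the mechanism with a powerful generative model matters for keeping the released images diagnostically useful.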

13 pages, 2116 KiB  
Article
Enhancing Acute Bilirubin Encephalopathy Diagnosis with Multi-Modal MRI: A Deep Learning Approach
by Huan Zhang and Shunren Xia
Appl. Sci. 2024, 14(6), 2464; https://doi.org/10.3390/app14062464 - 14 Mar 2024
Viewed by 844
Abstract
Background: Acute Bilirubin Encephalopathy (ABE) is a major cause of infant mortality and disability, making early detection and treatment essential to prevent further progression and complications. Methods: To enhance the diagnostic capabilities of multi-modal Magnetic Resonance Imaging (MRI) for ABE, we proposed a deep learning model integrating an attention module (AM) with a central network (CentralNet). This model was tested on MRI data from 145 newborns diagnosed with ABE and 140 non-ABE newborns, utilizing both T1-weighted and T2-weighted images. Results: The findings indicated the following: (1) In single-modality experiments, the inclusion of AM significantly improved all the performance metrics compared to the models without AM. Specifically, for T1-weighted MRI, the accuracy was 0.639 ± 0.04, AUC was 0.682 ± 0.037, and sensitivity was 0.688 ± 0.09. For the T2-weighted images, the accuracy was 0.738 ± 0.039 and the AUC was 0.796 ± 0.025. (2) In multi-modal experiments, using T1 + T2 images, our model achieved the best accuracy of 0.845 ± 0.018, AUC of 0.913 ± 0.02, and sensitivity of 0.954 ± 0.069, compared to models without an AM and CentralNet. The specificity remained relatively stable, while the precision and F1 scores significantly increased, reaching 0.792 ± 0.048 and 0.862 ± 0.017, respectively. Conclusions: This study emphasizes the effectiveness of combining attention modules with CentralNet, significantly enhancing the accuracy of multi-modal MRI in classifying ABE. It presents a new perspective and possibility for the clinical application of multi-modal MRI imaging in the diagnosis of ABE. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
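The core idea of attention-weighted multi-modal fusion can be sketched as follows. The paper's AM + CentralNet design is considerably more elaborate (learned scores, per-layer fusion); this toy version uses fixed scores and invented two-dimensional features purely to show the mechanism: per-modality scores pass through a softmax to become fusion weights.

```python
import math

# Hedged sketch: softmax-gated fusion of two modality feature vectors.

def softmax(scores):
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(features, scores):
    """Weighted sum of per-modality feature vectors."""
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

t1_feat = [1.0, 0.0]                     # toy T1-weighted features
t2_feat = [0.0, 1.0]                     # toy T2-weighted features
fused = fuse([t1_feat, t2_feat], scores=[2.0, 2.0])
print(fused)                             # equal scores -> plain average
```

With equal scores the fusion degenerates to averaging; a trained attention module would raise the score of whichever modality carries more diagnostic signal for a given case.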

16 pages, 5104 KiB  
Article
A Virtual Staining Method Based on Self-Supervised GAN for Fourier Ptychographic Microscopy Colorful Imaging
by Yan Wang, Nan Guan, Jie Li and Xiaoli Wang
Appl. Sci. 2024, 14(4), 1662; https://doi.org/10.3390/app14041662 - 19 Feb 2024
Viewed by 1186
Abstract
Fourier ptychographic microscopy (FPM) is a computational imaging technology that has endless vitality and application potential in digital pathology. Colored pathological image analysis is the foundation of clinical diagnosis, basic research, and most biomedical problems. However, the current colorful FPM reconstruction methods are time-inefficient, resulting in poor image quality due to optical interference and reconstruction errors. This paper combines coloring and FPM to propose a self-supervised generative adversarial network (GAN) for FPM color reconstruction. We design a generator based on the efficient channel residual (ECR) block to adaptively obtain efficient cross-channel interaction information in a lightweight manner, and we introduce content-consistency loss to learn the high-frequency information of the image and improve the image quality of the staining. Furthermore, the effectiveness of our proposed method is demonstrated through objective indicators and visual evaluations. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
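A content-consistency loss on high-frequency detail, in the spirit of the loss term above, can be sketched on a 1-D signal. The paper's exact definition is not reproduced here; this hedged version takes first differences as a crude high-pass filter and penalizes the mean absolute mismatch of that detail, which is the general shape such losses take.

```python
# Hedged sketch of a high-frequency content-consistency loss.

def high_freq(signal):
    """First differences as a crude high-pass filter."""
    return [b - a for a, b in zip(signal, signal[1:])]

def content_consistency_loss(generated, target):
    """Mean absolute difference of the high-frequency components."""
    g, t = high_freq(generated), high_freq(target)
    return sum(abs(a - b) for a, b in zip(g, t)) / len(g)

target   = [0.0, 1.0, 0.0, 1.0]   # sharp detail
smooth   = [0.5, 0.5, 0.5, 0.5]   # detail lost -> high loss
faithful = [1.0, 2.0, 1.0, 2.0]   # same detail, shifted -> zero loss
print(content_consistency_loss(smooth, target))
print(content_consistency_loss(faithful, target))
```

Note that the shifted-but-sharp signal incurs zero loss: the term ignores global offsets and rewards preserving edges and texture, which is exactly what virtual staining must not blur away.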

26 pages, 1181 KiB  
Article
Advanced Medical Image Segmentation Enhancement: A Particle-Swarm-Optimization-Based Histogram Equalization Approach
by Shoffan Saifullah and Rafał Dreżewski
Appl. Sci. 2024, 14(2), 923; https://doi.org/10.3390/app14020923 - 22 Jan 2024
Cited by 3 | Viewed by 1692
Abstract
Accurate medical image segmentation is paramount for precise diagnosis and treatment in modern healthcare. This research presents a comprehensive study of the efficacy of particle swarm optimization (PSO) combined with histogram equalization (HE) preprocessing for medical image segmentation, focusing on lung CT scan and chest X-ray datasets. Best-cost values reveal the PSO algorithm’s performance, with HE preprocessing demonstrating significant stabilization and enhanced convergence, particularly for complex lung CT scan images. Evaluation metrics, including accuracy, precision, recall, F1-score/Dice, specificity, and Jaccard, show substantial improvements with HE preprocessing, emphasizing its impact on segmentation accuracy. Comparative analyses against alternative methods, such as Otsu, Watershed, and K-means, confirm the competitiveness of the PSO-HE approach, especially for chest X-ray images. The study also underscores the positive influence of preprocessing on image clarity and precision. These findings highlight the promise of the PSO-HE approach for advancing the accuracy and reliability of medical image segmentation and pave the way for further research and method integration to enhance this critical healthcare application. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
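The HE preprocessing step that the paper pairs with PSO is the textbook histogram-equalization remapping, sketched below for an 8-bit grayscale image flattened to a pixel list. This is the standard CDF-based formula, not the authors' implementation.

```python
# Hedged sketch of histogram equalization for 8-bit intensities:
# build the histogram, accumulate it into a CDF, and remap each
# pixel so the cumulative histogram becomes approximately linear.

def equalize(pixels, levels=256):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for i, h in enumerate(hist):
        running += h
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)   # first occupied bin
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

# A low-contrast strip spanning only 100..103 is stretched to 0..255.
flat = [100, 100, 101, 102, 102, 103]
print(equalize(flat))
```

Stretching a narrow intensity band across the full range is what stabilizes the downstream PSO search: the objective landscape over candidate thresholds becomes far better separated after equalization.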

54 pages, 10708 KiB  
Article
The Performance of a Lip-Sync Imagery Model, New Combinations of Signals, a Supplemental Bond Graph Classifier, and Deep Formula Detection as an Extraction and Root Classifier for Electroencephalograms and Brain–Computer Interfaces
by Ahmad Naebi and Zuren Feng
Appl. Sci. 2023, 13(21), 11787; https://doi.org/10.3390/app132111787 - 27 Oct 2023
Viewed by 1376
Abstract
Many current brain–computer interface (BCI) applications depend on the quick processing of brain signals. Most researchers strive to create new methods for future implementation and enhance existing models to discover an optimal feature set that can operate independently. This study focuses on four key concepts that will be used to complete future works. The first concept is related to potential future communication models, whereas the others aim to enhance previous models or methodologies. The four concepts are as follows. First, we suggest a new communication imagery model as a substitute for a speech imager that relies on a mental task approach. As speech imagery is intricate, one cannot imagine the sounds of every character in every language. Our study proposes a new mental task model for lip-sync imagery that can be employed in all languages. Any character in any language can be used with this mental task model. In this study, we utilized two lip-sync movements to indicate two sounds, characters, or letters. Second, we considered innovative hybrid signals. Choosing an unsuitable frequency range can lead to ineffective feature extractions. Therefore, the selection of an appropriate frequency range is crucial for processing. The ultimate goal of this method is to accurately discover distinct frequencies of brain imagery activities. The restricted frequency range combination presents an initial proposal for generating fragmented, continuous frequencies. The first model assesses two 4 Hz intervals as filter banks. The primary objective is to discover new combinations of signals at 8 Hz by selecting filter banks with a 4 Hz scale from the frequency range of 4 Hz to 40 Hz. This approach facilitates the acquisition of efficient and clearly defined features by reducing similar patterns and enhancing distinctive patterns of brain activity. 
Third, we introduce a new linear bond graph classifier to supplement a linear support vector machine (SVM), whose performance declines significantly under high-noise conditions. Fourth, this paper presents a deep-learning model for formula recognition that converts the first-layer data into a formula extraction model. The primary goal is to decrease the noise in the formula coefficients of the subsequent layers. The output of the final layer comprises coefficients chosen by different functions at various levels. The classifier then extracts the root interval for each formula, and a diagnosis is established based on these intervals. The final goal of the last idea is to explain the main brain imagery activity formula using a combination formula for similar and distinctive brain imagery activities. The results of implementing all of the proposed methods are reported and range between 55% and 98%: the lowest, 55%, is for deep formula detection, and the highest, 98%, is for the new combinations of signals. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
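The restricted frequency-range idea above can be sketched as a filter-bank enumeration: slice 4–40 Hz into 4 Hz banks and form candidate 8 Hz combinations by pairing banks. The paper's exact pairing rule is not spelled out here; this hedged version simply enumerates every unordered pair.

```python
from itertools import combinations

# Hedged sketch: nine 4 Hz filter banks over 4-40 Hz, then all
# unordered pairs of banks as candidate 8 Hz signal combinations.

banks = [(lo, lo + 4) for lo in range(4, 40, 4)]   # (4,8) .. (36,40)
pairs = list(combinations(banks, 2))               # 8 Hz combinations

print(banks[0], banks[-1])
print(len(banks), len(pairs))                      # 9 banks -> 36 pairs
```

Enumerating fragmented band pairs rather than one contiguous range is what lets the method keep distinctive rhythms while discarding bands that carry mostly shared, uninformative activity.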

12 pages, 366 KiB  
Article
Federated Learning for Computer-Aided Diagnosis of Glaucoma Using Retinal Fundus Images
by Telmo Baptista, Carlos Soares, Tiago Oliveira and Filipe Soares
Appl. Sci. 2023, 13(21), 11620; https://doi.org/10.3390/app132111620 - 24 Oct 2023
Cited by 1 | Viewed by 1233
Abstract
Deep learning approaches require a large amount of data to be transferred to centralized entities. However, this is often not a feasible option in healthcare, as it raises privacy concerns over sharing sensitive information. Federated Learning (FL) aims to address this issue by allowing machine learning without transferring the data to a centralized entity. FL has shown great potential to ensure privacy in digital healthcare while maintaining performance. Despite this, there is a lack of research on the impact of different types of data heterogeneity on the results. In this study, we research the robustness of various FL strategies on different data distributions and data quality for glaucoma diagnosis using retinal fundus images. We use RetinaQualEvaluator to generate quality labels for the datasets and then a data distributor to achieve our desired distributions. Finally, we evaluate the performance of the different strategies on local data and an independent test dataset. We observe that federated learning shows the potential to enable high-performance models without compromising sensitive data. Furthermore, we infer that FedProx is more suitable to scenarios where the distributions and quality of the data of the participating clients is diverse with less communication cost. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
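The federated averaging at the heart of the FL strategies compared above can be sketched as a size-weighted average of client models. This is the generic FedAvg aggregation step with invented weight vectors, not the authors' training code; FedProx differs on the client side, adding a proximal penalty that discourages local models from drifting far from the global one (indicated only as a comment).

```python
# Hedged sketch of the FedAvg server aggregation step.

def fed_avg(client_models, client_sizes):
    """Average client weight vectors, weighted by local dataset size.

    (FedProx keeps this same aggregation but has each client add a
    proximal term mu/2 * ||w - w_global||^2 to its local objective.)
    """
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [sum(n * m[i] for m, n in zip(client_models, client_sizes)) / total
            for i in range(dim)]

# Two clients with different data volumes and locally updated weights.
clients = [[1.0, 0.0], [3.0, 2.0]]
sizes = [100, 300]                     # client 2 holds 3x the data
global_model = fed_avg(clients, sizes)
print(global_model)                    # -> [2.5, 1.5]
```

No raw fundus images ever leave the clients: only these weight vectors are exchanged, which is the privacy property the study relies on.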

10 pages, 2001 KiB  
Article
Effective Human Motor Imagery Recognition via Segment Pool Based on One-Dimensional Convolutional Neural Network with Bidirectional Recurrent Attention Unit Network
by Huawen Hu, Chenxi Yue, Enze Shi, Sigang Yu, Yanqing Kang, Jinru Wu, Jiaqi Wang and Shu Zhang
Appl. Sci. 2023, 13(16), 9233; https://doi.org/10.3390/app13169233 - 14 Aug 2023
Viewed by 1053
Abstract
Brain–computer interface (BCI) technology enables humans to interact with computers by collecting and decoding electroencephalogram (EEG) from the brain. For practical BCIs based on EEG, accurate recognition is crucial. However, existing methods often struggle to achieve a balance between accuracy and complexity. To overcome these challenges, we propose 1D convolutional neural networks with bidirectional recurrent attention unit network (1DCNN-BiRAU) based on a random segment recombination strategy (segment pool, SegPool). It has three main contributions. First, SegPool is proposed to increase training data diversity and reduce the impact of a single splicing method on model performance across different tasks. Second, it employs multiple 1D CNNs, including local and global models, to extract channel information with simplicity and efficiency. Third, BiRAU is introduced to learn temporal information and identify key features in time-series data, using forward–backward networks and an attention gate in the RAU. The experiments show that our model is effective and robust, achieving accuracy of 99.47% and 91.21% in binary classification at the individual and group levels, and 90.90% and 92.18% in four-category classification. Our model demonstrates promising results for recognizing human motor imagery and has the potential to be applied in practical scenarios such as brain–computer interfaces and neurological disorder diagnosis. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)
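The SegPool (random segment recombination) strategy can be sketched on a single EEG channel: cut the trial into fixed-size segments and recombine them in a random order to diversify the training data. The segment length and recombination policy below are one plausible reading for illustration, not the paper's exact settings.

```python
import random

# Hedged sketch of SegPool-style random segment recombination.

def segpool(trial, seg_len, rng):
    """Split a 1-D trial into seg_len-sized segments, shuffle their
    order, and concatenate them back into one trial."""
    segments = [trial[i:i + seg_len]
                for i in range(0, len(trial), seg_len)]
    rng.shuffle(segments)
    return [sample for seg in segments for sample in seg]

trial = list(range(12))                    # toy 12-sample EEG channel
recombined = segpool(trial, seg_len=4, rng=random.Random(0))
print(recombined)                          # same samples, segments reordered
```

Each recombination yields a new training example containing the same local waveforms in a different global order, which is why the strategy increases data diversity without fabricating signal content.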
