Recent Advance of Machine Learning in Biomedical Image Analysis

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biomedical Engineering and Biomaterials".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 13328

Special Issue Editors


Guest Editor: Dr. Zhiming Luo
The Department of Artificial Intelligence, Xiamen University, Xiamen, China
Interests: computer vision; traffic video analytics; medical image analysis

Guest Editor: Dr. Sheng Lian
College of Computer and Data Science, Fuzhou University, Fuzhou, China
Interests: machine learning; medical image analysis; computer vision

Special Issue Information

Dear Colleagues,

In recent years, the field of biomedical image analysis has gained significant importance due to the growing availability of medical imaging data and the need for accurate and efficient analysis. Machine learning (ML) techniques have emerged as a promising solution to these challenges and are now widely used in applications such as computer-aided diagnosis, medical image segmentation, registration, retrieval, and classification. The leading conferences and journals of the medical image analysis community, such as MICCAI and IEEE TMI, have recently published influential research articles built on advanced ML techniques such as Transformers and AutoML. As data volumes continue to grow, such techniques are becoming increasingly important in biomedical image analysis.

This Special Issue aims to present the latest research developments in machine learning applied to biomedical image analysis. The topics of interest include but are not limited to:

  • Semantic segmentation of medical images;
  • Computer-aided detection and diagnosis;
  • Learning from weak or noisy annotations;
  • Transfer learning and domain adaptation;
  • Uncertainty estimation for medical diagnosis;
  • Unsupervised deep learning and representation learning;
  • Transformer-based medical image analysis methods;
  • Deep learning applications in radiology, pathology, endoscopy, dermatology, ophthalmology, and beyond.

Dr. Zhiming Luo
Dr. Sheng Lian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

11 pages, 3299 KiB  
Article
Applying Self-Supervised Learning to Image Quality Assessment in Chest CT Imaging
by Eléonore Pouget and Véronique Dedieu
Bioengineering 2024, 11(4), 335; https://doi.org/10.3390/bioengineering11040335 - 29 Mar 2024
Viewed by 644
Abstract
Many new reconstruction techniques have been deployed to allow low-dose CT examinations. Such reconstruction techniques exhibit nonlinear properties, which strengthen the need for a task-based measure of image quality. The Hotelling observer (HO) is the optimal linear observer and provides a lower bound of the Bayesian ideal observer detection performance. However, its computational complexity impedes its widespread practical usage. To address this issue, we proposed a self-supervised learning (SSL)-based model observer to provide accurate estimates of HO performance in very low-dose chest CT images. Our approach involved a two-stage model combining a convolutional denoising auto-encoder (CDAE) for feature extraction and dimensionality reduction and a support vector machine for classification. To evaluate this approach, we conducted signal detection tasks employing chest CT images with different noise structures generated by computer-based simulations. We compared this approach with two supervised learning-based methods: a single-layer neural network (SLNN) and a convolutional neural network (CNN). The results showed that the CDAE-based model was able to achieve similar detection performance to the HO. In addition, it outperformed both SLNN and CNN when a reduced number of training images was considered. The proposed approach holds promise for optimizing low-dose CT protocols across scanner platforms.
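The Hotelling observer mentioned above has a closed form: its template is the inverse covariance applied to the mean signal difference. The NumPy sketch below illustrates that general definition (our own illustration with assumed variable names, not the authors' implementation); the covariance estimation and inversion it requires is exactly the computational burden that motivates surrogate models such as the CDAE-based observer.

```python
import numpy as np

def hotelling_observer(signal_present, signal_absent):
    """Hotelling template w = S^-1 (s1 - s0) and detectability index d'.

    Each input is an (n_images, n_features) array of image data for the
    signal-present / signal-absent classes.
    """
    s1 = signal_present.mean(axis=0)
    s0 = signal_absent.mean(axis=0)
    # Average intra-class covariance (assumed equal across the two classes)
    cov = 0.5 * (np.cov(signal_present, rowvar=False)
                 + np.cov(signal_absent, rowvar=False))
    w = np.linalg.solve(cov, s1 - s0)   # Hotelling template
    d2 = (s1 - s0) @ w                  # squared detectability index
    return w, np.sqrt(d2)
```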
(This article belongs to the Special Issue Recent Advance of Machine Learning in Biomedical Image Analysis)

15 pages, 3001 KiB  
Article
Development of End-to-End Artificial Intelligence Models for Surgical Planning in Transforaminal Lumbar Interbody Fusion
by Anh Tuan Bui, Hieu Le, Tung Thanh Hoang, Giam Minh Trinh, Hao-Chiang Shao, Pei-I Tsai, Kuan-Jen Chen, Kevin Li-Chun Hsieh, E-Wen Huang, Ching-Chi Hsu, Mathew Mathew, Ching-Yu Lee, Po-Yao Wang, Tsung-Jen Huang and Meng-Huang Wu
Bioengineering 2024, 11(2), 164; https://doi.org/10.3390/bioengineering11020164 - 8 Feb 2024
Viewed by 1427
Abstract
Transforaminal lumbar interbody fusion (TLIF) is a commonly used technique for treating lumbar degenerative diseases. In this study, we developed a fully computer-supported pipeline to predict both the cage height and the degree of lumbar lordosis subtraction from the pelvic incidence (PI-LL) after TLIF surgery, utilizing preoperative X-ray images. The automated pipeline comprised two primary stages. First, the pretrained BiLuNet deep learning model was employed to extract essential features from X-ray images. Subsequently, five machine learning algorithms were trained using a five-fold cross-validation technique on a dataset of 311 patients to identify the optimal models to predict interbody cage height and postoperative PI-LL. LASSO regression and support vector regression demonstrated superior performance in predicting interbody cage height and postoperative PI-LL, respectively. For cage height prediction, the root mean square error (RMSE) was calculated as 1.01, and the model achieved the highest accuracy at a height of 12 mm, with exact prediction achieved in 54.43% (43/79) of cases. In most of the remaining cases, the prediction error of the model was within 1 mm. Additionally, the model demonstrated satisfactory performance in predicting PI-LL, with an RMSE of 5.19 and an accuracy of 0.81 for PI-LL stratification. In conclusion, our results indicate that machine learning models can reliably predict interbody cage height and postoperative PI-LL.
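The error metrics quoted above (RMSE, exact-match rate, and the within-1 mm rate) are straightforward to compute; a short generic sketch (illustrative metric code with assumed names, not the authors' pipeline):

```python
import math

def rmse(predicted, actual):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def exact_and_within(predicted, actual, tol=1.0):
    """Fraction of exactly correct predictions and of predictions
    within `tol` (e.g. 1 mm for cage height) of the true value."""
    n = len(actual)
    exact = sum(1 for p, a in zip(predicted, actual) if p == a)
    close = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tol)
    return exact / n, close / n
```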

17 pages, 12907 KiB  
Article
High-Speed and Accurate Diagnosis of Gastrointestinal Disease: Learning on Endoscopy Images Using Lightweight Transformer with Local Feature Attention
by Shibin Wu, Ruxin Zhang, Jiayi Yan, Chengquan Li, Qicai Liu, Liyang Wang and Haoqian Wang
Bioengineering 2023, 10(12), 1416; https://doi.org/10.3390/bioengineering10121416 - 13 Dec 2023
Viewed by 1130
Abstract
In response to the pressing need for robust disease diagnosis from gastrointestinal tract (GIT) endoscopic images, we proposed FLATer, a fast, lightweight, and highly accurate transformer-based model. FLATer consists of a residual block, a vision transformer module, and a spatial attention block, which concurrently focuses on local features and global attention, allowing it to leverage the capabilities of both convolutional neural networks (CNNs) and vision transformers (ViTs). We decomposed the classification of endoscopic images into two subtasks: a binary classification to discern between normal and pathological images and a further multi-class classification to categorize images into specific diseases, namely ulcerative colitis, polyps, and esophagitis. FLATer exhibited exceptional performance in these tasks, achieving 96.4% accuracy in binary classification and 99.7% accuracy in ternary classification, surpassing most existing models. Notably, FLATer maintained impressive performance when trained from scratch, underscoring its robustness. In addition to its high precision, FLATer boasted remarkable efficiency, reaching a throughput of 16.4k images per second, which positions it as a compelling candidate for rapid disease identification in clinical practice.
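The two-subtask decomposition described above can be sketched as a generic cascade, with hypothetical classifier callables standing in for FLATer's binary and multi-class heads (a sketch of the decision flow only, not the model itself):

```python
def cascade_diagnosis(image, is_pathological, classify_disease):
    """Two-stage pipeline: a binary screen first, then a multi-class
    diagnosis that runs only on images flagged as pathological.

    `is_pathological` and `classify_disease` are stand-ins for the
    binary and ternary classifiers described in the abstract.
    """
    if not is_pathological(image):
        return "normal"
    # Second stage: categorize the pathology (e.g. "ulcerative colitis",
    # "polyps", or "esophagitis")
    return classify_disease(image)
```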

20 pages, 5113 KiB  
Article
A Synthesizing Semantic Characteristics Lung Nodules Classification Method Based on 3D Convolutional Neural Network
by Yanan Dong, Xiaoqin Li, Yang Yang, Meng Wang and Bin Gao
Bioengineering 2023, 10(11), 1245; https://doi.org/10.3390/bioengineering10111245 - 25 Oct 2023
Cited by 1 | Viewed by 1239
Abstract
Early detection is crucial for the survival and recovery of lung cancer patients. Computer-aided diagnosis (CAD) systems can assist in the early diagnosis of lung cancer by providing decision support. While deep learning methods are increasingly being applied to CAD tasks, these models lack interpretability. In this paper, we propose a convolutional neural network model that combines semantic characteristics (SCCNN) to predict whether a given pulmonary nodule is malignant. The model synthesizes the advantages of multi-view, multi-task, and attention modules in order to fully simulate the actual diagnostic process of radiologists. Three-dimensional (3D) multi-view samples of lung nodules are extracted by a spatial sampling method. Meanwhile, semantic characteristics commonly used in radiology reports serve as an auxiliary task and help explain the model's predictions. The introduction of an attention module in the feature fusion stage improves the classification of lung nodules as benign or malignant. Our experimental results on the LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative) dataset show that the method achieves 95.45% accuracy and an area under the ROC (receiver operating characteristic) curve of 97.26%. These results show that the proposed method not only classifies nodules as benign or malignant on par with standard 3D CNN approaches but can also intuitively explain how the model makes predictions, which can assist clinical diagnosis.
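The attention-based fusion of multiple views mentioned above can be illustrated with a minimal pure-Python sketch: each view's feature vector receives a softmax weight from a learned score, and the fused representation is their weighted sum (a generic illustration with assumed names, not the SCCNN code):

```python
import math

def attention_fuse(view_features, scores):
    """Fuse per-view feature vectors with softmax attention weights.

    `view_features` is a list of equal-length feature vectors (one per
    view); `scores` are the corresponding attention logits.
    """
    # Numerically stable softmax over the view scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(view_features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, view_features))
             for i in range(dim)]
    return fused, weights
```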

16 pages, 5641 KiB  
Article
A Soft-Reference Breast Ultrasound Image Quality Assessment Method That Considers the Local Lesion Area
by Ziwen Wang, Yuxin Song, Baoliang Zhao, Zhaoming Zhong, Liang Yao, Faqin Lv, Bing Li and Ying Hu
Bioengineering 2023, 10(8), 940; https://doi.org/10.3390/bioengineering10080940 - 7 Aug 2023
Cited by 1 | Viewed by 1025
Abstract
The quality of breast ultrasound images has a significant impact on the accuracy of disease diagnosis. Existing image quality assessment (IQA) methods usually use pixel-level feature statistical methods or end-to-end deep learning methods, which focus on the global image quality but ignore the image quality of the lesion region. However, in clinical practice, doctors’ evaluation of ultrasound image quality relies more on the local area of the lesion, which determines the diagnostic value of ultrasound images. In this study, a global–local integrated IQA framework for breast ultrasound images was proposed to learn doctors’ clinical evaluation standards. In this study, 1285 breast ultrasound images were collected and scored by experienced doctors. After being classified as either images with lesions or images without lesions, they were evaluated using soft-reference IQA or bilinear CNN IQA, respectively. Experiments showed that for ultrasound images with lesions, our proposed soft-reference IQA achieved PLCC 0.8418 with doctors’ annotation, while the existing end-to-end deep learning method that did not consider the local lesion features only achieved PLCC 0.6606. Due to the accuracy improvement for the images with lesions, our proposed global–local integrated IQA framework had better performance in the IQA task than the existing end-to-end deep learning method, with PLCC improving from 0.8306 to 0.8851.
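The PLCC figures quoted above are Pearson linear correlation coefficients between the predicted quality scores and the doctors' annotations; the metric itself is simple to reproduce (a generic sketch with assumed names, not the authors' code):

```python
import math

def plcc(predicted, annotated):
    """Pearson linear correlation coefficient between predicted quality
    scores and reference (e.g. doctor-annotated) scores.

    Assumes neither sequence is constant (nonzero standard deviation).
    """
    n = len(predicted)
    mx = sum(predicted) / n
    my = sum(annotated) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(predicted, annotated))
    sx = math.sqrt(sum((a - mx) ** 2 for a in predicted))
    sy = math.sqrt(sum((b - my) ** 2 for b in annotated))
    return cov / (sx * sy)
```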

20 pages, 2322 KiB  
Article
Semi-Supervised Medical Image Segmentation with Co-Distribution Alignment
by Tao Wang, Zhongzheng Huang, Jiawei Wu, Yuanzheng Cai and Zuoyong Li
Bioengineering 2023, 10(7), 869; https://doi.org/10.3390/bioengineering10070869 - 21 Jul 2023
Viewed by 1600
Abstract
Medical image segmentation has made significant progress when a large amount of labeled data are available. However, annotating medical image segmentation datasets is expensive due to the requirement of professional skills. Additionally, classes are often unevenly distributed in medical images, which severely affects the classification performance on minority classes. To address these problems, this paper proposes Co-Distribution Alignment (Co-DA) for semi-supervised medical image segmentation. Specifically, Co-DA aligns marginal predictions on unlabeled data to marginal predictions on labeled data in a class-wise manner with two differently initialized models before using the pseudo-labels generated by one model to supervise the other. In addition, we design an over-expectation cross-entropy loss for filtering the unlabeled pixels to reduce noise in their pseudo-labels. Quantitative and qualitative experiments on three public datasets demonstrate that the proposed approach outperforms existing state-of-the-art semi-supervised medical image segmentation methods on both the 2D CaDIS dataset and the 3D LGE-MRI and ACDC datasets, achieving an mIoU of 0.8515 with only 24% labeled data on CaDIS, and a Dice score of 0.8824 and 0.8773 with only 20% data on LGE-MRI and ACDC, respectively.
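The mIoU and Dice figures reported above are standard segmentation metrics; their per-class computation can be sketched in a few lines of pure Python (an illustration with assumed names, not the paper's evaluation code):

```python
def dice_and_miou(pred, target, num_classes):
    """Mean Dice and mean IoU over classes, from two flat label lists.

    Classes absent from both prediction and target are skipped so they
    do not distort the averages.
    """
    dices, ious = [], []
    for c in range(num_classes):
        p = {i for i, v in enumerate(pred) if v == c}
        t = {i for i, v in enumerate(target) if v == c}
        inter = len(p & t)
        union = len(p | t)
        if union == 0:
            continue  # class absent from both masks
        ious.append(inter / union)
        dices.append(2 * inter / (len(p) + len(t)))
    return sum(dices) / len(dices), sum(ious) / len(ious)
```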

19 pages, 4807 KiB  
Article
Semantic Segmentation of Gastric Polyps in Endoscopic Images Based on Convolutional Neural Networks and an Integrated Evaluation Approach
by Tao Yan, Ye Ying Qin, Pak Kin Wong, Hao Ren, Chi Hong Wong, Liang Yao, Ying Hu, Cheok I Chan, Shan Gao and Pui Pun Chan
Bioengineering 2023, 10(7), 806; https://doi.org/10.3390/bioengineering10070806 - 5 Jul 2023
Cited by 3 | Viewed by 1609
Abstract
Convolutional neural networks (CNNs) have received increased attention in endoscopic image analysis due to their outstanding advantages. Clinically, some gastric polyps are related to gastric cancer, and accurate identification and timely removal are critical. CNN-based semantic segmentation can delineate each polyp region precisely, which is beneficial to endoscopists in the diagnosis and treatment of gastric polyps. At present, only a few studies have used CNNs to automatically diagnose gastric polyps, and studies on their semantic segmentation are lacking. Therefore, we contribute pioneering research on gastric polyp segmentation in endoscopic images based on CNNs. Seven classical semantic segmentation models, including U-Net, UNet++, DeepLabv3, DeepLabv3+, Pyramid Attention Network (PAN), LinkNet, and Multi-scale Attention Net (MA-Net), with ResNet50, MobileNetV2, or EfficientNet-B1 encoders, are constructed and compared on the collected dataset. Since selecting among several CNN models is difficult in a complex problem with conflicting multiple criteria, we propose an integrated evaluation approach that combines both subjective considerations and objective information to ascertain the optimal model. UNet++ with the MobileNetV2 encoder obtains the best scores in the proposed integrated evaluation method and is selected to build the automated polyp-segmentation system. This study found that the semantic segmentation model has high clinical value in the diagnosis of gastric polyps, and that the integrated evaluation approach provides an impartial and objective tool for selecting among numerous models. Our study can further advance the development of endoscopic gastrointestinal disease identification techniques, and the proposed evaluation technique has implications for mathematical model-based selection methods for clinical technologies.
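The integrated evaluation described above ranks candidate models across several conflicting criteria. As a simplified stand-in for that method, a weighted sum of min-max-normalized criterion scores illustrates the general idea (our own assumption for illustration, not the authors' exact scheme):

```python
def integrated_score(models, weights):
    """Pick the model with the highest weighted sum of min-max-normalized
    criterion scores.

    `models` maps a model name to a list of raw criterion values (higher
    is better); `weights` gives one importance weight per criterion.
    """
    names = list(models)
    totals = dict.fromkeys(names, 0.0)
    for j, w in enumerate(weights):
        col = [models[m][j] for m in names]
        lo, hi = min(col), max(col)
        for m in names:
            # If all models tie on a criterion, it contributes equally
            norm = (models[m][j] - lo) / (hi - lo) if hi > lo else 1.0
            totals[m] += w * norm
    return max(totals, key=totals.get)
```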

22 pages, 5975 KiB  
Article
Radiomics-Based Quality Control System for Automatic Cardiac Segmentation: A Feasibility Study
by Qiming Liu, Qifan Lu, Yezi Chai, Zhengyu Tao, Qizhen Wu, Meng Jiang and Jun Pu
Bioengineering 2023, 10(7), 791; https://doi.org/10.3390/bioengineering10070791 - 1 Jul 2023
Viewed by 1350
Abstract
Purpose: In the past decade, there has been a rapid increase in the development of automatic cardiac segmentation methods. However, the automatic quality control (QC) of these segmentation methods has received less attention. This study aims to address this gap by developing an automatic pipeline that incorporates DL-based cardiac segmentation and radiomics-based quality control. Methods: In the DL-based localization and segmentation part, the entire heart was first located and cropped. Then, the cropped images were further utilized for the segmentation of the right ventricle cavity (RVC), myocardium (MYO), and left ventricle cavity (LVC). As for the radiomics-based QC part, a training radiomics dataset was created with segmentation tasks of various quality. This dataset was used for feature extraction, selection, and QC model development. The model performance was then evaluated using both internal and external testing datasets. Results: In the internal testing dataset, the segmentation model demonstrated strong performance, with a dice similarity coefficient (DSC) of 0.954 for whole-heart segmentations. Images were then appropriately cropped to 160 × 160 pixels. The models also performed well for cardiac substructure segmentations. The DSC values were 0.863, 0.872, and 0.940 for RVC, MYO, and LVC for 2D masks and 0.928, 0.886, and 0.962 for RVC, MYO, and LVC for 3D masks with an attention-UNet. After feature selection with the radiomics dataset, we developed a series of models to predict the automatic segmentation quality and its DSC value for the RVC, MYO, and LVC structures. The mean absolute errors of our best prediction models were 0.060, 0.032, and 0.021 for 2D segmentations and 0.027, 0.017, and 0.011 for 3D segmentations, respectively. Additionally, the radiomics-based classification models demonstrated a high negative detection rate of >0.85 in all 2D groups. In the external dataset, the models showed similar results.
Conclusions: We developed a pipeline including cardiac substructure segmentation and QC at both the slice (2D) and subject (3D) levels. Our results demonstrate that the radiomics method possesses great potential for the automatic QC of cardiac segmentation.

15 pages, 2517 KiB  
Article
An Actinic Keratosis Auxiliary Diagnosis Method Based on an Enhanced MobileNet Model
by Shiyang Li, Chengquan Li, Qicai Liu, Yilin Pei, Liyang Wang and Zhu Shen
Bioengineering 2023, 10(6), 732; https://doi.org/10.3390/bioengineering10060732 - 19 Jun 2023
Cited by 1 | Viewed by 1631
Abstract
Actinic keratosis (AK) is a common precancerous skin lesion that can cause significant harm, and it is often confused with non-actinic keratoses (NAK). At present, the diagnosis of AK mainly depends on clinical experience and histopathology. Because AK is difficult to diagnose and easily confused with other diseases, this article aims to develop a convolutional neural network that can diagnose AK efficiently, accurately, and automatically. We improved the MobileNet model and, after data preprocessing, used the AK and NAK images in the HAM10000 dataset for training and testing; we also performed external independent testing on a separate dataset to validate our preprocessing approach and to demonstrate the performance and generalization capability of our model. We further compared common deep learning models in the field of skin diseases (including the original MobileNet, ResNet, GoogLeNet, EfficientNet, and Xception). The results show that the improved MobileNet achieved an accuracy of 0.9265 and an area under the ROC curve (AUC) of 0.97, the best among the comparison models. At the same time, it had the shortest training time: five-fold cross-validation on local devices took only 821.7 s in total. Local experiments show that the method proposed in this article has high accuracy and stability in diagnosing AK. Our method will help doctors diagnose AK more efficiently and accurately, allowing patients to receive timely diagnosis and treatment.
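The AUC reported above can be computed directly from classifier scores via the rank (Mann-Whitney) formulation: it is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (generic metric code with assumed names, not the paper's implementation):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via pairwise score comparisons.

    `scores_pos` / `scores_neg` hold classifier scores for positive
    (e.g. AK) and negative (e.g. NAK) cases; ties count as half a win.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```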
