Vision- and Image-Based Biomedical Diagnostics—2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 30 October 2025 | Viewed by 8123

Special Issue Editors


Prof. Dr. Alexander Wong
Guest Editor
Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Interests: computer vision; medical imaging; artificial intelligence; machine learning; deep learning

Dr. Pengcheng Xi
Guest Editor
National Research Council Canada, 1200 Montréal Road, Ottawa, ON K1A 0R6, Canada
Interests: machine learning; computer vision; deep learning; signal processing; geometry processing

Dr. Ashkan Ebadi
Guest Editor
National Research Council Canada, 222 College Street, Toronto, ON M5T 3J1, Canada
Interests: artificial intelligence; machine learning; deep learning; network analysis; computer vision; text analytics

Special Issue Information

Dear Colleagues,

Recent advances in biomedical imaging have not only significantly increased visual fidelity but also enabled insights into disease that were previously impossible to capture, supporting improved diagnostics and clinical decision support. In parallel, significant advances have been made in image analysis and artificial intelligence techniques for extracting critical visual information from biomedical imaging data and for leveraging that information to predict the presence and severity of disease in support of clinical diagnosis and decision-making. The goal of this Special Issue is to promote recent technical advances in all relevant aspects, ranging from improved biomedical imaging systems for diagnosis to image analysis techniques and artificial intelligence algorithms for diagnostic prediction. Topics of interest include, but are not limited to, the following:

  • Biomedical imaging systems and techniques for diagnosis purposes;
  • Machine learning and AI algorithms for the diagnosis and prognosis of diseases;
  • Image analysis algorithms (segmentation, region-of-interest detection, feature extraction, etc.) for supporting the diagnosis and prognosis of diseases;
  • Intelligent systems for vision- and image-based biomedical diagnostics;
  • Multimodal fusion methods for combining information from different imaging systems and/or a mix of imaging and clinical metadata.

Prof. Dr. Alexander Wong
Dr. Pengcheng Xi
Dr. Ashkan Ebadi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical
  • medical
  • imaging
  • image
  • vision
  • diagnostics
  • prognosis
  • artificial intelligence
  • machine learning
  • imaging modalities
  • signal processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7)


Research

17 pages, 3815 KB  
Article
LMeRAN: Label Masking-Enhanced Residual Attention Network for Multi-Label Chest X-Ray Disease Aided Diagnosis
by Hongping Fu, Chao Song, Xiaolong Qu, Dongmei Li and Lei Zhang
Sensors 2025, 25(18), 5676; https://doi.org/10.3390/s25185676 - 11 Sep 2025
Viewed by 529
Abstract
Chest X-ray (CXR) imaging is essential for diagnosing thoracic diseases, and computer-aided diagnosis (CAD) systems have made substantial progress in automating the interpretation of CXR images. However, some existing methods often overemphasize local features while neglecting global context, limiting their ability to capture the broader pathological landscape. Moreover, most methods fail to model label correlations, leading to insufficient utilization of prior knowledge. To address these limitations, we propose a novel multi-label CXR image classification framework, termed the Label Masking-enhanced Residual Attention Network (LMeRAN). Specifically, LMeRAN introduces an original label-specific residual attention to capture disease-relevant information effectively. By integrating multi-head self-attention with average pooling, the model dynamically assigns higher weights to critical lesion areas while retaining global contextual features. In addition, LMeRAN employs a label mask training strategy, enabling the model to learn complex label dependencies from partially available label information. Experiments conducted on the large-scale public dataset ChestX-ray14 demonstrate that LMeRAN achieves the highest mean AUC value of 0.825, resulting in an increase of 3.1% to 8.0% over several advanced baselines. To enhance interpretability, we also visualize the lesion regions relied upon by the model for classification, providing clearer insights into the model’s decision-making process.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
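To make the label-mask training strategy above concrete, here is a minimal PyTorch sketch: a random subset of ground-truth labels is hidden, the remaining labels are fed to the model as known states, and the loss is computed only on the hidden labels so the model must learn label dependencies. All names are hypothetical, and this is only a sketch of the general idea, not the authors' LMeRAN implementation.

```python
# Hypothetical sketch of label-mask training; not the authors' LMeRAN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LABELS = 14  # ChestX-ray14 pathology classes

class MaskedLabelModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone                      # any CNN mapping images -> (B, feat_dim)
        self.state_embed = nn.Embedding(3, feat_dim)  # 0=masked, 1=known-neg, 2=known-pos
        self.classifier = nn.Linear(2 * feat_dim, NUM_LABELS)

    def forward(self, images, label_states):
        img_feat = self.backbone(images)                   # (B, feat_dim)
        lbl_feat = self.state_embed(label_states).mean(1)  # pool over the label slots
        return self.classifier(torch.cat([img_feat, lbl_feat], dim=-1))

def train_step(model, images, labels, mask_ratio=0.5):
    # Hide a random subset of labels; expose the rest as known states.
    mask = torch.rand(labels.shape) < mask_ratio
    states = torch.where(mask, torch.zeros_like(labels), labels + 1)
    logits = model(images, states.long())
    # Supervise only the masked labels, forcing the model to infer them
    # from the image plus the visible label context.
    return F.binary_cross_entropy_with_logits(logits[mask], labels[mask].float())
```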

18 pages, 3368 KB  
Article
Segmentation-Assisted Fusion-Based Classification for Automated CXR Image Analysis
by Shilu Kang, Dongfang Li, Jiaxin Xu, Aokun Mei and Hua Huo
Sensors 2025, 25(15), 4580; https://doi.org/10.3390/s25154580 - 24 Jul 2025
Viewed by 701
Abstract
Accurate classification of chest X-ray (CXR) images is crucial for diagnosing lung diseases in medical imaging. Existing deep learning models for CXR image classification face challenges in distinguishing non-lung features. In this work, we propose a new segmentation-assisted fusion-based classification method. The method involves two stages: first, we use a lightweight segmentation model, Partial Convolutional Segmentation Network (PCSNet) designed based on an encoder–decoder architecture, to accurately obtain lung masks from CXR images. Then, a fusion of the masked CXR image with the original image enables classification using the improved lightweight ShuffleNetV2 model. The proposed method is trained and evaluated on segmentation datasets including the Montgomery County Dataset (MC) and Shenzhen Hospital Dataset (SH), and classification datasets such as Chest X-Ray Images for Pneumonia (CXIP) and COVIDx. Compared with seven segmentation models (U-Net, Attention-Net, SegNet, FPNNet, DANet, DMNet, and SETR), five classification models (ResNet34, ResNet50, DenseNet121, Swin-Transforms, and ShuffleNetV2), and state-of-the-art methods, our PCSNet model achieved high segmentation performance on CXR images. Compared to the state-of-the-art Attention-Net model, the accuracy of PCSNet increased by 0.19% (98.94% vs. 98.75%), and the boundary accuracy improved by 0.3% (97.86% vs. 97.56%), while requiring 62% fewer parameters. For pneumonia classification using the CXIP dataset, the proposed strategy outperforms the current best model by 0.14% in accuracy (98.55% vs. 98.41%). For COVID-19 classification with the COVIDx dataset, the model reached an accuracy of 97.50%, the absolute improvement in accuracy compared to CovXNet was 0.1%, and clinical metrics demonstrate more significant gains: specificity increased from 94.7% to 99.5%. These results highlight the model’s effectiveness in medical image analysis, demonstrating clinically meaningful improvements over state-of-the-art approaches.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
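At inference time, the two-stage pipeline described in the abstract reduces to roughly the following minimal sketch (hypothetical function and model names; not the authors' PCSNet/ShuffleNetV2 code): segment the lungs, mask the input, and classify a channel-wise fusion of the masked and original images.

```python
# Minimal sketch of segmentation-assisted fusion classification (illustrative).
import torch

def segmentation_assisted_classify(cxr, seg_model, cls_model, threshold=0.5):
    # Stage 1: predict a binary lung mask from the chest X-ray.
    with torch.no_grad():
        lung_prob = torch.sigmoid(seg_model(cxr))   # (B, 1, H, W)
    lung_mask = (lung_prob > threshold).float()

    # Stage 2: fuse the lung-focused image with the original so the classifier
    # sees both the region of interest and global context.
    masked_cxr = cxr * lung_mask
    fused = torch.cat([cxr, masked_cxr], dim=1)     # channel-wise concatenation
    return cls_model(fused)                          # disease logits
```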

21 pages, 8002 KB  
Article
Simulating Pulp Vitality Measurements via Digital Optical Twins: Influence of Dental Components on Spectral Transmission
by David Hevisov, Thomas Peter Ertl and Alwin Kienle
Sensors 2025, 25(10), 3217; https://doi.org/10.3390/s25103217 - 20 May 2025
Viewed by 636
Abstract
Optical diagnostic techniques represent an attractive complement to conventional pulp vitality tests, as they can provide direct information about the vascular status of the pulp. However, the complex, multi-layered structure of a tooth significantly influences the detected signal and, ultimately, the diagnostic decision. Despite this, the impact of the various dental components on light propagation within the tooth, particularly in the context of diagnostic applications, remains insufficiently studied. To help bridge this gap and potentially enhance diagnostic accuracy, this study employs digital optical twins based on the Monte Carlo method. Using incisor and molar models as examples, the influence of tooth and pulp geometry, blood concentration, and pulp composition, such as the possible presence of pus, on spectrally resolved transmission signals is demonstrated. Furthermore, it is shown that gingival blood absorption can significantly overlay the pulpal measurement signal, posing a substantial risk of misdiagnosis. Strategies such as shifting the illumination and detection axes, as well as time-gated detection, are explored as potential approaches to suppress interfering signals, particularly those originating from the gingiva.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
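For readers unfamiliar with the underlying technique, the toy sketch below shows how Monte Carlo photon transport estimates transmission through a layered medium. It is a 1-D, forward-only simplification that ignores the angular scattering a real simulation tracks, and the optical coefficients are placeholders, not measured enamel/dentin/pulp values or the study's 3-D dental models.

```python
# Toy 1-D Monte Carlo photon transmission through a layered medium (illustrative).
import math
import random

def layer_at(layers, depth):
    # Return (mu_a, mu_s) of the layer containing this depth.
    z = 0.0
    for thickness, mu_a, mu_s in layers:
        z += thickness
        if depth < z:
            return mu_a, mu_s
    return layers[-1][1], layers[-1][2]

def transmission(layers, n_photons=50_000):
    """layers: list of (thickness_mm, mu_a, mu_s) with coefficients in 1/mm."""
    total_depth = sum(t for t, _, _ in layers)
    transmitted = 0.0
    for _ in range(n_photons):
        depth, weight = 0.0, 1.0
        while weight > 1e-4:
            mu_a, mu_s = layer_at(layers, depth)
            mu_t = mu_a + mu_s
            depth += -math.log(1.0 - random.random()) / mu_t  # sample free path
            if depth >= total_depth:        # photon exits before interacting
                transmitted += weight
                break
            weight *= mu_s / mu_t           # survives the interaction (albedo)
    return transmitted / n_photons

# Placeholder three-layer stack: (thickness mm, mu_a 1/mm, mu_s 1/mm).
print(transmission([(1.0, 0.05, 6.0), (3.0, 0.08, 8.0), (2.0, 0.3, 1.0)]))
```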

17 pages, 7271 KB  
Article
A Multitask CNN for Near-Infrared Probe: Enhanced Real-Time Breast Cancer Imaging
by Maryam Momtahen and Farid Golnaraghi
Sensors 2025, 25(8), 2349; https://doi.org/10.3390/s25082349 - 8 Apr 2025
Viewed by 784
Abstract
The early detection of breast cancer, particularly in dense breast tissues, faces significant challenges with traditional imaging techniques such as mammography. This study utilizes a Near-infrared Scan (NIRscan) probe and an advanced convolutional neural network (CNN) model to enhance tumor localization accuracy and efficiency. CNN processed data from 133 breast phantoms into 266 samples using data augmentation techniques, such as mirroring. The model significantly improved image reconstruction, achieving an RMSE of 0.0624, MAE of 0.0360, R2 of 0.9704, and Fuzzy Jaccard Index of 0.9121. Subsequently, we introduced a multitask CNN that reconstructs images and classifies them based on depth, length, and health status, further enhancing its diagnostic capabilities. This multitasking approach leverages the robust feature extraction capabilities of CNNs to perform complex tasks simultaneously, thereby improving the model’s efficiency and accuracy. It achieved exemplary classification accuracies in depth (100%), length (92.86%), and health status, with a perfect F1 Score. These results highlight the promise of NIRscan technology, in combination with a multitask CNN model, as a supportive tool for improving real-time breast cancer screening and diagnostic workflows.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
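A generic sketch of the multitask pattern described above: one shared encoder feeding an image-reconstruction decoder plus per-attribute classification heads, typically trained on a weighted sum of the task losses. Layer sizes and names are illustrative assumptions, not the authors' NIRscan architecture.

```python
# Schematic multitask CNN (shared encoder, multiple heads); illustrative only.
import torch.nn as nn

class MultitaskNIRNet(nn.Module):
    def __init__(self, n_depth_classes: int, n_length_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(32, 1, 3, padding=1)  # image reconstruction head
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.depth_head = nn.Linear(32, n_depth_classes)
        self.length_head = nn.Linear(32, n_length_classes)
        self.health_head = nn.Linear(32, 2)            # healthy vs. tumor

    def forward(self, x):
        feat = self.encoder(x)               # shared features for all tasks
        vec = self.pool(feat).flatten(1)     # (B, 32) global descriptor
        return (self.decoder(feat), self.depth_head(vec),
                self.length_head(vec), self.health_head(vec))
```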

12 pages, 5540 KB  
Article
Automatic Image Registration Provides Superior Accuracy Compared with Surface Matching in Cranial Navigation
by Henrik Frisk, Margret Jensdottir, Luisa Coronado, Markus Conrad, Susanne Hager, Lisa Arvidsson, Jiri Bartek, Jr., Gustav Burström, Victor Gabriel El-Hajj, Erik Edström, Adrian Elmi-Terander and Oscar Persson
Sensors 2024, 24(22), 7341; https://doi.org/10.3390/s24227341 - 18 Nov 2024
Cited by 1 | Viewed by 2311
Abstract
Objective: The precision of neuronavigation systems relies on the correct registration of the patient’s position in space and aligning it with radiological 3D imaging data. Registration is usually performed by the acquisition of anatomical landmarks or surface matching based on facial features. Another possibility is automatic image registration using intraoperative imaging. This could provide better accuracy, especially in rotated or prone positions where the other methods may be difficult to perform. The aim of this study was to validate automatic image registration (AIR) using intraoperative cone-beam computed tomography (CBCT) for cranial neurosurgical procedures and compare the registration accuracy to the traditional surface matching (SM) registration method based on preoperative MRI. The preservation of navigation accuracy throughout the surgery was also investigated. Methods: Adult patients undergoing intracranial tumor surgery were enrolled after consent. A standard SM registration was performed, and reference points were acquired. An AIR was then performed, and the same reference points were acquired again. Accuracy was calculated based on the referenced and acquired coordinates of the points for each registration method. The reference points were acquired before and after draping and at the end of the procedure to assess the persistency of accuracy. Results: In total, 22 patients were included. The mean accuracy was 6.6 ± 3.1 mm for SM registration and 1.0 ± 0.3 mm for AIR. The AIR was superior to the SM registration (p < 0.0001), with a mean improvement in accuracy of 5.58 mm (3.71–7.44 mm 99% CI). The mean accuracy for the AIR registration pre-drape was 1.0 ± 0.3 mm. The corresponding accuracies post-drape and post-resection were 2.9 ± 4.6 mm and 4.1 ± 4.9 mm, respectively. Although a loss of accuracy was identified between the preoperative and end-of-procedure measurements, there was no statistically significant decline during surgery. Conclusions: AIR for cranial neuronavigation consistently delivered greater accuracy than SM and should be considered the new gold standard for patient registration in cranial neuronavigation. If intraoperative imaging is a limited resource, AIR should be prioritized in rotated or prone position procedures, where the benefits are the greatest.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
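The accuracy metric used here, the Euclidean distance between acquired and referenced point coordinates summarized as mean ± SD, can be sketched in a few lines. The coordinate arrays below are synthetic stand-ins, not study data.

```python
# Sketch of per-point registration error (target registration error style).
import numpy as np

def registration_error(acquired_mm: np.ndarray, reference_mm: np.ndarray):
    """Both arrays are (n_points, 3) in millimetres; returns per-point errors."""
    return np.linalg.norm(acquired_mm - reference_mm, axis=1)

rng = np.random.default_rng(0)
errors = registration_error(rng.random((8, 3)), rng.random((8, 3)))
print(f"accuracy: {errors.mean():.1f} ± {errors.std(ddof=1):.1f} mm")
```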

26 pages, 3672 KB  
Article
Development of a Cost-Efficient and Glaucoma-Specialized OD/OC Segmentation Model for Varying Clinical Scenarios
by Kai Liu and Jicong Zhang
Sensors 2024, 24(22), 7255; https://doi.org/10.3390/s24227255 - 13 Nov 2024
Viewed by 1000
Abstract
Most existing optic disc (OD) and cup (OC) segmentation models are biased to the dominant size and easy class (normal class), resulting in suboptimal performances on glaucoma-confirmed samples. Thus, these models are not optimal choices for assisting in tracking glaucoma progression and prognosis. Moreover, fully supervised models employing annotated glaucoma samples can achieve superior performances, although restricted by the high cost of collecting and annotating the glaucoma samples. Therefore, in this paper, we are dedicated to developing a glaucoma-specialized model by exploiting low-cost annotated normal fundus images, simultaneously adapting various common scenarios in clinical practice. We employ a contrastive learning and domain adaptation-based model by exploiting shared knowledge from normal samples. To capture glaucoma-related features, we utilize a Gram matrix to encode style information and the domain adaptation strategy to encode domain information, followed by narrowing the style and domain gaps between normal and glaucoma samples by contrastive and adversarial learning, respectively. To validate the efficacy of our proposed model, we conducted experiments utilizing two public datasets to mimic various common scenarios. The results demonstrate the superior performance of our proposed model across multi-scenarios, showcasing its proficiency in both the segmentation- and glaucoma-related metrics. In summary, our study illustrates a concerted effort to target confirmed glaucoma samples, mitigating the inherent bias issue in most existing models. Moreover, we propose an annotation-efficient strategy that exploits low-cost, normal-labeled fundus samples, mitigating the economic- and labor-related burdens by employing a fully supervised strategy. Simultaneously, our approach demonstrates its adaptability across various scenarios, highlighting its potential utility in both assisting in the monitoring of glaucoma progression and assessing glaucoma prognosis.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
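A common formulation of the Gram-matrix style encoding mentioned in the abstract is sketched below; the normalization shown is one standard convention and not necessarily the authors' exact pipeline.

```python
# Generic Gram-matrix style encoding over CNN feature maps (illustrative).
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) feature maps -> (B, C, C) Gram matrices whose
    entries are channel-wise feature correlations ("style" statistics)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# A style gap between normal and glaucoma features can then be penalized,
# e.g. via torch.norm(gram_matrix(f_normal) - gram_matrix(f_glaucoma)).
```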

9 pages, 16624 KB  
Article
Double-Condensing Attention Condenser: Leveraging Attention in Deep Learning to Detect Skin Cancer from Skin Lesion Images
by Chi-en Amy Tai, Elizabeth Janes, Chris Czarnecki and Alexander Wong
Sensors 2024, 24(22), 7231; https://doi.org/10.3390/s24227231 - 12 Nov 2024
Cited by 3 | Viewed by 1505
Abstract
Skin cancer is the most common type of cancer in the United States and is estimated to affect one in five Americans. Recent advances have demonstrated strong performance on skin cancer detection, as exemplified by state of the art performance in the SIIM-ISIC Melanoma Classification Challenge; however, these solutions leverage ensembles of complex deep neural architectures requiring immense storage and computation costs, and therefore may not be tractable. A recent movement for TinyML applications is integrating Double-Condensing Attention Condensers (DC-AC) into a self-attention neural network backbone architecture to allow for faster and more efficient computation. This paper explores leveraging an efficient self-attention structure to detect skin cancer in skin lesion images and introduces a deep neural network design with DC-AC customized for skin cancer detection from skin lesion images. We demonstrate that our approach with only 1.6 million parameters and 0.32 GFLOPs achieves better performance compared to traditional architecture designs as it obtains an area under the ROC curve of 0.90 on the public ISIC 2020 test set and 0.89 on the private ISIC test set, over 0.13 above the best Cancer-Net SCa network architecture design. The final model is publicly available as a part of a global open-source initiative dedicated to accelerating advancement in machine learning to aid clinicians in the fight against cancer. Future work of this research includes iterating on the design of the selected network architecture and refining the approach to generalize to other forms of cancer.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
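As a rough illustration of the condensing idea, the sketch below spatially condenses token embeddings before self-attention to cut computation, then expands them back through a residual connection. The actual Double-Condensing Attention Condenser design is more elaborate, so treat this only as a generic simplification.

```python
# Highly simplified attention-condenser block; not the DC-AC design itself.
import torch.nn as nn

class CondensedAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4, heads: int = 4):
        super().__init__()
        assert channels % heads == 0, "embed dim must divide across heads"
        self.r = reduction
        self.condense = nn.MaxPool2d(reduction)            # shrink spatial extent
        self.attend = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.expand = nn.Upsample(scale_factor=reduction)  # restore resolution

    def forward(self, x):  # x: (B, C, H, W) with H, W divisible by reduction
        b, c, h, w = x.shape
        z = self.condense(x).flatten(2).transpose(1, 2)    # (B, HW/r^2, C) tokens
        z, _ = self.attend(z, z, z)                        # attention on fewer tokens
        z = z.transpose(1, 2).reshape(b, c, h // self.r, w // self.r)
        return x + self.expand(z)                          # residual merge
```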
