Advanced Deep Learning for Biomedical Sensing and Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 5 November 2024 | Viewed by 3169

Special Issue Editors


Dr. Dong Xiao
Guest Editor
Fraunhofer Centre for Applied Photonics, Glasgow G1 1RD, UK
Interests: deep learning; biomedical image analysis; fluorescence lifetime imaging microscopy; super-resolution microscopy

Dr. Yahui Li
Guest Editor
Key Laboratory of Ultra-Fast Photoelectric Diagnostics Technology, Xi'an Institute of Optics and Precision Mechanics, Xi'an 710119, China
Interests: fluorescence lifetime imaging; ultrafast imaging; time-of-flight 3D imaging

Special Issue Information

Dear Colleagues,

The emergence of deep learning (DL) has sparked revolutionary transformations across a broad spectrum of biomedical imaging techniques by offering a distinctive data-driven approach. Within only a few years, DL has achieved great success and significantly improved imaging performance beyond instrumental limitations for various imaging modalities, such as fluorescence imaging, fluorescence lifetime imaging, super-resolution imaging, optical coherence tomography, Fourier ptychography, and imaging through scattering media. DL has also pioneered novel functionalities for enhanced image interpretation, such as image classification/segmentation and cross-modality image transformation. These impressive achievements have empowered researchers to gain deeper insights into biophysical phenomena and to develop potent tools for computer-aided diagnosis and surgical guidance.

This Special Issue aims to assemble recent research from the biomedical imaging and sensing communities on DL applications, innovative biomedical imaging techniques, and their novel applications. Its scope encompasses, but is not limited to, the following topics:

  • Deep learning algorithms for biomedical image and signal analysis, including fluorescence sensing and imaging, fluorescence lifetime sensing and imaging, optical coherence tomography, diffuse tomography, and endoscopy;
  • Deep learning for data analysis in biomedical sensors;
  • Biomedical image reconstruction, denoising, and resolution enhancement based on sensors;
  • Biomedical image and sensor signal classification and segmentation;
  • Biomedical imaging-assisted clinical diagnosis and surgical guidance;
  • Multi-modality image transformation based on sensors;
  • Object detection and localization based on sensors;
  • On-device deep learning for biomedical sensing and imaging.

Dr. Dong Xiao
Dr. Yahui Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Sensors is an international, peer-reviewed, open access, semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • biomedical optics
  • biomedical image reconstruction
  • biomedical image segmentation
  • computer-aided detection
  • image-guided surgery and characterization
  • in vivo microscopy
  • biomedical image analysis
  • fluorescence imaging
  • fluorescence lifetime imaging
  • optical coherence tomography
  • diffuse tomography

Published Papers (3 papers)


Research

13 pages, 2597 KiB  
Article
Multibranch Wavelet-Based Network for Image Demoiréing
by Chia-Hung Yeh, Chen Lo and Cheng-Han He
Sensors 2024, 24(9), 2762; https://doi.org/10.3390/s24092762 - 26 Apr 2024
Viewed by 333
Abstract
Moiré patterns caused by aliasing between the camera's sensor and the monitor can severely degrade image quality. Image demoiréing is a multi-task image restoration problem that includes both texture and color restoration. This paper proposes a new multibranch wavelet-based image demoiréing network (MBWDN) for moiré pattern removal. Moiré images are separated into sub-band images using wavelet decomposition, and demoiréing is achieved through the different learning strategies of two networks: a moiré removal network (MRN) and a detail-enhanced moiré removal network (DMRN). MRN removes moiré patterns from low-frequency images while preserving the structure of smooth areas; DMRN simultaneously removes high-frequency moiré patterns and enhances fine details. Wavelet decomposition replaces traditional upsampling and max pooling, effectively increasing the network's receptive field without losing spatial information. By decomposing the moiré image into different levels with the wavelet transform, the feature learning results of each branch are fully preserved and fed into the next branch, avoiding distortions in the recovered image. Thanks to the separation of high- and low-frequency images during feature training, the two networks achieve impressive moiré removal. Extensive experiments on public datasets show that the proposed method performs well both quantitatively and qualitatively compared with state-of-the-art approaches.
(This article belongs to the Special Issue Advanced Deep Learning for Biomedical Sensing and Imaging)
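As a concrete illustration of the wavelet-split, two-branch design this abstract describes, the following minimal PyTorch/PyWavelets sketch decomposes an image into low- and high-frequency sub-bands, passes each through its own small residual branch (standing in for MRN and DMRN), and reassembles the result with the inverse transform. The Haar wavelet, the layer sizes, and the single decomposition level are illustrative assumptions, not the authors' MBWDN architecture.

```python
# A minimal sketch, assuming PyTorch and PyWavelets are installed.
import numpy as np
import pywt
import torch
import torch.nn as nn

def wavelet_split(image: np.ndarray):
    """One-level 2D DWT: returns the low-frequency band and the three high-frequency bands."""
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar")
    return ll.astype(np.float32), np.stack([lh, hl, hh]).astype(np.float32)

class Branch(nn.Module):
    """A small residual conv branch; stands in for MRN (low-freq) or DMRN (high-freq)."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x) + x  # residual: predict the moiré-free sub-band

image = np.random.rand(256, 256).astype(np.float32)  # stand-in for a moiré-degraded image
ll, highs = wavelet_split(image)
mrn, dmrn = Branch(1), Branch(3)
ll_clean = mrn(torch.from_numpy(ll)[None, None])      # low-frequency branch
highs_clean = dmrn(torch.from_numpy(highs)[None])     # high-frequency branch

# The inverse DWT reassembles the demoiréd image from the cleaned sub-bands.
lh, hl, hh = highs_clean[0].detach().numpy()
restored = pywt.idwt2((ll_clean[0, 0].detach().numpy(), (lh, hl, hh)), "haar")
print(restored.shape)  # (256, 256)
```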

31 pages, 7204 KiB  
Article
COVID-19 Hierarchical Classification Using a Deep Learning Multi-Modal
by Albatoul S. Althenayan, Shada A. AlSalamah, Sherin Aly, Thamer Nouh, Bassam Mahboub, Laila Salameh, Metab Alkubeyyer and Abdulrahman Mirza
Sensors 2024, 24(8), 2641; https://doi.org/10.3390/s24082641 - 20 Apr 2024
Viewed by 557
Abstract
Coronavirus disease 2019 (COVID-19), originating in China, spread rapidly worldwide. Physicians must examine infected patients and make timely decisions to isolate them. However, completing these processes is difficult due to limited time and availability of expert radiologists, as well as limitations of the reverse-transcription polymerase chain reaction (RT-PCR) method. Deep learning, a sophisticated machine learning technique, leverages radiological imaging modalities for disease diagnosis and image classification tasks. Previous research on COVID-19 classification has encountered several limitations, including binary classification methods, single-feature modalities, small public datasets, and reliance on CT diagnostic processes; studies have also often used a flat structure, disregarding the hierarchical structure of pneumonia classification. This study aims to overcome these limitations by identifying pneumonia caused by COVID-19, distinguishing it from other types of pneumonia and healthy lungs using chest X-ray (CXR) images and related tabular medical data, and demonstrating the value of incorporating tabular medical data in achieving more accurate diagnoses. ResNet-based and VGG-based pre-trained convolutional neural network (CNN) models were employed to extract features, which were then combined using early fusion for the classification of eight distinct classes. The hierarchical structure of pneumonia classification was leveraged within our approach to achieve improved classification outcomes. Since imbalanced datasets are common in this field, several variants of generative adversarial networks (GANs) were used to generate synthetic data. The proposed approach, tested on a private dataset of 4523 patients, achieved a macro-average F1-score of 95.9% and an F1-score of 87.5% for COVID-19 identification using a ResNet-based structure. In conclusion, this study presents an accurate multi-modal deep learning model that diagnoses COVID-19 and differentiates it from other kinds of pneumonia and normal lungs, enhancing the radiological diagnostic process.
(This article belongs to the Special Issue Advanced Deep Learning for Biomedical Sensing and Imaging)
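For readers unfamiliar with early fusion, the sketch below shows the general pattern this abstract describes: image features from a pre-trained CNN backbone are concatenated with an encoded tabular feature vector before a shared classification head. The ResNet-18 backbone, the 10 tabular inputs, and the layer sizes are illustrative assumptions; the eight-class head mirrors the abstract, but this is not the authors' exact model.

```python
# A hedged sketch of early fusion, assuming PyTorch and torchvision (>= 0.13).
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    """Concatenates CNN image features with encoded tabular features before one head."""
    def __init__(self, n_tabular: int = 10, n_classes: int = 8):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pre-trained weights would be loaded in practice
        backbone.fc = nn.Identity()               # expose the 512-dim pooled image features
        self.backbone = backbone
        self.tabular = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, tabular):
        # Early fusion: join the two feature vectors, then classify jointly.
        feats = torch.cat([self.backbone(image), self.tabular(tabular)], dim=1)
        return self.head(feats)

model = FusionClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))  # batch of 2 CXRs + tabular rows
print(logits.shape)  # torch.Size([2, 8])
```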

21 pages, 7189 KiB  
Article
Image Reconstruction Using Supervised Learning in Wearable Electrical Impedance Tomography of the Thorax
by Mikhail Ivanenko, Waldemar T. Smolik, Damian Wanta, Mateusz Midura, Przemysław Wróblewski, Xiaohan Hou and Xiaoheng Yan
Sensors 2023, 23(18), 7774; https://doi.org/10.3390/s23187774 - 9 Sep 2023
Cited by 2 | Viewed by 1669
Abstract
Electrical impedance tomography (EIT) is a non-invasive technique for visualizing the internal structure of the human body. Capacitively coupled electrical impedance tomography (CCEIT) is a new contactless EIT technique that can potentially be used as a wearable device. Recent studies have shown that machine learning-based approaches are very promising for EIT image reconstruction. Most studies concern models containing up to 22 electrodes and focus on different artificial neural network architectures, from simple shallow networks to complex convolutional networks; the use of convolutional networks for image reconstruction with a higher number of electrodes requires further investigation. In this work, two different artificial network architectures were used for CCEIT image reconstruction: a fully connected deep neural network and a conditional generative adversarial network (cGAN). The training dataset was generated by numerical simulation of a thorax phantom with healthy and illness-affected lungs. Three conditions (pneumothorax, pleural effusion, and hydropneumothorax) were modeled using the electrical properties of the tissues. The thorax phantom included the heart, aorta, spine, and lungs, and a sensor with 32 area electrodes was used in the numerical model. The custom-designed ECTsim toolbox for MATLAB was used to solve the forward problem and simulate measurements. Two artificial neural networks were trained with supervision for image reconstruction. Reconstruction quality was compared between these networks and one-step algebraic reconstruction methods such as linear back projection and pseudoinverse with Tikhonov regularization, using pixel-to-pixel metrics such as root-mean-square error, structural similarity index, 2D correlation coefficient, and peak signal-to-noise ratio. Additionally, diagnostic value measured by the ROC AUC metric was used to assess image quality. The results showed that obtaining information about regional lung function (regions affected by pneumothorax or pleural effusion) is possible using image reconstruction based on supervised learning and deep neural networks in EIT. The results obtained using the cGAN are markedly better than those obtained using a fully connected network, especially for noisy measurement data; however, diagnostic value estimation showed that even the algebraic methods yield satisfactory results.
(This article belongs to the Special Issue Advanced Deep Learning for Biomedical Sensing and Imaging)
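To make the comparison in this abstract concrete, the sketch below contrasts the two reconstruction routes: a one-step pseudoinverse with Tikhonov regularization and a fully connected network mapping measurements to pixel values. The 496-element measurement vector (32 electrodes, 32·31/2 independent pairs), the 64×64 image grid, and the layer sizes are illustrative assumptions, not the paper's exact setup, and the sensitivity matrix here is random rather than simulated.

```python
# A minimal sketch, assuming PyTorch and NumPy; all sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn

n_meas, n_pix = 496, 64 * 64  # 32 electrodes -> 496 independent pairs; 64x64 image grid

def tikhonov_reconstruct(J: np.ndarray, y: np.ndarray, alpha: float = 1e-3) -> np.ndarray:
    """One-step algebraic route: x = (J^T J + alpha*I)^{-1} J^T y for sensitivity matrix J."""
    return np.linalg.solve(J.T @ J + alpha * np.eye(J.shape[1]), J.T @ y)

class FCReconstructor(nn.Module):
    """Learned route: a fully connected network mapping measurements to pixel values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_meas, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_pix),
        )

    def forward(self, y):
        return self.net(y).view(-1, 64, 64)

J = np.random.randn(n_meas, n_pix)  # stand-in for the simulated sensitivity matrix
y = np.random.randn(n_meas)         # stand-in measurement vector
x_alg = tikhonov_reconstruct(J, y).reshape(64, 64)
x_dl = FCReconstructor()(torch.from_numpy(y).float()[None])
print(x_alg.shape, x_dl.shape)      # (64, 64) torch.Size([1, 64, 64])
```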
