Image Analysis and Biomedical Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 13203

Special Issue Editors


Dr. Robert Martí
Guest Editor
Computer Vision and Robotics Institute (VICOROB), University of Girona, 17004 Girona, Spain
Interests: medical image analysis; machine learning; multimodal imaging; computer-aided detection and diagnosis

Dr. Joan Martí Bonmatí
Guest Editor
Computer Vision and Robotics Institute (VICOROB), University of Girona, 17004 Girona, Spain
Interests: computer vision; image analysis; computer-aided detection and diagnosis

Special Issue Information

Dear Colleagues,

Recent advances in medical image analysis, mostly driven by deep learning developments, have had a great impact on improving the state of the art in disease detection, diagnosis, monitoring, and prognosis, in both pre-clinical and clinical scenarios. In addition, research in medical imaging sensors and health monitoring technologies has seen important growth due to both hardware and software advancements, powered by big data, artificial intelligence, and virtual clinical trials.

This Special Issue aims to cover the link between image analysis and biomedical sensors, including medical physics, acquisition, and reconstruction aspects. This could include novel proposals, methods, and algorithms focused on developing new biomarkers and improving disease detection and diagnosis, as well as advances in prediction and prognosis. Clinical evaluation aspects such as robustness and generalisation across different sensors and vendors are regarded as a current limitation of many image analysis systems. Therefore, advances towards these aspects, including the development and evaluation of virtual clinical trials, are encouraged to be covered in this Special Issue.

The core themes of this Special Issue include, but are not limited to:

  • Advances in image analysis for disease detection, diagnosis, and/or monitoring, including, but not limited to, MRI, X-ray, PET, and ultrasound.
  • Advances in multimodal systems for diagnosis, prognosis, treatment, and/or prevention.
  • Pre-clinical, clinical, and in silico (virtual clinical trial) applications of novel image analysis and/or biomedical sensing technologies, including, but not limited to, cancer imaging, neuroimaging, cardiothoracic imaging, and aging.
  • Artificial intelligence and machine learning methods for biomedical image and signal analysis.

Dr. Robert Martí
Dr. Joan Martí Bonmatí
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical image analysis
  • biomedical sensors
  • clinical and virtual clinical trials
  • machine learning
  • artificial intelligence

Published Papers (7 papers)


Research

23 pages, 9314 KiB  
Article
MAM-E: Mammographic Synthetic Image Generation with Diffusion Models
by Ricardo Montoya-del-Angel, Karla Sam-Millan, Joan C. Vilanova and Robert Martí
Sensors 2024, 24(7), 2076; https://doi.org/10.3390/s24072076 - 24 Mar 2024
Viewed by 854
Abstract
Generative models are used as an alternative data augmentation technique to alleviate the data scarcity problem faced in the medical imaging field. Diffusion models have gathered special attention due to their innovative generation approach, the high quality of the generated images, and their relatively less complex training process compared with Generative Adversarial Networks. Still, the implementation of such models in the medical domain remains at an early stage. In this work, we propose exploring the use of diffusion models for the generation of high-quality, full-field digital mammograms using state-of-the-art conditional diffusion pipelines. Additionally, we propose using stable diffusion models for the inpainting of synthetic mass-like lesions on healthy mammograms. We introduce MAM-E, a pipeline of generative models for high-quality mammography synthesis controlled by a text prompt and capable of generating synthetic mass-like lesions on specific regions of the breast. Finally, we provide a quantitative and qualitative assessment of the generated images and easy-to-use graphical user interfaces for mammography synthesis.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
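
As a rough illustration of the text-conditioned lesion inpainting described in this abstract, the sketch below uses the Hugging Face diffusers inpainting pipeline. The checkpoint, prompt, and file names are illustrative assumptions; the paper's fine-tuned MAM-E weights and interface are not reproduced here.

```python
# Minimal sketch of prompt-conditioned lesion inpainting in the spirit of MAM-E.
# The checkpoint below is a general-purpose stand-in, NOT the paper's weights.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # assumed, publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical inputs: a healthy mammogram and a binary mask marking where
# a synthetic mass-like lesion should be generated (white = inpaint).
mammogram = Image.open("healthy_mammogram.png").convert("RGB").resize((512, 512))
mask = Image.open("lesion_region_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a mammogram with a small mass-like lesion",  # text condition
    image=mammogram,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("synthetic_lesion_mammogram.png")
```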

15 pages, 4714 KiB  
Article
Inpainting Saturation Artifact in Anterior Segment Optical Coherence Tomography
by Jie Li, He Zhang, Xiaoli Wang, Haoming Wang, Jingzi Hao and Guanhua Bai
Sensors 2023, 23(23), 9439; https://doi.org/10.3390/s23239439 - 27 Nov 2023
Viewed by 703
Abstract
The cornea is an important refractive structure in the human eye. The corneal segmentation technique provides valuable information for clinical diagnoses, such as corneal thickness. Non-contact anterior segment optical coherence tomography (AS-OCT) is a prevalent ophthalmic imaging technique that can visualize the anterior and posterior surfaces of the cornea. Nonetheless, during the imaging process, saturation artifacts are commonly generated where the tangent of the corneal surface is normal to the incident light source. This stripe-shaped saturation artifact covers the corneal surface, blurring the corneal edge and reducing the accuracy of corneal segmentation. To address this issue, an inpainting method that introduces structural similarity and frequency losses is proposed to remove the saturation artifact in AS-OCT images. Specifically, the structural similarity loss reconstructs the corneal structure and restores corneal textural details. The frequency loss combines the spatial domain with the frequency domain to ensure the overall consistency of the image in both domains. Furthermore, the performance of the proposed method in corneal segmentation tasks is evaluated, and the results indicate a significant benefit for subsequent clinical analysis.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
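
As a rough sketch of how the two loss terms named in this abstract can be combined when training an inpainting network, the following PyTorch snippet implements a simplified structural-similarity term and a frequency-domain term; the window size and weights are illustrative assumptions, not the authors' published settings.

```python
# Minimal sketch: spatial L1 + structural similarity + frequency-domain loss.
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    # Simplified SSIM with a uniform window; inputs are (N, 1, H, W) in [0, 1].
    mu_p = F.avg_pool2d(pred, win, 1, win // 2)
    mu_t = F.avg_pool2d(target, win, 1, win // 2)
    var_p = F.avg_pool2d(pred * pred, win, 1, win // 2) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, win, 1, win // 2) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, win, 1, win // 2) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim.mean()

def frequency_loss(pred, target):
    # Compare full complex spectra so both amplitude and phase must match.
    return (torch.fft.fft2(pred) - torch.fft.fft2(target)).abs().mean()

def inpainting_loss(pred, target, w_ssim=0.5, w_freq=0.1):
    # Assumed weighting; the L1 term ties the two domains to the pixel space.
    return (F.l1_loss(pred, target)
            + w_ssim * ssim_loss(pred, target)
            + w_freq * frequency_loss(pred, target))
```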

17 pages, 22867 KiB  
Article
Convolutional Networks and Transformers for Mammography Classification: An Experimental Study
by Marco Cantone, Claudio Marrocco, Francesco Tortorella and Alessandro Bria
Sensors 2023, 23(3), 1229; https://doi.org/10.3390/s23031229 - 20 Jan 2023
Cited by 10 | Viewed by 2334
Abstract
Convolutional Neural Networks (CNNs) have received a large share of research in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, Vision Transformers have emerged as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study to compare the most recent CNN and Vision Transformer architectures for whole-mammogram classification. We selected, trained, and tested 33 different models, 19 convolutional and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analyzed performance at eight different image resolutions and considered each individual lesion category in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of Vision Transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
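
The architecture sweep this study describes can be set up concisely with the timm library, as in the sketch below; the model names, single input resolution, and binary class count are illustrative assumptions rather than the paper's exact configuration (OMI-DB itself is available only on request).

```python
# Minimal sketch: instantiate CNN and Vision Transformer classifiers uniformly.
import timm
import torch

MODELS = ["resnet50", "efficientnet_b0", "vit_base_patch16_224"]  # examples only

for name in MODELS:
    model = timm.create_model(
        name,
        pretrained=True,
        in_chans=1,     # single-channel mammograms
        num_classes=2,  # e.g., benign vs. malignant
    )
    x = torch.randn(4, 1, 224, 224)  # fixed-size ViTs need a matching resolution variant
    print(name, model(x).shape)      # -> torch.Size([4, 2])
```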

15 pages, 3272 KiB  
Article
Intelligent Labeling of Tumor Lesions Based on Positron Emission Tomography/Computed Tomography
by Shiping Ye, Chaoxiang Chen, Zhican Bai, Jinming Wang, Xiaoxiao Yao and Olga Nedzvedz
Sensors 2022, 22(14), 5171; https://doi.org/10.3390/s22145171 - 10 Jul 2022
Viewed by 1341
Abstract
Positron emission tomography/computed tomography (PET/CT) plays a vital role in diagnosing tumors. However, PET/CT imaging relies primarily on manual interpretation and labeling by medical professionals, and this enormous workload affects the construction of training samples for deep learning. The labeling of tumor lesions in PET/CT images involves the intersection of computer graphics and medicine, including registration, fusion of medical images, and labeling of lesions. This paper extends linear interpolation, enhances it in a specific area of the PET image, and uses outer-frame scaling of the PET/CT image together with the least-squares residual affine method. The PET and CT images are subjected to wavelet transformation and then synthesized in proportion to form a PET/CT fusion image. According to the absorption of 18F-FDG (fluorodeoxyglucose) SUV in the PET image, the professional randomly selects a point in the focus area of the fusion image, and the system automatically selects the seed point of the focus area to delineate the tumor focus with the region-growing method. Finally, the focus delineated on the fused PET/CT image is automatically mapped to the CT image in the form of polygons, and rectangular segmentation and labeling are formed. This study took the actual PET/CT of patients with lymphatic cancer as an example. The semiautomatic labeling of the system was compared and verified against the manual labeling of imaging specialists. The recognition rate was 93.35%, and the misjudgment rate was 6.52%.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
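
The wavelet fusion step described here can be sketched as follows, assuming a single-level Haar transform and equal mixing weights (the paper's exact wavelet, proportions, and registration pipeline are not reproduced); the fused image would then feed the seed-based region growing.

```python
# Minimal sketch of proportional wavelet-domain PET/CT fusion.
import numpy as np
import pywt

def fuse_pet_ct(pet: np.ndarray, ct: np.ndarray, w: float = 0.5) -> np.ndarray:
    # Both 2-D slices must already be co-registered and equally sized.
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pet, "haar")
    cA_c, (cH_c, cV_c, cD_c) = pywt.dwt2(ct, "haar")
    fused = (w * cA_p + (1 - w) * cA_c,
             (w * cH_p + (1 - w) * cH_c,
              w * cV_p + (1 - w) * cV_c,
              w * cD_p + (1 - w) * cD_c))
    return pywt.idwt2(fused, "haar")  # blend coefficients, then invert
```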

38 pages, 28279 KiB  
Article
FPI Based Hyperspectral Imager for the Complex Surfaces—Calibration, Illumination and Applications
by Anna-Maria Raita-Hakola, Leevi Annala, Vivian Lindholm, Roberts Trops, Antti Näsilä, Heikki Saari, Annamari Ranki and Ilkka Pölönen
Sensors 2022, 22(9), 3420; https://doi.org/10.3390/s22093420 - 29 Apr 2022
Cited by 2 | Viewed by 2503
Abstract
Hyperspectral imaging (HSI) for biomedical and dermatological applications has recently attracted research interest. Medical HSI applications are non-invasive methods with high spatial and spectral resolution. HS imaging can be used to delineate malignant tumours, detect invasions, and classify lesion types. Typical challenges of these applications relate to complex skin surfaces, leaving some skin areas unreachable. In this study, we introduce a novel spectral imaging concept and conduct a clinical pre-test, the findings of which can be used to develop the concept towards a clinical application. The SICSURFIS spectral imager concept combines a piezo-actuated Fabry–Pérot interferometer (FPI) based hyperspectral imager, a specially designed LED module, and several sizes of stray-light protection cones for reaching and adapting to complex skin surfaces. The imager is designed for the needs of photometric stereo imaging, providing a 3D skin surface model for each captured wavelength. The captured HS images contained 33 selected wavelengths (ranging from 477 nm to 891 nm), which were captured simultaneously with accordingly selected LEDs and three specific angles of light. The pre-test results show that the data collected with the new SICSURFIS imager enable the use of the spectral and spatial domains together with surface model information. The imager can reach complex skin surfaces. Healthy skin, basal cell carcinomas, and intradermal nevi lesions were classified and delineated pixel-wise with promising results, obtained with a convolutional neural network, but further studies are needed.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
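
As a sketch of the photometric stereo step mentioned in the abstract (three known light angles per captured wavelength), surface normals follow from a per-pixel least-squares solve; the LED directions below are illustrative placeholders, not the SICSURFIS geometry.

```python
# Minimal sketch: classical three-light photometric stereo with NumPy.
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """images: (3, H, W) captures under three LEDs; lights: (3, 3) unit direction vectors."""
    h, w = images.shape[1:]
    I = images.reshape(3, -1)                       # per-pixel intensity triples
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # solve L @ G = I (albedo-scaled normals)
    albedo = np.linalg.norm(G, axis=0)
    return (G / np.maximum(albedo, 1e-8)).reshape(3, h, w)  # unit normal map

# Assumed LED directions at three angles around the optical axis:
lights = np.array([[0.50, 0.000, 0.866],
                   [-0.25, 0.433, 0.866],
                   [-0.25, -0.433, 0.866]])
```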

22 pages, 4503 KiB  
Article
Selection of Filtering and Image Texture Analysis in the Radiographic Images Processing of Horses’ Incisor Teeth Affected by the EOTRH Syndrome
by Kamil Górski, Marta Borowska, Elżbieta Stefanik, Izabela Polkowska, Bernard Turek, Andrzej Bereznowski and Małgorzata Domino
Sensors 2022, 22(8), 2920; https://doi.org/10.3390/s22082920 - 11 Apr 2022
Cited by 4 | Viewed by 1832
Abstract
Equine odontoclastic tooth resorption and hypercementosis (EOTRH) is a dental disease of horses that mainly affects the incisor teeth. Increasing incidence in aged horses and a painful, progressive course of the disease create the need for improved early diagnosis. Besides clinical findings, EOTRH recognition is based on typical radiographic findings, including the degrees of dental resorption and hypercementosis. This study aimed to introduce digital processing methods to equine dental radiographic images and identify texture features that change with disease progression. Radiographs of maxillary incisor teeth from 80 horses were obtained. Each incisor was annotated with a separate mask and clinically classified as EOTRH degree 0, 1, 2, or 3. Images were filtered independently by Mean, Median, Normalize, Bilateral, Binomial, CurvatureFlow, LaplacianSharpening, DiscreteGaussian, and SmoothingRecursiveGaussian filters, and 93 image texture features were extracted using First Order Statistics (FOS), Gray Level Co-occurrence Matrix (GLCM), Neighbouring Gray Tone Difference Matrix (NGTDM), Gray Level Dependence Matrix (GLDM), Gray Level Run Length Matrix (GLRLM), and Gray Level Size Zone Matrix (GLSZM) approaches. The most informative processing was selected. GLCM and GLRLM return the most favorable features for the quantitative evaluation of radiographic signs of the EOTRH syndrome, which may be supported by filters that improve edge delimitation.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
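
As a minimal illustration of one of the six texture families named in this abstract, the sketch below extracts GLCM features with scikit-image after one of the tested filters (Median); the filter size, distances, and angles are assumptions, not the study's selected configuration.

```python
# Minimal sketch: median pre-filtering followed by GLCM texture features.
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import graycomatrix, graycoprops

def glcm_features(radiograph: np.ndarray) -> dict:
    img = median_filter(radiograph.astype(float), size=3)  # one of the tested filters
    img = (img / img.max() * 255).astype(np.uint8)         # quantize to 256 gray levels
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```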

11 pages, 1491 KiB  
Article
Estimation of Stroke Volume Variance from Arterial Blood Pressure: Using a 1-D Convolutional Neural Network
by Hye-Mee Kwon, Woo-Young Seo, Jae-Man Kim, Woo-Hyun Shim, Sung-Hoon Kim and Gyu-Sam Hwang
Sensors 2021, 21(15), 5130; https://doi.org/10.3390/s21155130 - 29 Jul 2021
Cited by 4 | Viewed by 2283
Abstract
Background: We aimed to create a novel model using a deep learning method to estimate stroke volume variation (SVV), a widely used predictor of fluid responsiveness, from the arterial blood pressure waveform (ABPW). Methods: In total, 557 patients and 8,512,564 SVV datasets were collected and divided into three groups: training, validation, and test. Data were composed of 10 s of ABPW and the corresponding SVV recorded every 2 s. We built a convolutional neural network (CNN) model to estimate SVV from the ABPW, with a pre-existing commercialized model (EV1000) as a reference. We applied pre-processing, multichannel inputs, and dimension reduction to improve the CNN model with diversified inputs. Results: Our CNN model showed acceptable performance with sample data (r = 0.91, MSE = 6.92). Diversification of inputs, such as normalization, frequency, and slope of the ABPW, significantly improved the model correlation (r = 0.95), lowered the mean squared error (MSE = 2.13), and resulted in a high concordance rate (96.26%) with the SVV from the commercialized model. Conclusions: We developed a new CNN deep-learning model to estimate SVV. Our CNN model seems to be a viable alternative when the necessary medical device is not available, thereby allowing a wider range of application and resulting in optimal patient management.
(This article belongs to the Special Issue Image Analysis and Biomedical Sensors)
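
A minimal sketch of a 1-D CNN regressor mapping a 10 s ABPW window to a single SVV value is given below; the layer sizes and the assumed 100 Hz sampling rate are illustrative, not the paper's published architecture.

```python
# Minimal sketch: 1-D CNN regressing SVV from an arterial pressure window.
import torch
import torch.nn as nn

class SVVNet(nn.Module):
    def __init__(self, in_channels: int = 1):  # extra channels could carry slope/frequency inputs
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # single SVV estimate per window

    def forward(self, x):  # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

model = SVVNet()
svv = model(torch.randn(8, 1, 1000))  # eight 10 s windows at an assumed 100 Hz
print(svv.shape)                      # torch.Size([8, 1])
```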
