Machine Learning for Biomedical Imaging and Sensing II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (10 January 2023) | Viewed by 3548

Special Issue Editor


Dr. Andy Taylor
Guest Editor
Senior Scientist at Bioascent Drug Discovery, Newhouse, Lanarkshire ML1 5UH, UK
Interests: machine learning; imaging; drug discovery; data science; robotics and software development

Special Issue Information

Dear Colleagues,

The use of machine learning techniques in biomedical imaging and sensing has grown rapidly in recent years. Applications reported in the literature include diagnostics, image reconstruction, and the generation of synthetic human data. Methodologically, the last few years have seen the rise of deep learning and the increasing adoption of novel approaches such as generative adversarial networks and natural language generation. The field is also becoming increasingly accessible to researchers in medicine and biology who are not traditionally machine learning practitioners, thanks to the availability of software libraries such as Keras and TensorFlow and packages such as WEKA.
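As a purely illustrative aside (not part of the original call), the sketch below shows how compactly such libraries allow a small image classifier to be specified in Keras; the input shape, class count, and layer widths are arbitrary placeholders rather than recommendations.

```python
# Illustrative only: a minimal Keras image classifier, showing how few lines
# such libraries require. Input shape and class count are arbitrary placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(128, 128, 1), num_classes=2):
    """Small convolutional classifier for single-channel biomedical images."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with placeholder data:
# model = build_classifier()
# model.fit(train_images, train_labels, epochs=10)
```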

The topics of interest include, but are not limited to, the following:

  • Classical machine learning techniques for image analysis;
  • Machine learning and health outcomes;
  • Machine learning for biomedical sensing;
  • The generation of synthetic patient data;
  • Quantitative image analysis;
  • Deep learning and diagnosis;
  • Biomedical image reconstruction;
  • The generation of natural language descriptions of biomedical images.
 

Dr. Andy Taylor
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • image analysis
  • generative adversarial networks
  • deep learning
  • sensing
  • image reconstruction
  • natural language generation
  • artificial intelligence
  • health informatics

Published Papers (2 papers)


Research

15 pages, 1588 KiB  
Article
Classification of HEp-2 Staining Pattern Images Using Adapted Multilayer Perceptron Neural Network-Based Intra-Class Variation of Cell Shape
by Khamael Al-Dulaimi, Jasmine Banks, Aiman Al-Sabaawi, Kien Nguyen, Vinod Chandran and Inmaculada Tomeo-Reyes
Sensors 2023, 23(4), 2195; https://doi.org/10.3390/s23042195 - 15 Feb 2023
Cited by 1 | Viewed by 1525
Abstract
There is growing interest among clinical practice and research communities in methods that automate the classification of HEp-2 stained cells from histopathological images. Challenges faced by these methods include variations in cell densities and cell patterns, overfitting of features, large-scale data volumes, and staining variation. In this paper, a multi-class multilayer perceptron technique is adapted by adding a new hidden layer that calculates the variation in the mean, scale, kurtosis and skewness of higher-order spectra features of the cell shape information. The adapted network is trained jointly, and classification probabilities are computed using a softmax activation function. The method is proposed to address the overfitting, staining and large-scale data volume problems, and to classify HEp-2 stained cells into six classes. An extensive experimental analysis verifies the results of the proposed method. The technique was trained and tested on the Task-1 datasets of the ICPR-2014 and ICPR-2016 competitions. The experimental results show that the proposed model achieved an accuracy of 90.3% with data augmentation, compared with 87.5% without data augmentation. In addition, the proposed framework is compared with existing methods, including those used in the ICPR-2014 and ICPR-2016 competitions, and the results demonstrate that it outperforms recent methods.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing II)
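As a rough illustration only (not the authors' implementation), a multi-class MLP of the general form the abstract describes, with an added hidden layer and a softmax output over six classes, might be sketched as follows. The feature dimension and layer widths are placeholders, and the higher-order-spectra shape statistics (mean, scale, kurtosis, skewness) are assumed to be computed upstream.

```python
# Minimal sketch (not the authors' implementation) of a multi-class MLP with an
# added hidden layer and softmax output, as described in the abstract. The
# higher-order-spectra shape features are assumed to be pre-computed; the
# feature dimension and layer widths below are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6      # six HEp-2 staining-pattern classes
FEATURE_DIM = 64     # placeholder size of the shape-feature vector

model = models.Sequential([
    layers.Input(shape=(FEATURE_DIM,)),
    layers.Dense(128, activation="relu"),              # standard hidden layer
    layers.Dense(64, activation="relu"),               # added hidden layer (placeholder width)
    layers.Dense(NUM_CLASSES, activation="softmax"),   # class probabilities via softmax
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```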

19 pages, 3240 KiB  
Article
A Deep Residual Neural Network for Image Reconstruction in Biomedical 3D Magnetic Induction Tomography
by Anna Hofmann, Martin Klein, Dirk Rueter and Andreas Sauer
Sensors 2022, 22(20), 7925; https://doi.org/10.3390/s22207925 - 18 Oct 2022
Cited by 3 | Viewed by 1389
Abstract
In recent years, it has become increasingly popular to solve inverse problems of various tomography methods with deep learning techniques. Here, a deep residual neural network (ResNet) is introduced to reconstruct the conductivity distribution of a biomedical, voluminous body in magnetic induction tomography (MIT). MIT is a relatively new, contactless and noninvasive tomography method. However, the ill-conditioned inverse problem of MIT is challenging to solve, especially for voluminous bodies with conductivities in the range of biological tissue. The proposed ResNet can reconstruct up to two cuboid perturbation objects with conductivities of 0.0 and 1.0 S/m anywhere in the voluminous body, even in the difficult-to-detect centre. The dataset used for training and testing contained simulated signals of cuboid perturbation objects with randomised lengths and positions. Furthermore, special care went into avoiding the inverse crime while creating the dataset. The calculated metrics showed good results over the test dataset, with an average correlation coefficient of 0.87 and a mean squared error of 0.001. Robustness was tested on three special test cases containing unknown shapes, unknown conductivities and a real measurement; the resulting errors were well within the margins of the test-dataset metrics. This indicates that a good approximation of the inverse function in MIT for up to two perturbation objects was achieved and the inverse crime was avoided.
(This article belongs to the Special Issue Machine Learning for Biomedical Imaging and Sensing II)
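As a hedged sketch only (not the paper's architecture), a residual regression network that maps a vector of simulated MIT sensor signals to a flattened conductivity map could look like the following; the signal length, reconstruction grid size, number of residual blocks and layer widths are all placeholders.

```python
# Minimal sketch (not the paper's network) of a residual regression model that
# maps MIT sensor signals to a flattened conductivity distribution.
# Signal length, grid size and layer widths are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, Model

SIGNAL_DIM = 256        # placeholder number of measured induction signals
VOXELS = 16 * 16 * 8    # placeholder reconstruction grid (flattened)

def residual_block(x, units):
    """Dense residual block: two fully connected layers plus a skip connection."""
    y = layers.Dense(units, activation="relu")(x)
    y = layers.Dense(units)(y)
    return layers.Activation("relu")(layers.Add()([x, y]))

inputs = layers.Input(shape=(SIGNAL_DIM,))
x = layers.Dense(512, activation="relu")(inputs)
for _ in range(4):                       # placeholder depth
    x = residual_block(x, 512)
outputs = layers.Dense(VOXELS)(x)        # predicted conductivity values (S/m)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")  # MSE matches the reported metric
```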
