MIUA2019

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 April 2020) | Viewed by 28182

Special Issue Editors


Dr. Yalin Zheng
Guest Editor
Department of Eye and Vision Science, University of Liverpool, Liverpool L3 5TR, UK
Interests: medical imaging; artificial intelligence; computer vision; image processing

Dr. Bryan Williams
Guest Editor
Lancaster University, Lancaster, UK
Interests: variational models; optimization methods; image processing

Prof. Ke Chen
Guest Editor
University of Liverpool, Liverpool, UK
Interests: variational models; optimization methods; image processing; deep learning

Special Issue Information

Dear Colleagues,

This Special Issue features selected papers from the 23rd Medical Image Understanding and Analysis (MIUA) Conference, held in Liverpool on 24–26 July 2019 (https://miua2019.com/). MIUA is an annual forum organized in the United Kingdom for communicating research progress within the community interested in biomedical image analysis. Its goals are to disseminate and discuss research in medical image processing and analysis, to encourage growth, and to raise the profile of this multidisciplinary field, which has ever-increasing real-world applicability. The conference has been an excellent opportunity for researchers at all levels to network, generate new ideas, establish new collaborations, learn about and discuss different topics, and listen to speakers of international reputation, as well as to present their own work in medical image analysis.

The diverse range of topics covered in MIUA2019 reflects the growth in the development and application of medical imaging. The main topics covered include (i) oncology and tumour imaging, (ii) lesion, wound and ulcer analysis, (iii) biostatistics, (iv) fetal imaging, (v) enhancement and reconstruction, (vi) diagnosis, classification and treatment, (vii) vessel and nerve analysis, (viii) image registration, (ix) image segmentation, and (x) ophthalmology.

Dr. Yalin Zheng
Dr. Bryan Williams
Prof. Ke Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomarker discovery
  • Image enhancement
  • Segmentation
  • Registration
  • Texture analysis
  • Virtual reality
  • Image interpretation
  • Image-guided intervention
  • Modelling and simulation
  • Multi-modal image analysis
  • Machine learning
  • Deep learning

Published Papers (6 papers)


Research

15 pages, 4051 KiB  
Article
Investigating the Performance of Generative Adversarial Networks for Prostate Tissue Detection and Segmentation
by Ufuk Cem Birbiri, Azam Hamidinekoo, Amélie Grall, Paul Malcolm and Reyer Zwiggelaar
J. Imaging 2020, 6(9), 83; https://doi.org/10.3390/jimaging6090083 - 24 Aug 2020
Cited by 11 | Viewed by 4864
Abstract
The manual delineation of the region of interest (RoI) in 3D magnetic resonance imaging (MRI) of the prostate is time-consuming and subjective. Correct identification of prostate tissue helps define a precise RoI for use in computer-aided diagnosis (CAD) systems in clinical practice during diagnostic imaging, radiotherapy, and monitoring of disease progression. The performance of conditional GAN (cGAN), cycleGAN, and U-Net models for the detection and segmentation of prostate tissue in 3D multi-parametric MRI scans was studied. These models were trained and evaluated on MRI data from 40 patients with biopsy-proven prostate cancer. Due to the limited amount of available training data, three augmentation schemes were proposed to artificially increase the training samples. The models were tested on a clinical dataset annotated for this study and on a public dataset (PROMISE12). The cGAN model outperformed the U-Net and cycleGAN predictions owing to its inclusion of paired image supervision, achieving Dice scores of 0.78 and 0.75 on the private and PROMISE12 public datasets, respectively.
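For reference, the Dice score reported above measures the overlap between a predicted segmentation mask and its ground-truth annotation. A minimal NumPy sketch (the function name and smoothing constant are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```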

21 pages, 13701 KiB  
Article
Polyp Segmentation with Fully Convolutional Deep Neural Networks—Extended Evaluation Study
by Yunbo Guo, Jorge Bernal and Bogdan J. Matuszewski
J. Imaging 2020, 6(7), 69; https://doi.org/10.3390/jimaging6070069 - 13 Jul 2020
Cited by 41 | Viewed by 8684
Abstract
Analysis of colonoscopy images plays a significant role in the early detection of colorectal cancer. Automated tissue segmentation can be useful for the two most relevant clinical target applications, lesion detection and classification, thereby providing important means to make both processes more accurate and robust. To automate video colonoscopy analysis, computer vision and machine learning methods have been utilized and shown to enhance polyp detectability and segmentation objectivity. This paper describes a polyp segmentation algorithm, based on fully convolutional network models, that was originally developed for the Endoscopic Vision Gastrointestinal Image Analysis (GIANA) polyp segmentation challenges. The key contribution of the paper is an extended evaluation of the proposed architecture, comparing it against established image segmentation benchmarks using several metrics with cross-validation on the GIANA training dataset. Different experiments are described, including examination of various network configurations, values of design parameters, data augmentation approaches, and polyp characteristics. The reported results demonstrate the significance of data augmentation and of careful selection of the method's design parameters. The proposed method delivers state-of-the-art results with near real-time performance. The described solution was instrumental in securing the top spot in the polyp segmentation sub-challenge at the 2017 GIANA challenge and second place in the standard-resolution segmentation task at the 2018 GIANA challenge.
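The abstract singles out data augmentation as a key factor in the results. A minimal sketch of the kind of paired geometric augmentation commonly used when training segmentation networks (the paper's exact transforms are not reproduced here, so these are illustrative):

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Apply the same random flip and 90° rotation to an image and its mask."""
    if rng.random() < 0.5:                  # horizontal flip, applied jointly
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = int(rng.integers(0, 4))             # random multiple of 90°
    return np.rot90(image, k), np.rot90(mask, k)
```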

16 pages, 786 KiB  
Article
Spatial Linear Mixed Effects Modelling for OCT Images: SLME Model
by Wenyue Zhu, Jae Yee Ku, Yalin Zheng, Paul C. Knox, Ruwanthi Kolamunnage-Dona and Gabriela Czanner
J. Imaging 2020, 6(6), 44; https://doi.org/10.3390/jimaging6060044 - 05 Jun 2020
Cited by 1 | Viewed by 3428
Abstract
Much recent research focuses on how to make disease detection more accurate as well as "slimmer", i.e., feasible with smaller datasets. Explanatory models are a hot research topic because they explain how the data are generated. We propose a spatial explanatory modelling approach that combines Optical Coherence Tomography (OCT) retinal imaging data with clinical information. Our model consists of a spatial linear mixed effects inference framework, which innovatively models the spatial topography of key information via mixed effects and spatial error structures, thus effectively modelling the shape of the thickness map. We show that our spatial linear mixed effects (SLME) model outperforms traditional analysis-of-variance approaches in the analysis of Heidelberg OCT retinal thickness data from a prospective observational study involving 300 participants with diabetes and 50 age-matched controls. The SLME model has higher power for detecting differences between disease groups, and it shows where the shape of retinal thickness profiles differs between the eyes of participants with diabetes and those of healthy controls. On simulated data, the SLME model demonstrates how incorporating spatial correlations can increase the accuracy of statistical inferences. Such a model is important for understanding the progression of retinal thickness changes in diabetic maculopathy and can aid clinicians in the early planning of effective treatment. It can be extended to disease monitoring and prognosis in other diseases and with other imaging technologies.
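For orientation, a generic spatial linear mixed effects model of the kind described above can be written as follows (the notation is generic rather than the paper's):

```latex
y_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}
       + \mathbf{z}_{ij}^{\top}\mathbf{b}_{i}
       + \varepsilon_{ij},
\qquad
\mathbf{b}_{i} \sim \mathcal{N}(\mathbf{0}, \mathbf{D}),
\qquad
\boldsymbol{\varepsilon}_{i} \sim \mathcal{N}\bigl(\mathbf{0}, \sigma^{2}\mathbf{R}_{i}(\rho)\bigr),
```

where y_ij is the retinal thickness at location j of eye i, the fixed effects β capture covariates such as disease group, the random effects b_i capture eye-level variation, and R_i(ρ) is a spatial correlation matrix (e.g., one that decays with inter-location distance).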

14 pages, 1132 KiB  
Article
Examining the Relationship between Semiquantitative Methods Analysing Concentration-Time and Enhancement-Time Curves from Dynamic-Contrast Enhanced Magnetic Resonance Imaging and Cerebrovascular Dysfunction in Small Vessel Disease
by Jose Bernal, María Valdés-Hernández, Javier Escudero, Eleni Sakka, Paul A. Armitage, Stephen Makin, Rhian M. Touyz and Joanna M. Wardlaw
J. Imaging 2020, 6(6), 43; https://doi.org/10.3390/jimaging6060043 - 05 Jun 2020
Viewed by 3282
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) can be used to examine the distribution of an intravenous contrast agent within the brain. Computational methods have been devised to analyse the contrast uptake/washout over time as reflections of cerebrovascular dysfunction. However, there have been few direct comparisons of their relative strengths and weaknesses. In this paper, we compare five semiquantitative methods comprising the slope and area under the enhancement-time curve, the slope and area under the concentration-time curve (Slope_Con and AUC_Con), and changes in the power spectrum over time. We studied them in cerebrospinal fluid, normal tissues, stroke lesions, and white matter hyperintensities (WMH) using DCE-MRI scans from a cohort of patients with small vessel disease (SVD) who presented with mild stroke. The total SVD score was associated with AUC_Con in WMH (p < 0.05), but not with the other four methods. In WMH, higher AUC_Con was associated with younger age (p < 0.001) and fewer WMH (p < 0.001), whereas Slope_Con increased with younger age (p > 0.05) and WMH burden (p > 0.05). Our results show the potential of measures extracted from concentration-time curves to demonstrate cerebrovascular dysfunction better than those extracted from enhancement-time curves of the same DCE examination.
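As a rough illustration of the two concentration-time measures named above (the paper's exact normalisation and time windows are not reproduced here), the slope can be taken from a linear fit and the AUC from trapezoidal integration:

```python
import numpy as np

def slope_and_auc(t: np.ndarray, c: np.ndarray) -> tuple[float, float]:
    """Linear-fit slope and trapezoidal area under a concentration-time curve.

    t: acquisition times; c: contrast-agent concentration at each time point.
    """
    slope = np.polyfit(t, c, deg=1)[0]  # first-order coefficient of the fit
    auc = np.trapz(c, t)                # trapezoidal rule
    return float(slope), float(auc)
```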

13 pages, 7252 KiB  
Article
Comparative Study of Contact Repulsion in Control and Mutant Macrophages Using a Novel Interaction Detection
by José Alonso Solís-Lemus, Besaiz J. Sánchez-Sánchez, Stefania Marcotti, Mubarik Burki, Brian Stramer and Constantino Carlos Reyes-Aldasoro
J. Imaging 2020, 6(5), 36; https://doi.org/10.3390/jimaging6050036 - 20 May 2020
Viewed by 4242
Abstract
In this paper, a novel method for interaction detection is presented to compare the contact dynamics of macrophages in the Drosophila embryo. The study is carried out with a framework called macrosight, which analyses the movement and interaction of migrating macrophages. The framework incorporates a segmentation and tracking algorithm to analyse the motion characteristics of cells after contact. In this particular study, the interactions between cells are characterised in control embryos and in mutants of Shot, a candidate protein hypothesised to regulate contact dynamics between migrating cells. Statistically significant differences between control and mutant cells were found when comparing the direction of motion after contact under specific conditions. Such discoveries provide insights for future developments in combining biological experiments with computational analysis.
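A minimal sketch of distance-based interaction detection of the kind described above (macrosight's actual criteria are richer; the threshold and data layout here are assumptions for illustration):

```python
import numpy as np

def detect_contacts(tracks: dict[int, np.ndarray], radius: float = 10.0):
    """Flag frames in which two tracked cells come within `radius` pixels.

    tracks: cell id -> array of shape (n_frames, 2) holding (x, y) centroids.
    Returns a list of (frame, id_a, id_b) contact events.
    """
    ids = sorted(tracks)
    n_frames = min(len(tracks[i]) for i in ids)
    contacts = []
    for f in range(n_frames):
        for n, a in enumerate(ids):
            for b in ids[n + 1:]:
                if np.linalg.norm(tracks[a][f] - tracks[b][f]) < radius:
                    contacts.append((f, a, b))
    return contacts
```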

17 pages, 1322 KiB  
Article
Multilevel Analysis of the Influence of Maternal Smoking and Alcohol Consumption on the Facial Shape of English Adolescents
by Jennifer Galloway, Damian J.J. Farnell, Stephen Richmond and Alexei I. Zhurov
J. Imaging 2020, 6(5), 34; https://doi.org/10.3390/jimaging6050034 - 18 May 2020
Cited by 4 | Viewed by 3153
Abstract
This cross-sectional study aims to assess the influence of maternal smoking and alcohol consumption during pregnancy on the facial shape of non-syndromic English adolescents and to demonstrate the potential benefits of using multilevel principal component analysis (mPCA). A cohort of 3755 non-syndromic 15-year-olds from the Avon Longitudinal Study of Parents and Children (ALSPAC), England, was included. Maternal smoking and alcohol consumption during the first and second trimesters of pregnancy were determined via questionnaire at 18 weeks' gestation. Twenty-one facial landmarks, used as a proxy for the main facial features, were manually plotted onto 3D facial scans of the participants. The effect of maternal smoking and of maternal alcohol consumption (an average of 1–2 glasses per week) was minimal, explaining 0.66% and 0.48% of the variation in the 21 landmarks of non-syndromic offspring, respectively. This study provides a further example of mPCA being used effectively as a descriptive analysis in facial shape research, and it is the first example of mPCA being extended to four levels to assess the influence of environmental factors. Further work on the influence of high/low levels of smoking and alcohol and on providing inferential evidence is required.
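For readers unfamiliar with mPCA: the idea is to split the covariance of the landmark data into separate levels (e.g., between exposure groups and between individuals within a group) and to run an eigendecomposition at each level. A minimal two-level sketch (the paper extends the idea to four levels; the data layout is an assumption for illustration):

```python
import numpy as np

def two_level_pca(groups: list[np.ndarray]):
    """Eigendecompose between-group and within-group landmark covariance.

    groups: one (n_subjects, n_features) array of flattened landmarks per group.
    Returns (eigvals, eigvecs) pairs for the between and within levels.
    """
    grand_mean = np.mean([g.mean(axis=0) for g in groups], axis=0)
    between = np.cov(np.stack([g.mean(axis=0) - grand_mean for g in groups]).T)
    within = np.cov(np.vstack([g - g.mean(axis=0) for g in groups]).T)
    return np.linalg.eigh(between), np.linalg.eigh(within)
```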
