Special Issue "Medical Image Understanding and Analysis 2018"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 November 2018).

Special Issue Editors

Dr. Sasan Mahmoodi
Guest Editor
School of Electronics and Computer Science, University of Southampton
Interests: signal and image processing; computer vision; segmentation; biometrics; medical image understanding
Prof. Dr. Mark Nixon
Guest Editor
School of Electronics and Computer Science, University of Southampton
Interests: computer vision; biometrics; medical image understanding
Prof. Dr. Reyer Zwiggelaar
Guest Editor
Department of Computer Science, Aberystwyth University, Aberystwyth, SY23 3DB, UK
Interests: medical image analysis; machine learning; pattern recognition

Special Issue Information

Dear Colleagues,

Medical Image Understanding and Analysis (MIUA) is a UK-based meeting for the communication of research related to image processing and analysis and its application to medical imaging and biomedicine. The conference provides an opportunity to present and discuss research in medical image understanding and analysis, which is a rapidly growing subject with ever increasing real-world applicability.

For its 22nd anniversary, the Medical Image Understanding and Analysis Conference—MIUA 2018—is returning to England, with its first visit to Southampton (https://miua2018.soton.ac.uk/).

The meetings are designed for the dissemination and discussion of research in medical image understanding and analysis, and aim to encourage the growth and raise the profile of this multi-disciplinary field by bringing together communities working across, among other areas:

  • Body imaging
  • Brain imaging
  • Magnetic Resonance Imaging (structural, diffusion and functional)
  • Optical Imaging
  • Positron Emission Tomography
  • Computed Tomography
  • X-Ray imaging
  • Ultrasound Imaging
  • Microscopy

Dr. Sasan Mahmoodi
Prof. Dr. Mark Nixon
Prof. Dr. Reyer Zwiggelaar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Journal of Imaging is an international, peer-reviewed, open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Medical Image Analysis
  • Machine Learning
  • Magnetic Resonance Imaging
  • Microscopy
  • Deep Learning
  • Image Registration and Segmentation
  • Pattern Recognition
  • Motion Analysis
  • Texture Analysis
  • Visualisation
  • Image Interpretation

Published Papers (8 papers)


Research

Open Access Article
Classification of Microcalcification Clusters in Digital Mammograms Using a Stack Generalization Based Classifier
J. Imaging 2019, 5(9), 76; https://doi.org/10.3390/jimaging5090076 - 12 Sep 2019
Cited by 2 | Viewed by 2691
Abstract
This paper presents a machine learning based approach for the discrimination of malignant and benign microcalcification (MC) clusters in digital mammograms. A series of morphological operations was carried out to facilitate the feature extraction from segmented microcalcifications. A combination of morphological, texture, and distribution features from individual MC components and MC clusters was extracted, and a correlation-based feature selection technique was used. The clinical relevance of the selected features is discussed. The proposed method was evaluated using three different databases: Optimam Mammography Image Database (OMI-DB), Digital Database for Screening Mammography (DDSM), and Mammographic Image Analysis Society (MIAS) database. The best classification accuracy (95.00 ± 0.57%) was achieved for OPTIMAM using a stack generalization classifier with 10-fold cross validation, obtaining an Az value of 0.97 ± 0.01. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
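The evaluation above relies on 10-fold cross validation. As a generic illustration (not the authors' code), the disjoint fold partitioning behind such an evaluation can be sketched as:

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices 0..n_samples-1 into k disjoint, near-equal folds."""
    return [list(range(i, n_samples, k)) for i in range(k)]

def cross_validation_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = k_fold_indices(n_samples, k)
    for i, test in enumerate(folds):
        # Training set is every fold except the held-out one.
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test
```

Each sample appears in exactly one test fold, so reported accuracy averages over k held-out evaluations.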

Open Access Article
Segmentation and Modelling of the Nuclear Envelope of HeLa Cells Imaged with Serial Block Face Scanning Electron Microscopy
J. Imaging 2019, 5(9), 75; https://doi.org/10.3390/jimaging5090075 - 12 Sep 2019
Cited by 1 | Viewed by 2921
Abstract
This paper describes an unsupervised algorithm, which segments the nuclear envelope of HeLa cells imaged by Serial Block Face Scanning Electron Microscopy. The algorithm exploits the variations of pixel intensity in different cellular regions by calculating edges, which are then used to generate superpixels. The superpixels are morphologically processed and those that correspond to the nuclear region are selected through the analysis of size, position, and correspondence with regions detected in neighbouring slices. The nuclear envelope is segmented from the nuclear region. The three-dimensional segmented nuclear envelope is then modelled against a spheroid to create a two-dimensional (2D) surface. The 2D surface summarises the complex 3D shape of the nuclear envelope and allows the extraction of metrics that may be relevant to characterise the nature of cells. The algorithm was developed and validated on a single cell and tested in six separate cells, each with 300 slices of 2000 × 2000 pixels. Ground truth was available for two of these cells, i.e., 600 hand-segmented slices. The accuracy of the algorithm was evaluated with two similarity metrics: Jaccard Similarity Index and Mean Hausdorff distance. Jaccard values of the first/second segmentation were 93%/90% for the whole cell, and 98%/94% between slices 75 and 225, as the central slices of the nucleus are more regular than those on the extremes. Mean Hausdorff distances were 9/17 pixels for the whole cells and 4/13 pixels for central slices. One slice was processed in approximately 8 s and a whole cell in 40 min. The algorithm outperformed active contours in both accuracy and time. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
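The segmentation above is scored with the Jaccard Similarity Index and a mean Hausdorff distance. A minimal, generic sketch of these two metrics on point sets (not the authors' implementation) is:

```python
import math

def jaccard_index(mask_a, mask_b):
    """Jaccard similarity between two sets of (row, col) foreground pixels."""
    a, b = set(mask_a), set(mask_b)
    return len(a & b) / len(a | b)

def mean_hausdorff(points_a, points_b):
    """Symmetric mean Hausdorff distance between two non-empty point sets."""
    def mean_min_dist(src, dst):
        # Average, over src, of the distance to the closest point in dst.
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return max(mean_min_dist(points_a, points_b),
               mean_min_dist(points_b, points_a))
```

Jaccard measures region overlap (1.0 is perfect), while the mean Hausdorff distance measures how far the segmented boundary strays from the ground truth in pixels.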

Open Access Article
Visualisation and Analysis of Speech Production with Electropalatography
J. Imaging 2019, 5(3), 40; https://doi.org/10.3390/jimaging5030040 - 15 Mar 2019
Cited by 1 | Viewed by 4350
Abstract
The process of speech production, i.e., the compression of air in the lungs, the vibration activity of the larynx, and the movement of the articulators, is of great interest in phonetics, phonology, and psychology. One technique by which speech production is analysed is electropalatography, in which an artificial palate, moulded to the speaker’s hard palate, is introduced in the mouth. The palate contains a grid of electrodes, which monitor the spatial and temporal pattern of contact between the tongue and the palate during speech production. The output is a time sequence of images, known as palatograms, which show the 2D distribution of electrode activation. This paper describes a series of tools for the visualisation and analysis of palatograms and their associated sound signals. The tools are developed as Matlab® routines and released as an open-source toolbox. The particular focus is the analysis of the amount and direction of left–right asymmetry in tongue–palate contact during the production of different speech sounds. Asymmetry in the articulation of speech, as measured by electropalatography, may be related to the language under consideration, the speaker’s anatomy, irregularities in the palate manufacture, or speaker handedness (i.e., left or right). In addition, a pipeline for the segmentation and analysis of a three-dimensional computed tomography data set of an artificial palate is described and demonstrated. The segmentation procedure provides quantitative information about asymmetry that is due to a combination of speaker anatomy (the shape of the hard palate) and the positioning of the electrodes during manufacture of the artificial palate. The tools provided here should be useful in future studies of electropalatography. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
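A palatogram is a binary grid of electrode activations, and the study above quantifies left–right asymmetry of tongue–palate contact. A simple asymmetry measure along these lines (a hypothetical sketch, not the paper's toolbox code) could be:

```python
def asymmetry_index(palatogram):
    """Left-right contact asymmetry of a binary palatogram.

    palatogram: list of rows, each a list of 0/1 electrode activations
    with an even number of columns. Returns (L - R) / (L + R) in [-1, 1];
    0 means symmetric contact, positive means more left-side contact.
    """
    half = len(palatogram[0]) // 2
    left = sum(sum(row[:half]) for row in palatogram)
    right = sum(sum(row[half:]) for row in palatogram)
    if left + right == 0:
        return 0.0  # no contact at all
    return (left - right) / (left + right)
```

Tracking this index over the time sequence of palatograms would show how asymmetry evolves during the production of a given speech sound.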

Open Access Article
Analysis of Image Feature Characteristics for Automated Scoring of HER2 in Histology Slides
J. Imaging 2019, 5(3), 35; https://doi.org/10.3390/jimaging5030035 - 10 Mar 2019
Cited by 2 | Viewed by 3691
Abstract
The evaluation of breast cancer grades in immunohistochemistry (IHC) slides takes into account various types of visual markers and morphological features of stained membrane regions. Digital pathology algorithms using whole slide images (WSIs) of histology slides have recently been finding several applications in such computer-assisted evaluations. Features that are directly related to biomarkers used by pathologists are generally preferred over the pixel values of entire images, even though the latter has more information content. This paper explores in detail various types of feature measurements that are suitable for the automated scoring of human epidermal growth factor receptor 2 (HER2) in histology slides. These are intensity features known as characteristic curves, texture features in the form of uniform local binary patterns (ULBPs), morphological features specifying connectivity of regions, and first-order statistical features of the overall intensity distribution. This paper considers important properties of the above features and outlines methods for reducing information redundancy, maximizing inter-class separability, and improving classification accuracy in the combined feature set. This paper also presents a detailed experimental analysis performed using the aforementioned features on a WSI dataset of IHC stained slides. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)

Open Access Article
Comparative Study on Local Binary Patterns for Mammographic Density and Risk Scoring
J. Imaging 2019, 5(2), 24; https://doi.org/10.3390/jimaging5020024 - 01 Feb 2019
Cited by 9 | Viewed by 3839
Abstract
Breast density is considered to be one of the major risk factors in developing breast cancer. High breast density can also affect the accuracy of mammographic abnormality detection due to the breast tissue characteristics and patterns. We reviewed variants of local binary pattern descriptors, which are widely used as texture descriptors for local feature extraction, to classify breast tissue. In our study, we compared the classification results for the variants of local binary patterns such as classic LBP (Local Binary Pattern), ELBP (Elliptical Local Binary Pattern), Uniform ELBP, LDP (Local Directional Pattern) and M-ELBP (Mean-ELBP). A wider comparison with alternative texture analysis techniques was performed to investigate the potential of LBP variants in density classification. In addition, we investigated the effect on classification when using descriptors for the fibroglandular disk region and the whole breast region. We also studied the effect of the Region-of-Interest (ROI) size and location, the descriptor size, and the choice of classifier. The classification results were evaluated based on the MIAS database using a ten-run ten-fold cross validation approach. The experimental results showed that the Elliptical Local Binary Pattern descriptors and Local Directional Patterns extracted the most relevant features for mammographic tissue classification, indicating the relevance of directional filters. Similarly, the study showed that classification of features from ROIs of the fibroglandular disk region performed better than classification based on the whole breast region. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
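The classic LBP descriptor compared above encodes each pixel by thresholding its 8 neighbours against the centre value and packing the results into a byte. A minimal sketch of that per-pixel code (generic, not the study's implementation) is:

```python
def lbp_code(image, r, c):
    """Classic 8-neighbour LBP code for interior pixel (r, c) of a 2D list image."""
    center = image[r][c]
    # Neighbours in a fixed clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        # Set the bit when the neighbour is at least as bright as the centre.
        if image[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over a region of interest then serves as the texture feature vector; the elliptical and directional variants compared in the paper change which neighbours are sampled.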

Open Access Article
Macrosight: A Novel Framework to Analyze the Shape and Movement of Interacting Macrophages Using Matlab®
J. Imaging 2019, 5(1), 17; https://doi.org/10.3390/jimaging5010017 - 14 Jan 2019
Cited by 2 | Viewed by 4048
Abstract
This paper presents a novel software framework, called macrosight, which incorporates routines to detect, track, and analyze the shape and movement of objects, with special emphasis on macrophages. The key feature presented in macrosight consists of an algorithm to assess the changes of direction derived from cell–cell contact, where an interaction is assumed to occur. The main biological motivation is the determination of certain cell interactions influencing cell migration. Thus, the main objective of this work is to provide insights into the notion that interactions between cell structures cause a change in orientation. Macrosight analyzes the change of direction of cells before and after they come in contact with another cell. Interactions are determined when the cells overlap and form clumps of two or more cells. The framework integrates a segmentation technique capable of detecting overlapping cells and a tracking framework into a tool for the analysis of the trajectories of cells before and after they overlap. Preliminary results show promise for the proposed analysis and hypothesis, and lay the groundwork for further developments. The extensive experimentation and data analysis show, with statistical significance, that under certain conditions, the movement changes before and after an interaction are different from movement in controlled cases. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)

Open Access Article
Enhancement and Segmentation Workflow for the Developing Zebrafish Vasculature
J. Imaging 2019, 5(1), 14; https://doi.org/10.3390/jimaging5010014 - 11 Jan 2019
Cited by 1 | Viewed by 4094
Abstract
Zebrafish have become an established in vivo vertebrate model to study cardiovascular development and disease. However, most published studies of the zebrafish vascular architecture rely on subjective visual assessment, rather than objective quantification. In this paper, we used state-of-the-art light sheet fluorescence microscopy to visualize the vasculature in transgenic fluorescent reporter zebrafish. Analysis of image quality, vascular enhancement methods, and segmentation approaches was performed in the framework of the open-source software Fiji to allow dissemination and reproducibility. Here, we build on a previously developed image processing pipeline; evaluate its applicability to a wider range of data; apply and evaluate an alternative vascular enhancement method; and, finally, suggest a workflow for successful segmentation of the embryonic zebrafish vasculature. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)

Open Access Article
What’s in a Smile? Initial Analyses of Dynamic Changes in Facial Shape and Appearance
J. Imaging 2019, 5(1), 2; https://doi.org/10.3390/jimaging5010002 - 21 Dec 2018
Cited by 6 | Viewed by 3682
Abstract
Single-level principal component analysis (PCA) and multi-level PCA (mPCA) methods are applied here to a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Inspection of eigenvalues gives insight into the importance of different factors affecting shapes, including: biological sex, facial expression (neutral versus smiling), and all other variations. Biological sex and facial expression are shown to be reflected in those components at appropriate levels of the mPCA model. Dynamic 3D shape data for all phases of a smile made up a second dataset sampled from 60 adult British subjects (31 male; 29 female). Modes of variation reflected the act of smiling at the correct level of the mPCA model. Seven phases of the dynamic smiles are identified: rest pre-smile, onset 1 (acceleration), onset 2 (deceleration), apex, offset 1 (acceleration), offset 2 (deceleration), and rest post-smile. A clear cycle is observed in standardized scores at an appropriate level for mPCA and in single-level PCA. mPCA can be used to study static shapes and images, as well as dynamic changes in shape. It gave us much insight into the question “what’s in a smile?”. Full article
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
