Special Issue "Selected Papers from “MIUA 2017”"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (7 November 2017)

Special Issue Editors

Guest Editor
Dr. Maria del C. Valdés Hernández

Lecturer in Image Analysis and Chair of MIUA 2017, Centre for Clinical Brain Sciences (CCBS), Neuroimaging Sciences, The University of Edinburgh, Chancellor’s Building, 49 Little France Crescent, Edinburgh EH16 4SB, UK
Interests: brain; image analysis; cerebrovascular diseases; neurodegenerative diseases
Guest Editor
Dr. Victor Gonzalez-Castro

Department of Electrical, Systems and Automatic Engineering, Universidad de León, Campus de Vegazana s/n, 24071 León, Spain
Interests: image analysis; pattern recognition; machine learning; medical imaging

Special Issue Information

Dear Colleagues,

Medical Image Understanding and Analysis (MIUA) 2017 (https://miua2017.wordpress.com/) is the 21st conference in the Medical Image Understanding and Analysis series, organised in the United Kingdom to communicate research progress within the community interested in biomedical image analysis. Its goals are the dissemination and discussion of research in medical image processing and analysis, and it aims to encourage the growth and raise the profile of this multi-disciplinary field by bringing together specialists, academics, engineers, image analysts and clinicians from various communities, including human body, lung, brain and cardiac imaging; pre-clinical, microscopy and animal imaging; medical physics; anatomy; physiology; oncology; dermatology; neurology; radiology; ophthalmology; ultrasound; magnetic resonance; positron emission imaging; and computed tomography, among others.

The conference covers the following topics in the field of medical imaging: Big Data Processing, Clinical and Scientific Evaluation of Imaging Studies, Computer-Aided Pathology, Computer-Aided Radiology, Computer-Assisted Surgery, Data Fusion, Data Compression and Anonymisation, Protocol Development and Standardisation, Decision Support, Discovery of Imaging Biomarkers, Human-Computer Interaction, Image Interpretation, Image-Guided Intervention, Image Formation and Reconstruction, Image Perception, Image Registration, Image Segmentation, Intelligent Imaging Systems, Machine Learning in Medical Imaging, Modelling and Simulation, Motion Analysis, Multi-Modality Image Analysis, Pattern and Feature Recognition, Quantitative Image Analysis, Shape Analysis, Software Development, Super-Resolution Algorithms, Statistical Methods in Imaging, Systematic Testing and Validation, Texture Analysis, Image Enhancement, Time Series Analysis, and Virtual Reality Visualisation.

The conference constitutes an excellent opportunity to network, generate new ideas, establish new collaborations, learn about and discuss different topics, listen to speakers of international reputation, present and demonstrate medical image analysis software, and even experience the delights of a genuine Scottish ceilidh.

Dr. Maria del C. Valdés Hernández
Dr. Victor Gonzalez-Castro
Guest Editors


Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Medical image analysis
  • Machine learning
  • Magnetic resonance imaging
  • Microscopy
  • Positron emission imaging
  • Ultrasound

Published Papers (19 papers)


Research

Open Access Article: Feature Importance for Human Epithelial (HEp-2) Cell Image Classification
J. Imaging 2018, 4(3), 46; https://doi.org/10.3390/jimaging4030046
Received: 7 November 2017 / Revised: 9 February 2018 / Accepted: 16 February 2018 / Published: 26 February 2018
PDF Full-text (1285 KB) | HTML Full-text | XML Full-text
Abstract
Indirect Immuno-Fluorescence (IIF) microscopy imaging of human epithelial (HEp-2) cells is a popular method for diagnosing autoimmune diseases. Considering large data volumes, computer-aided diagnosis (CAD) systems, based on image-based classification, can help in terms of time, effort, and reliability of diagnosis. Such approaches are based on extracting representative features from the images. This work explores the selection of the most distinctive features for HEp-2 cell images using various feature selection (FS) methods. Since no single feature selection technique is universally optimal, we also propose a hybridization of one class of FS methods (filter methods). Furthermore, the notion of variable importance for ranking features, provided by another type of approach (embedded methods such as random forests and random uniform forests), is exploited to select a good subset of features from a large set, such that adding new features does not increase classification accuracy. We have also carefully designed class-specific features to capture the morphological visual traits of the cell patterns. Various experiments and discussions demonstrate the effectiveness of the FS methods on both the proposed and a standard feature set. We achieve state-of-the-art performance even with the small number of features obtained after feature selection. Full article
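As a rough illustration of the filter-method hybridization this abstract describes (not the authors' implementation; the toy data, the choice of the two filter criteria, and the rank-averaging rule are all assumptions), two per-feature filter scores can be combined by averaging their ranks:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

# Toy stand-in for a table of per-cell image features (30 features, 2 classes).
X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)

# Two filter criteria score each feature independently of any classifier.
f_scores, _ = f_classif(X, y)
mi_scores = mutual_info_classif(X, y, random_state=0)

# A simple hybrid filter: average the rank positions from the two criteria
# (rank 0 = best under that criterion).
f_ranks = np.argsort(np.argsort(-f_scores))
mi_ranks = np.argsort(np.argsort(-mi_scores))
hybrid_rank = (f_ranks + mi_ranks) / 2.0

top10 = np.argsort(hybrid_rank)[:10]  # indices of the ten best features
```

An embedded method's variable importance (e.g., from a random forest) could replace either criterion in the same ranking pipeline.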
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Image Features Based on Characteristic Curves and Local Binary Patterns for Automated HER2 Scoring
J. Imaging 2018, 4(2), 35; https://doi.org/10.3390/jimaging4020035
Received: 30 October 2017 / Revised: 1 February 2018 / Accepted: 2 February 2018 / Published: 5 February 2018
PDF Full-text (2995 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents novel feature descriptors and classification algorithms for the automated scoring of HER2 in Whole Slide Images (WSI) of breast cancer histology slides. Since a large amount of processing is involved in analyzing WSIs, the primary design goal has been to keep the computational complexity to the minimum possible level and to use simple, yet robust, feature descriptors that can provide accurate classification of the slides. We propose two types of feature descriptors that encode important information about staining patterns and the percentage of staining present in ImmunoHistoChemistry (IHC)-stained slides. The first descriptor is called a characteristic curve, which is a smooth non-increasing curve that represents the variation of the percentage of staining with saturation levels. The second new descriptor introduced in this paper is a local binary pattern (LBP) feature curve, which is also a non-increasing smooth curve that represents the local texture of the staining patterns. Both descriptors show excellent interclass variance and intraclass correlation and are suitable for the design of automatic HER2 classification algorithms. This paper gives the detailed theoretical aspects of the feature descriptors and also provides experimental results and a comparative analysis. Full article
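A minimal numpy sketch of a characteristic curve in the sense defined here, i.e., the fraction of stained pixels remaining as a saturation threshold rises (the synthetic saturation channel and the staining test are placeholders, not the paper's actual descriptor):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for the saturation channel of an IHC-stained tile, in [0, 1].
saturation = rng.random((64, 64))
stained = saturation > 0.2        # placeholder "this pixel is stained" test

# Characteristic curve: for each saturation level s, the fraction of pixels
# that are stained AND have saturation >= s. By construction the curve
# cannot increase with s, matching the non-increasing shape described above.
levels = np.linspace(0.0, 1.0, 21)
curve = np.array([(stained & (saturation >= s)).mean() for s in levels])
```

Comparing such curves between slides is what gives the descriptor its interclass discriminative power.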
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Estimating Full Regional Skeletal Muscle Fibre Orientation from B-Mode Ultrasound Images Using Convolutional, Residual, and Deconvolutional Neural Networks
J. Imaging 2018, 4(2), 29; https://doi.org/10.3390/jimaging4020029
Received: 8 November 2017 / Revised: 17 January 2018 / Accepted: 22 January 2018 / Published: 29 January 2018
PDF Full-text (6231 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper presents an investigation into the feasibility of using deep learning methods for developing arbitrary full spatial resolution regression analysis of B-mode ultrasound images of human skeletal muscle. In this study, we focus on full spatial analysis of muscle fibre orientation, since there is an existing body of work with which to compare results. Previous attempts to automatically estimate fibre orientation from ultrasound are not adequate: they often require manual region selection and feature engineering, provide low-resolution estimates (one angle per muscle), and rarely attempt deep muscles. We build upon our previous work, in which automatic segmentation was used with plain convolutional neural network (CNN) and deep residual convolutional network (ResNet) architectures to predict a low-resolution map of fibre orientation in extracted muscle regions. Here, we use deconvolutions and max-unpooling (DCNN) to regularise and improve the predicted fibre orientation maps for the entire image, including deep muscles, removing the need for automatic segmentation, and we compare our results with the CNN and ResNet, as well as with a previously established feature engineering method, on the same task. Dynamic ultrasound image sequences of the calf muscles were acquired (25 Hz) from 8 healthy volunteers (4 male; ages 25–36, median 30). A combination of expert annotation and interpolation/extrapolation provided labels of regional fibre orientation for each image. The neural networks (CNN, ResNet, DCNN) were then trained, both with and without dropout, using leave-one-out cross-validation. Our results demonstrated robust estimation of full spatial fibre orientation within approximately 6° error, an improvement on previous methods. Full article
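The leave-one-out protocol used above, with each fold holding out one volunteer's data entirely, can be sketched with scikit-learn's LeaveOneGroupOut (the regressor, features, and targets below are stand-ins for the networks and orientation maps in the paper):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
# Stand-ins: 8 volunteers x 20 frames, 12 features per frame, and one
# fibre-orientation angle (degrees) per frame to regress.
X = rng.normal(size=(8 * 20, 12))
angle = X @ rng.normal(size=12) + rng.normal(scale=1.0, size=8 * 20)
volunteer = np.repeat(np.arange(8), 20)

fold_errors = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, angle, groups=volunteer):
    # Fit on 7 volunteers, evaluate on the held-out one.
    model = Ridge().fit(X[train_idx], angle[train_idx])
    fold_errors.append(
        np.mean(np.abs(model.predict(X[test_idx]) - angle[test_idx])))

mean_abs_error = float(np.mean(fold_errors))  # aggregated over 8 folds
```

Grouping by volunteer (rather than by frame) prevents near-duplicate frames from the same subject leaking between training and test folds.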
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Stable Image Registration for In-Vivo Fetoscopic Panorama Reconstruction
J. Imaging 2018, 4(1), 24; https://doi.org/10.3390/jimaging4010024
Received: 31 October 2017 / Revised: 8 January 2018 / Accepted: 9 January 2018 / Published: 19 January 2018
PDF Full-text (29827 KB) | HTML Full-text | XML Full-text
Abstract
Twin-to-Twin Transfusion Syndrome (TTTS) is a condition that occurs in about 10% of pregnancies involving monochorionic twins. This complication can be treated with fetoscopic laser coagulation. The procedure could greatly benefit from panorama reconstruction to gain an overview of the placenta. In previous work, we investigated which steps could improve reconstruction performance in an in-vivo setting. In this work, we improved this registration by proposing a stable region detection method and by extracting matchable features based on a deep-learning approach. Finally, we extracted a measure of the image registration quality and the visibility condition. With experiments, we show that the image registration performance is increased and more consistent. Using these methods, a system can be developed that supports the surgeon during surgery by giving feedback and providing a more complete overview of the placenta. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Feature Paper Article: Glomerulus Classification and Detection Based on Convolutional Neural Networks
J. Imaging 2018, 4(1), 20; https://doi.org/10.3390/jimaging4010020
Received: 6 November 2017 / Revised: 2 January 2018 / Accepted: 8 January 2018 / Published: 16 January 2018
PDF Full-text (16305 KB) | HTML Full-text | XML Full-text
Abstract
Glomerulus classification and detection in kidney tissue segments are key processes in nephropathology, used for the correct diagnosis of disease. In this paper, we deal with the challenge of automating glomerulus classification and detection from digitized kidney slide segments using a deep learning framework. The proposed method applies Convolutional Neural Networks (CNNs) to discriminate between two classes, Glomerulus and Non-Glomerulus, in order to detect the image segments belonging to glomerulus regions. We configure the CNN with the public pre-trained AlexNet model and adapt it to our system by learning from Glomerulus and Non-Glomerulus regions extracted from training slides. Once the model is trained, labeling is performed by applying the CNN classification to the image blocks under analysis. The results indicate that this technique is suitable for correct glomerulus detection in Whole Slide Images (WSI), showing robustness while reducing false positive and false negative detections. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Surface Mesh Reconstruction from Cardiac MRI Contours
J. Imaging 2018, 4(1), 16; https://doi.org/10.3390/jimaging4010016
Received: 8 November 2017 / Revised: 30 December 2017 / Accepted: 30 December 2017 / Published: 10 January 2018
PDF Full-text (51694 KB) | HTML Full-text | XML Full-text
Abstract
We introduce a tool to build a surface mesh able to deal with sparse, heterogeneous, non-parallel, cross-sectional, non-coincidental contours, and show its application to reconstructing surfaces of the heart. In recent years, much research has looked at creating personalised 3D anatomical models of the heart. These models usually incorporate a geometrical reconstruction of the anatomy in order to better understand cardiovascular functions as well as to predict different cardiac processes. As MRI is becoming the standard for cardiac medical imaging, we tested our methodology on cardiac MRI data from standard acquisitions. However, the ability to accurately reconstruct heart anatomy in three dimensions commonly comes with fundamental challenges, notably the trade-off between data fitting and expected visual appearance. Most current techniques either require contours from parallel slices or, if multiple slice orientations are used, require an exact match between these contours. In addition, some methods introduce a bias through the use of prior shape models or through trade-offs between the data matching terms and the smoothing terms. Our approach uses a composition of smooth approximations towards the maximization of the data fitting, ensuring a good match to the input data as well as pleasant interpolation characteristics. To assess our method in the task of cardiac mesh generation, we evaluated its performance on synthetic data obtained from a cardiac statistical shape model as well as on real data. Using a statistical shape model, we simulated standard cardiac MRI acquisition planes and contour data, and performed a multi-parameter evaluation study using plausible cardiac shapes generated from the model. We also show that long-axis contours, as well as the most extremal slices (basal and apical), contain the most structural information, and thus should be taken into account when generating anatomically relevant geometrical cardiovascular surfaces. Our method is used on both epicardial and endocardial left ventricle surfaces, as well as on the right ventricle. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Breast Density Classification Using Local Quinary Patterns with Various Neighbourhood Topologies
J. Imaging 2018, 4(1), 14; https://doi.org/10.3390/jimaging4010014
Received: 27 October 2017 / Revised: 8 December 2017 / Accepted: 5 January 2018 / Published: 8 January 2018
PDF Full-text (2276 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an extension of our previous study, investigating the use of Local Quinary Patterns (LQP) for breast density classification in mammograms with various neighbourhood topologies. The LQP operators are used to capture the texture characteristics of the fibro-glandular disk region (FGDroi) instead of the whole breast area, as the majority of current studies have done. We take a multiresolution and multi-orientation approach, investigate the effects of various neighbourhood topologies, and select dominant patterns to maximise texture information. Subsequently, a Support Vector Machine classifier is used to perform the classification, and a stratified ten-fold cross-validation scheme is employed to evaluate the performance of the method. The proposed method produced competitive results of up to 86.13% and 82.02% accuracy based on 322 and 206 mammograms taken from the Mammographic Image Analysis Society (MIAS) and InBreast datasets, respectively, which is comparable with the state-of-the-art in the literature. Full article
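For readers unfamiliar with the pattern family, a basic 8-neighbour local binary pattern in plain numpy looks like the following; Local Quinary Patterns extend the single centre-versus-neighbour comparison to two thresholds and five levels (this sketch is illustrative, not the paper's LQP operator):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern code for each interior pixel.

    Each neighbour >= centre contributes one bit. Local Quinary Patterns
    generalise this: two thresholds split each comparison into five levels.
    """
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image: the neighbour at (dy, dx) of each pixel.
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return codes

rng = np.random.default_rng(0)
tile = rng.random((32, 32))                       # placeholder image region
hist = np.bincount(lbp_codes(tile).ravel(), minlength=256)  # texture feature
```

The histogram of codes over a region (here the FGDroi) is what would be fed to the SVM classifier.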
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Range Imaging for Motion Compensation in C-Arm Cone-Beam CT of Knees under Weight-Bearing Conditions
J. Imaging 2018, 4(1), 13; https://doi.org/10.3390/jimaging4010013
Received: 7 November 2017 / Revised: 3 January 2018 / Accepted: 3 January 2018 / Published: 6 January 2018
PDF Full-text (5057 KB) | HTML Full-text | XML Full-text
Abstract
C-arm cone-beam computed tomography (CBCT) has recently been used to acquire images of the human knee joint under weight-bearing conditions to assess knee joint health under load. However, involuntary patient motion during image acquisition leads to severe motion artifacts in the subsequent reconstructions. The state-of-the-art uses fiducial markers placed on the patient’s knee to compensate for the induced motion artifacts. The placement of markers is time consuming, tedious, and requires user experience to guarantee reliable motion estimates. To overcome these drawbacks, we recently investigated whether range imaging would allow us to track, estimate, and compensate for patient motion using a range camera. We argue that the dense surface information observed by the camera could reveal more information than only the few surface points of the marker-based method. However, the integration of range imaging with CBCT involves design choices, such as where to position the camera and which algorithm to use to align the data. In this work, three-dimensional rigid body motion is estimated for synthetic data acquired with two different range camera trajectories: a static position on the ground and a dynamic position on the C-arm. Motion estimation is evaluated using two different types of point cloud registration algorithms: a pairwise Iterative Closest Point algorithm as well as a probabilistic groupwise method. We compare the reconstruction results and the estimated motion signals with the ground truth and the current reference standard, a marker-based approach. To this end, we qualitatively and quantitatively assess image quality. The latter is evaluated using the Structural Similarity (SSIM) index. We achieved results comparable to the marker-based approach, which highlights the potential of both point set registration methods for accurately recovering patient motion. The SSIM improved from 0.94 to 0.99 and 0.97 using the static and the dynamic camera trajectories, respectively. Accurate recovery of patient motion resulted in a remarkable reduction in motion artifacts in the CBCT reconstructions, which is promising for future work with real data. Full article
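The SSIM figure of merit reported here combines luminance, contrast, and structure terms; a single-window version of the standard formula can be sketched as follows (the reference SSIM averages this quantity over small local windows, so this is a simplification, and the images are synthetic stand-ins):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """SSIM computed once over whole images; the usual index averages the
    same formula over small local windows."""
    c1 = (0.01 * data_range) ** 2     # stabilisers from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                  # stand-in reconstruction
distorted = np.clip(reference + rng.normal(scale=0.05, size=(64, 64)), 0, 1)
score = global_ssim(reference, distorted)         # 1.0 means identical
```

Identical images score exactly 1.0, which is why values such as 0.99 indicate near-complete artifact removal.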
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Estimating Bacterial and Cellular Load in FCFM Imaging
J. Imaging 2018, 4(1), 11; https://doi.org/10.3390/jimaging4010011
Received: 7 November 2017 / Revised: 13 December 2017 / Accepted: 13 December 2017 / Published: 5 January 2018
PDF Full-text (7607 KB) | HTML Full-text | XML Full-text
Abstract
We address the task of estimating bacterial and cellular load in the human distal lung with fibered confocal fluorescence microscopy (FCFM). In pulmonary FCFM, some cells can display autofluorescence, and they appear as disc-like objects in the FCFM images, whereas bacteria, although not autofluorescent, appear as bright blinking dots when exposed to a targeted smartprobe. Estimating bacterial and cellular load becomes a challenging task due to the presence of background from autofluorescent human lung tissues, i.e., elastin, and imaging artifacts from motion, etc. We create databases of annotated images for both tasks, in which bacteria and cells were labelled, and use these databases for supervised learning. We extract image patches around each pixel as features, and train a classifier to predict whether a bacterium or cell is present at that pixel. We apply our approach to two datasets for detecting bacteria and cells, respectively. For the bacteria dataset, we show that the estimated bacterial load increases after introducing the targeted smartprobe in the presence of bacteria. For the cell dataset, we show that the estimated cellular load agrees with a clinician’s assessment. Full article
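The patch-per-pixel feature extraction described here is straightforward to sketch (the frame, coordinates, and patch size below are illustrative; the classifier trained on such patches is omitted):

```python
import numpy as np

def pixel_patches(img, coords, half=3):
    """Flattened (2*half+1)^2 patch around each (row, col) coordinate.
    Coordinates too close to the border are skipped for simplicity."""
    h, w = img.shape
    return np.array([img[r - half:r + half + 1, c - half:c + half + 1].ravel()
                     for r, c in coords
                     if half <= r < h - half and half <= c < w - half])

rng = np.random.default_rng(0)
frame = rng.random((128, 128))            # stand-in for one FCFM frame
pixels = [(10, 10), (64, 64), (0, 5)]     # the last is too near the border
features = pixel_patches(frame, pixels)   # one 49-dimensional row per pixel
```

Each row would be paired with an annotation (bacterium/cell present or not) to form the supervised training set.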
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Reference Tracts and Generative Models for Brain White Matter Tractography
J. Imaging 2018, 4(1), 8; https://doi.org/10.3390/jimaging4010008
Received: 3 November 2017 / Revised: 13 December 2017 / Accepted: 26 December 2017 / Published: 28 December 2017
PDF Full-text (1646 KB) | HTML Full-text | XML Full-text
Abstract
Background: Probabilistic neighborhood tractography aims to automatically segment brain white matter tracts from diffusion magnetic resonance imaging (dMRI) data in different individuals. It uses reference tracts as priors for the shape and length of the tract, and matching models that describe typical deviations from these. We evaluated new reference tracts and matching models derived from dMRI data acquired from 80 healthy volunteers, aged 25–64 years. Methods: The new reference tracts and models were tested in 50 healthy older people, aged 71.8 ± 0.4 years. The matching models were further assessed by sampling and visualizing synthetic tracts derived from them. Results: We found that data-generated reference tracts improved the success rate of automatic white matter tract segmentations. We observed an increased rate of visually acceptable tracts, and decreased variation in quantitative parameters when using this approach. Sampling from the matching models demonstrated their quality, independently of the testing data. Conclusions: We have improved the automatic segmentation of brain white matter tracts, and demonstrated that matching models can be successfully transferred to novel data. In many cases, this will bypass the need for training data and make the use of probabilistic neighborhood tractography in small testing datasets newly practicable. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Automatic Detection and Distinction of Retinal Vessel Bifurcations and Crossings in Colour Fundus Photography
J. Imaging 2018, 4(1), 4; https://doi.org/10.3390/jimaging4010004
Received: 7 November 2017 / Revised: 12 December 2017 / Accepted: 14 December 2017 / Published: 22 December 2017
PDF Full-text (3084 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The analysis of the retinal blood vessels present in fundus images, and the addressing of problems such as blood clot location, are important for accurate and appropriate treatment of the vessels. Such tasks are hampered by the challenge of accurately tracing problems back along vessels to their source. This is due to the unresolved issue of automatically distinguishing between vessel bifurcations and vessel crossings in colour fundus photographs. In this paper, we present a new technique for addressing this problem, using a convolutional neural network approach first to locate vessel bifurcations and crossings and then to classify them as either bifurcations or crossings. Our method achieves high accuracies for junction detection and classification on the DRIVE dataset, and we show further validation on an unseen dataset from which no data have been used for training. Combined with work in automated segmentation, this method has the potential to facilitate reconstruction of vessel topography, classification of veins and arteries, and automated localisation of blood clots and other disease symptoms, leading to improved management of eye disease. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Texture Based Quality Analysis of Simulated Synthetic Ultrasound Images Using Local Binary Patterns
J. Imaging 2018, 4(1), 3; https://doi.org/10.3390/jimaging4010003
Received: 28 October 2017 / Revised: 14 December 2017 / Accepted: 18 December 2017 / Published: 21 December 2017
PDF Full-text (4890 KB) | HTML Full-text | XML Full-text
Abstract
Speckle noise reduction is an important area of research in the field of ultrasound image processing. Several algorithms for speckle noise characterization and analysis have been recently proposed in the area. Synthetic ultrasound images can play a key role in noise evaluation methods as they can be used to generate a variety of speckle noise models under different interpolation and sampling schemes, and can also provide valuable ground truth data for estimating the accuracy of the chosen methods. However, not much work has been done in the area of modeling synthetic ultrasound images, and in simulating speckle noise generation to get images that are as close as possible to real ultrasound images. An important aspect of simulated synthetic ultrasound images is the requirement for extensive quality assessment for ensuring that they have the texture characteristics and gray-tone features of real images. This paper presents texture feature analysis of synthetic ultrasound images using local binary patterns (LBP) and demonstrates the usefulness of a set of LBP features for image quality assessment. Experimental results presented in the paper clearly show how these features could provide an accurate quality metric that correlates very well with subjective evaluations performed by clinical experts. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article: Segmentation and Shape Analysis of Macrophages Using Anglegram Analysis
J. Imaging 2018, 4(1), 2; https://doi.org/10.3390/jimaging4010002
Received: 7 November 2017 / Revised: 15 December 2017 / Accepted: 16 December 2017 / Published: 21 December 2017
PDF Full-text (23329 KB) | HTML Full-text | XML Full-text
Abstract
Cell migration is crucial in many processes of development and maintenance of multicellular organisms, and it can also be related to disease, e.g., cancer metastasis, when cells migrate to organs other than the one in which they originated. A precise analysis of cell shapes in biological studies could lead to insights about migration. However, in some cases, the interaction and overlap of cells can complicate the detection and interpretation of their shapes. This paper describes an algorithm to segment and analyse the shape of macrophages in fluorescent microscopy image sequences, and compares the segmentation of overlapping cells through different algorithms. A novel 2D matrix with multiscale angle variation, called the anglegram, based on the angles between points of the boundary of an object, is used for this purpose. The anglegram is used to find junctions of cells and is applied in two different ways: (i) segmentation of overlapping and non-overlapping cells; and (ii) detection of the “corners”, or pointy edges, in the shapes. The functionalities of the anglegram were tested and validated with synthetic data and on fluorescently labelled macrophages observed in embryos of Drosophila melanogaster. The information that can be extracted from the anglegram shows good promise for shape determination and analysis, whether this involves overlapping or non-overlapping objects. Full article
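A simplified version of the multiscale angle idea behind the anglegram can be sketched as follows: row i, column s holds the angle at boundary point i between its neighbours s steps behind and s steps ahead (the paper's construction differs in details such as boundary sampling and ordering):

```python
import numpy as np

def anglegram(boundary):
    """A[i, s-1]: angle (degrees) at boundary point i between the vectors to
    its neighbours s steps behind and s steps ahead along the boundary."""
    n = len(boundary)
    scales = range(1, n // 2)
    A = np.zeros((n, len(scales)))
    idx = np.arange(n)
    for j, s in enumerate(scales):
        back = boundary[(idx - s) % n] - boundary   # vector to point s behind
        fwd = boundary[(idx + s) % n] - boundary    # vector to point s ahead
        cosang = (back * fwd).sum(axis=1) / (
            np.linalg.norm(back, axis=1) * np.linalg.norm(fwd, axis=1))
        A[:, j] = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return A

# A square sampled at 8 boundary points: corners show 90°, edge midpoints 180°.
square = np.array([[0, 0], [1, 0], [2, 0], [2, 1],
                   [2, 2], [1, 2], [0, 2], [0, 1]], dtype=float)
A = anglegram(square)
```

Persistent small angles across columns (scales) are what flag pointy corners and cell junctions.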
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
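As a rough sketch of the underlying idea (not the authors' implementation), an anglegram can be built by measuring, at every boundary point, the angle formed with neighbours at increasing separations along the closed contour; the function name, the degree convention and the test contour are assumptions:

```python
import numpy as np

def anglegram(boundary, max_scale=20):
    """Entry (i, s-1): angle in degrees at boundary point i, formed with its
    neighbours s steps before and after it along the (closed) boundary."""
    n = len(boundary)
    A = np.zeros((n, max_scale))
    for i in range(n):
        p = boundary[i]
        for s in range(1, max_scale + 1):
            v1 = boundary[(i - s) % n] - p   # towards the earlier neighbour
            v2 = boundary[(i + s) % n] - p   # towards the later neighbour
            c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            A[i, s - 1] = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    return A

# Smoke test: on a regular 100-gon the angle at scale s is exactly 180 - 3.6*s.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
A = anglegram(circle, max_scale=5)
```

In this picture, junctions and "corners" show up as rows whose angles stay well below 180 degrees across many scales, while smooth boundary stretches remain close to 180.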

Open Access Article: Restoration of Bi-Contrast MRI Data for Intensity Uniformity with Bayesian Coring of Co-Occurrence Statistics
J. Imaging 2017, 3(4), 67; https://doi.org/10.3390/jimaging3040067
Received: 30 August 2017 / Revised: 7 December 2017 / Accepted: 12 December 2017 / Published: 15 December 2017
Abstract
The reconstruction of MRI data assumes a uniform radio-frequency field. However, in practice, the radio-frequency field is inhomogeneous and leads to anatomically inconsequential intensity non-uniformities across an image. An anatomic region can be imaged with multiple contrasts reconstructed independently, each suffering from different non-uniformities. These artifacts can complicate the further automated analysis of the images. A method is presented for the joint intensity uniformity restoration of two such images. The effect of the intensity distortion on the auto-co-occurrence statistics of each image, as well as on the joint-co-occurrence statistics of the two images, is modeled and used for their non-stationary restoration, followed by back-projection to the images. Several constraints that ensure a stable restoration are also imposed. Moreover, the method considers the inevitable differences between the signal regions of the two images. The method has been evaluated extensively with BrainWeb phantom brain data as well as with brain anatomic data from the Human Connectome Project (HCP) and with data of Parkinson’s disease patients. The performance of the proposed method has been compared with that of the N4ITK tool. The proposed method increases tissue contrast at least 4.62 times more than the N4ITK tool for the BrainWeb images. The dynamic range with the N4ITK method for the same images is increased by up to +29.77%, whereas, for the proposed method, it has a corresponding limited decrease of −1.15%, as expected. The validation has demonstrated the accuracy and stability of the proposed method and hence its ability to reduce the requirements for additional calibration scans.
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
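The restoration above is driven by co-occurrence statistics. As a minimal, hypothetical illustration of the joint statistics of two co-registered contrasts (not the paper's Bayesian coring model), one can form the normalised joint intensity histogram of the two images; function and variable names are assumptions:

```python
import numpy as np

def joint_cooccurrence(img_a, img_b, bins=64):
    """Joint intensity histogram of two co-registered images: entry (i, j)
    is the probability that a voxel falls in intensity bin i of image A
    and bin j of image B."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return h / h.sum()  # normalise counts to a joint probability

# Two synthetic "contrasts" of the same region, correlated as real
# co-registered scans would be.
rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 10.0, size=(64, 64))
t2 = 0.5 * t1 + rng.normal(0.0, 2.0, size=(64, 64))
P = joint_cooccurrence(t1, t2)
```

A smooth multiplicative bias field spreads mass away from the sharp tissue clusters of this joint distribution, which is what a joint restoration can exploit.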

Open Access Article: Deep Learning vs. Conventional Machine Learning: Pilot Study of WMH Segmentation in Brain MRI with Absence or Mild Vascular Pathology
J. Imaging 2017, 3(4), 66; https://doi.org/10.3390/jimaging3040066
Received: 7 November 2017 / Revised: 7 December 2017 / Accepted: 12 December 2017 / Published: 14 December 2017
Abstract
In the wake of the use of deep learning algorithms in medical image analysis, we compared the performance of deep learning algorithms, namely the deep Boltzmann machine (DBM), convolutional encoder network (CEN) and patch-wise convolutional neural network (patch-CNN), with two conventional machine learning schemes, support vector machine (SVM) and random forest (RF), for white matter hyperintensities (WMH) segmentation on brain MRI with mild or no vascular pathology. We also compared all these approaches with the lesion growth algorithm (LGA) from the public Lesion Segmentation Tool toolbox. We used a dataset comprising 60 MRI scans from 20 subjects in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, each subject scanned once per year over three consecutive years. Spatial agreement score, receiver operating characteristic and precision-recall performance curves, volume disagreement score, agreement with intra-/inter-observer reliability measurements and visual evaluation were used to find the best configuration of each learning algorithm for WMH segmentation. By using optimum threshold values for the probabilistic output from each algorithm to produce binary masks of WMH, we found that SVM and RF produced good results for medium to very large WMH burdens, but the deep learning algorithms performed generally better than the conventional ones in most evaluations.
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
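Spatial agreement between a predicted and a reference WMH mask is commonly summarised with the Dice coefficient; a minimal sketch on binary masks (the mask names are illustrative, not tied to the study's data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), defined as 1 when both masks are empty."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((10, 10), dtype=bool); pred[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((10, 10), dtype=bool); ref[4:8, 4:8] = True    # 16 voxels, 4 shared
print(dice(pred, ref))  # 2*4 / (16+16) = 0.25
```

Thresholding each algorithm's probabilistic output at its optimum value, as in the study, yields exactly such binary masks for comparison.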

Open Access Feature Paper: Mereotopological Correction of Segmentation Errors in Histological Imaging
J. Imaging 2017, 3(4), 63; https://doi.org/10.3390/jimaging3040063
Received: 30 October 2017 / Revised: 5 December 2017 / Accepted: 6 December 2017 / Published: 12 December 2017
Abstract
In this paper, we describe mereotopological methods to programmatically correct image segmentation errors, in particular those that fail to fulfil expected spatial relations in digitised histological scenes. The proposed approach exploits a spatial logic called discrete mereotopology to integrate a number of qualitative spatial reasoning and constraint satisfaction methods into imaging procedures. Eight mereotopological relations defined on binary region pairs are represented as nodes in a set of 20 directed graphs, where the node-to-node graph edges encode the possible transitions between the spatial relations after set-theoretic and discrete topological operations on the regions are applied. The graphs allow one to identify sequences of operations that, applied to regions standing in a given relation, transform that relation into another, and they enable one to re-segment an image that fails to conform to a valid histological model into one that does. Examples of the methods are presented using images of H&E-stained human carcinoma cell line cultures.
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
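As an illustrative, much simplified sketch of classifying relations between binary region pairs (RCC8-style names, with a one-step dilation standing in for "connection"; these are assumptions, not the paper's formal discrete-mereotopology definitions):

```python
import numpy as np

def dilate(r):
    """One-step 4-neighbour dilation (np.roll wraps; fine for interior regions)."""
    return r | np.roll(r, 1, 0) | np.roll(r, -1, 0) | np.roll(r, 1, 1) | np.roll(r, -1, 1)

def rcc8(x, y):
    """Classify the RCC8-style relation of binary regions x and y."""
    inter = (x & y).any()
    xy, yx = (x & ~y).any(), (y & ~x).any()
    if not inter:
        return "EC" if (dilate(x) & y).any() else "DC"  # touching / disconnected
    if xy and yx:
        return "PO"                                     # partial overlap
    if not xy and not yx:
        return "EQ"                                     # equal
    inner, outer = (x, y) if not xy else (y, x)         # inner is a proper part
    tangential = (dilate(inner) & ~outer).any()         # inner touches the boundary
    tag = "TPP" if tangential else "NTPP"
    return tag if inner is x else tag + "i"
```

A correction step of the kind described above would detect, say, a nucleus region that is not a proper part of its cell region and search the transition graphs for operations restoring the expected relation.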

Open Access Article: Epithelium and Stroma Identification in Histopathological Images Using Unsupervised and Semi-Supervised Superpixel-Based Segmentation
J. Imaging 2017, 3(4), 61; https://doi.org/10.3390/jimaging3040061
Received: 27 October 2017 / Revised: 5 December 2017 / Accepted: 6 December 2017 / Published: 11 December 2017
Cited by 1
Abstract
We present superpixel-based segmentation frameworks for unsupervised and semi-supervised epithelium-stroma identification in histopathological images of oropharyngeal tissue microarrays. A superpixel segmentation algorithm is initially used to split up the image into binary regions (superpixels), whose colour features are extracted and fed into several base clustering algorithms with various parameter initializations. Two Consensus Clustering (CC) formulations are then used: Evidence Accumulation Clustering (EAC) and a voting-based consensus function. These combine the base clustering outcomes to obtain a more robust detection of tissue compartments than the base clustering methods on their own. For the voting-based function, a technique is introduced to generate consistent labellings across the base clustering results. The obtained CC result is then utilized to build a self-training Semi-Supervised Classification (SSC) model. Unlike supervised segmentation, which relies on a large number of labelled training images, our SSC approach performs high-quality segmentation while relying on few labelled samples. Experiments conducted on forty-five hand-annotated images of oropharyngeal cancer tissue microarrays show that (a) the CC algorithm generates more accurate and stable results than the individual clustering algorithms; (b) the clustering performance of the voting-based function outperforms the existing EAC; and (c) the proposed SSC algorithm outperforms the supervised methods while being trained with only a few labelled instances.
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
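A hedged sketch of a voting-based consensus function: each base clustering is first relabelled to best match a reference partition (here by greedy maximum overlap; the paper's labelling-consistency technique may differ, and an optimal assignment would use the Hungarian algorithm), then a per-sample majority vote is taken:

```python
import numpy as np

def align_labels(ref, part, k):
    """Relabel `part` so its clusters best match `ref`, greedily by overlap."""
    overlap = np.zeros((k, k), dtype=int)
    for r, p in zip(ref, part):
        overlap[p, r] += 1
    mapping, used = {}, set()
    for p in np.argsort(-overlap.max(axis=1)):  # most confident clusters first
        r = max((c for c in range(k) if c not in used), key=lambda c: overlap[p, c])
        mapping[p], used = r, used | {r}
    return np.array([mapping[p] for p in part])

def voting_consensus(partitions, k):
    """Majority vote per sample after aligning every partition to the first."""
    ref = partitions[0]
    aligned = [ref] + [align_labels(ref, p, k) for p in partitions[1:]]
    votes = np.stack(aligned)
    return np.array([np.bincount(votes[:, i], minlength=k).argmax()
                     for i in range(votes.shape[1])])
```

Without the alignment step, two base clusterings that agree perfectly but use swapped label names would cancel each other out in the vote.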

Open Access Article: Rapid Interactive and Intuitive Segmentation of 3D Medical Images Using Radial Basis Function Interpolation
J. Imaging 2017, 3(4), 56; https://doi.org/10.3390/jimaging3040056
Received: 18 October 2017 / Revised: 25 November 2017 / Accepted: 28 November 2017 / Published: 30 November 2017
Abstract
Segmentation is one of the most important parts of medical image analysis. Manual segmentation is very cumbersome, time-consuming, and prone to inter-observer variability. Fully automatic segmentation approaches require a large amount of labeled training data and may fail in difficult or abnormal cases. In this work, we propose a new method for 2D segmentation of individual slices and 3D interpolation of the segmented slices. The Smart Brush functionality quickly segments the region of interest in a few 2D slices. Given these annotated slices, our adapted formulation of Hermite radial basis functions reconstructs the 3D surface. Effective interactions with fewer equations accelerate the performance, so that real-time, intuitive, interactive segmentation of 3D objects can be supported effectively. The proposed method is evaluated on 12 clinical 3D magnetic resonance imaging data sets and compared to gold-standard annotations of the left ventricle from a clinical expert. The automatic evaluation of the 2D Smart Brush resulted in an average Dice coefficient of 0.88 ± 0.09 for the individual slices. For the 3D interpolation using Hermite radial basis functions, an average Dice coefficient of 0.94 ± 0.02 is achieved.
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
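As a simplified illustration of radial-basis-function interpolation (plain Gaussian RBFs rather than the Hermite formulation used in the paper), the interpolation weights come from solving a single linear system over the annotated points:

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Solve for Gaussian-RBF weights so the interpolant matches `values`
    exactly at the given centers."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(eps * d) ** 2), values)

def rbf_eval(points, centers, w, eps=1.0):
    """Evaluate the fitted interpolant at arbitrary points."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

# Four annotated points with scalar labels; the interpolant reproduces
# them exactly and fills in the space between.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
w = rbf_fit(centers, vals)
```

The Hermite variant constrains surface normals as well as positions, which is what lets a few annotated slices pin down a full 3D surface; the system size, and hence the interaction latency, grows with the number of constraints.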

Open Access Article: Modelling of Orthogonal Craniofacial Profiles
J. Imaging 2017, 3(4), 55; https://doi.org/10.3390/jimaging3040055
Received: 20 October 2017 / Revised: 18 November 2017 / Accepted: 23 November 2017 / Published: 30 November 2017
Abstract
We present a fully-automatic image processing pipeline to build a set of 2D morphable models of three craniofacial profiles from orthogonal viewpoints (side, front and top), using a set of 3D head surface images. Subjects in this dataset wear a close-fitting latex cap to reveal the overall skull shape. Texture-based 3D pose normalization and facial landmarking are applied to extract the profiles from 3D raw scans. Fully-automatic profile annotation, subdivision and registration methods are used to establish dense correspondence among sagittal profiles. The collection of sagittal profiles in dense correspondence is scaled and aligned using Generalised Procrustes Analysis (GPA) before applying principal component analysis to generate a morphable model. Additionally, we propose a new alternative alignment called the Ellipse Centre Nasion (ECN) method. Our model is used in a case study of craniosynostosis intervention outcome evaluation, and the evaluation reveals that the proposed model achieves state-of-the-art results. We make publicly available both the morphable models and the profile dataset used to construct them.
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
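The model-building core, Procrustes alignment followed by principal component analysis, can be sketched as follows for 2D profiles in dense correspondence. This is a generic similarity Procrustes against a single reference, not the paper's full GPA (which iterates against an evolving mean) or its ECN alignment:

```python
import numpy as np

def procrustes_align(shape, ref):
    """Similarity-align `shape` (n x 2) to `ref`: remove translation,
    then find the optimal rotation and scale via SVD (orthogonal Procrustes)."""
    a = shape - shape.mean(0)
    a = a / np.linalg.norm(a)
    b = ref - ref.mean(0)
    u, s, vt = np.linalg.svd(a.T @ b)
    return s.sum() * (a @ (u @ vt))

def morphable_model(shapes):
    """Align every profile to the first, then run PCA on the stacked coords."""
    ref = shapes[0]
    aligned = np.stack([procrustes_align(s, ref).ravel() for s in shapes])
    mean = aligned.mean(0)
    _, sv, vt = np.linalg.svd(aligned - mean, full_matrices=False)
    return mean, vt, sv  # mean shape, modes of variation, their scales
```

New profile instances are then generated as the mean plus a weighted sum of the leading modes, which is what makes the model "morphable".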
