
Table of Contents

J. Imaging, Volume 5, Issue 3 (March 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Thanks to a two-pass noise removal process based on Gaussian blurring of the original frames using [...]
Displaying articles 1-10
Open Access Article
Tracking and Linking of Microparticle Trajectories During Mode-Coupling Induced Melting in a Two-Dimensional Complex Plasma Crystal
J. Imaging 2019, 5(3), 41; https://doi.org/10.3390/jimaging5030041
Received: 28 January 2019 / Revised: 11 March 2019 / Accepted: 12 March 2019 / Published: 16 March 2019
Viewed by 951 | PDF Full-text (2885 KB) | HTML Full-text | XML Full-text
Abstract
In this article, a strategy to track microparticles and link their trajectories adapted to the study of the melting of a quasi two-dimensional complex plasma crystal induced by the mode-coupling instability is presented. Because of the three-dimensional nature of the microparticle motions and the inhomogeneities of the illuminating laser light sheet, the scattered light intensity can change significantly between two frames, making the detection of the microparticles and the linking of their trajectories quite challenging. Thanks to a two-pass noise removal process based on Gaussian blurring of the original frames using two different kernel widths, the signal-to-noise ratio was increased to a level that allowed a better intensity thresholding of different regions of the images and, therefore, the tracking of the poorly illuminated microparticles. Then, by predicting the positions of the microparticles based on their previous positions, long particle trajectories could be reconstructed, allowing accurate measurement of the evolution of the microparticle energies and the evolution of the monolayer properties. Full article
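The two-pass denoising idea described in the abstract can be sketched as follows; the kernel widths, frame size, gradient, and particle brightness here are hypothetical illustrations, not the paper's actual parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_pass_denoise(frame, sigma_noise=1.0, sigma_bg=8.0):
    """Two-pass cleanup: a narrow Gaussian suppresses pixel noise, and a
    wide Gaussian estimates the slowly varying illumination background,
    which is then subtracted to flatten the laser-sheet inhomogeneity."""
    smoothed = gaussian_filter(frame.astype(float), sigma_noise)
    background = gaussian_filter(frame.astype(float), sigma_bg)
    return np.clip(smoothed - background, 0.0, None)

# Hypothetical frame: a dim particle sitting on an illumination gradient.
frame = np.tile(np.linspace(50.0, 100.0, 64), (64, 1))
frame[32, 32] += 80.0  # the particle
cleaned = two_pass_denoise(frame)
peak = np.unravel_index(np.argmax(cleaned), cleaned.shape)
```

Subtracting the wide-kernel blur flattens the uneven illumination, so a single intensity threshold can then pick out even poorly lit particles.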
(This article belongs to the Special Issue Image Processing in Soft Condensed Matter)
Figures

Figure 1

Open Access Article
Visualisation and Analysis of Speech Production with Electropalatography
J. Imaging 2019, 5(3), 40; https://doi.org/10.3390/jimaging5030040
Received: 16 January 2019 / Revised: 6 March 2019 / Accepted: 9 March 2019 / Published: 15 March 2019
Viewed by 1055 | PDF Full-text (15105 KB) | HTML Full-text | XML Full-text
Abstract
The process of speech production, i.e., the compression of air in the lungs, the vibration activity of the larynx, and the movement of the articulators, is of great interest in phonetics, phonology, and psychology. One technique by which speech production is analysed is electropalatography, in which an artificial palate, moulded to the speaker’s hard palate, is introduced in the mouth. The palate contains a grid of electrodes, which monitor the spatial and temporal pattern of contact between the tongue and the palate during speech production. The output is a time sequence of images, known as palatograms, which show the 2D distribution of electrode activation. This paper describes a series of tools for the visualisation and analysis of palatograms and their associated sound signals. The tools are developed as Matlab® routines and released as an open-source toolbox. The particular focus is the analysis of the amount and direction of left–right asymmetry in tongue–palate contact during the production of different speech sounds. Asymmetry in the articulation of speech, as measured by electropalatography, may be related to the language under consideration, the speaker’s anatomy, irregularities in the palate manufacture, or speaker handedness (i.e., left or right). In addition, a pipeline for the segmentation and analysis of a three-dimensional computed tomography data set of an artificial palate is described and demonstrated. The segmentation procedure provides quantitative information about asymmetry that is due to a combination of speaker anatomy (the shape of the hard palate) and the positioning of the electrodes during manufacture of the artificial palate. The tools provided here should be useful in future studies of electropalatography. Full article
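The left-right asymmetry measurement described above can be illustrated on a toy palatogram; the 8x8 grid and the (L - R)/(L + R) index are illustrative assumptions, not necessarily the toolbox's exact definition:

```python
import numpy as np

def lr_asymmetry(palatogram):
    """Left-right asymmetry index (L - R) / (L + R) of a binary electrode
    grid: L and R count activated electrodes in each half of the palate.
    Positive values mean more contact on the left; 0 if no contact."""
    n_cols = palatogram.shape[1]
    left = palatogram[:, : n_cols // 2].sum()
    right = palatogram[:, (n_cols + 1) // 2 :].sum()  # skips a middle column if n_cols is odd
    total = left + right
    return 0.0 if total == 0 else float(left - right) / float(total)

# Hypothetical 8x8 palatogram with much more contact on the left.
pal = np.zeros((8, 8), dtype=int)
pal[:, :3] = 1      # strong left-side contact
pal[0:2, 5:] = 1    # a little right-side contact
```

Computed per palatogram in a time sequence, such an index tracks how asymmetry evolves over the course of an utterance.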
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
Figures

Figure 1

Open Access Article
Image Registration with Particles, Examplified with the Complex Plasma Laboratory PK-4 on Board the International Space Station
J. Imaging 2019, 5(3), 39; https://doi.org/10.3390/jimaging5030039
Received: 28 January 2019 / Revised: 26 February 2019 / Accepted: 6 March 2019 / Published: 14 March 2019
Viewed by 885 | PDF Full-text (2258 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Often, in complex plasmas and beyond, images of particles are recorded with a side-by-side camera setup. These images ideally need to be joined to create a large combined image. This is, for instance, the case in the PK-4 Laboratory on board the International Space Station (the next generation of complex plasma laboratories in space). It enables observations of microparticles embedded in an elongated low temperature DC plasma tube. The microparticles acquire charges from the surrounding plasma and interact strongly with each other. A sheet of laser light illuminates the microparticles, and two cameras record the motion of the microparticles inside this laser sheet. The fields of view of these cameras slightly overlap. In this article, we present two methods to combine the associated image pairs into one image, namely the SimpleElastix toolkit based on comparing the mutual information and a method based on detecting the particle positions. We found that the method based on particle positions performs slightly better than that based on the mutual information, and conclude with recommendations for other researchers wanting to solve a related problem. Full article
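The particle-position-based registration method can be sketched roughly as follows; the single-iteration nearest-neighbour matching, the tolerance, and all coordinates are hypothetical simplifications, not the authors' actual implementation:

```python
import numpy as np

def estimate_shift(pts_left, pts_right, guess, tol=2.0):
    """Refine the translation that maps the right camera's particle
    coordinates into the left camera's frame. Starting from a rough
    guess (e.g. the nominal overlap), each right-camera particle is
    matched to its nearest left-camera particle, and the mean residual
    corrects the guess. A single iteration is shown for clarity."""
    shifted = pts_right + guess
    # Brute-force nearest-neighbour matching (fine for a few hundred particles).
    d = np.linalg.norm(shifted[:, None, :] - pts_left[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    ok = d[np.arange(len(shifted)), nn] < tol   # drop unmatched particles
    residual = (pts_left[nn[ok]] - shifted[ok]).mean(axis=0)
    return guess + residual

# Hypothetical overlap-region particles; the true shift is (100, 0) pixels.
rng = np.random.default_rng(0)
pts_left = rng.uniform(0.0, 50.0, size=(30, 2)) + np.array([100.0, 0.0])
pts_right = pts_left - np.array([100.0, 0.0]) + rng.normal(0.0, 0.1, size=(30, 2))
shift = estimate_shift(pts_left, pts_right, guess=np.array([99.0, 0.5]))
```

Once the shift is known, the two camera images can be pasted into one combined frame with the overlap region blended or cropped.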
(This article belongs to the Special Issue Image Processing in Soft Condensed Matter)
Figures

Figure 1

Open Access Article
High-Level Synthesis of Online K-Means Clustering Hardware for a Real-Time Image Processing Pipeline
J. Imaging 2019, 5(3), 38; https://doi.org/10.3390/jimaging5030038
Received: 29 November 2018 / Revised: 6 March 2019 / Accepted: 7 March 2019 / Published: 14 March 2019
Cited by 1 | Viewed by 985 | PDF Full-text (5883 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The growing need for smart surveillance solutions requires that modern video capturing devices be equipped with advanced features such as object detection, scene characterization, and event detection. Image segmentation into various connected regions is a vital pre-processing step in these and other advanced computer vision algorithms. Thus, the inclusion of a hardware accelerator for this task in the conventional image processing pipeline inevitably reduces the workload for more advanced operations downstream. Moreover, design entry by using high-level synthesis tools is gaining popularity for the facilitation of system development under a rapid prototyping paradigm. To address these design requirements, we have developed a hardware accelerator for image segmentation, based on an online K-Means algorithm using a Simulink high-level synthesis tool. The developed hardware uses a standard pixel streaming protocol, and it can be readily inserted into any image processing pipeline as an Intellectual Property (IP) core on a Field Programmable Gate Array (FPGA). Furthermore, the proposed design reduces the hardware complexity of the conventional architectures by employing a weighted average instead of a moving average to update the clusters. Experimental evidence has also been provided to demonstrate that the proposed weighted-average-based approach yields better results than the conventional moving average on test video sequences. The synthesized hardware has been tested in a real-time environment to process Full HD video at 26.5 fps, while the estimated dynamic power consumption is less than 90 mW on the Xilinx Zynq-7000 SoC. Full article
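The weighted-average cluster update that replaces the moving average can be sketched in software; the learning rate alpha and the 1-D intensity clustering are illustrative choices, not the paper's hardware parameters:

```python
import numpy as np

def online_kmeans_weighted(pixels, centroids, alpha=0.05):
    """Online K-Means with an exponentially weighted average update:
    each incoming pixel pulls its nearest centroid a fixed fraction
    alpha towards itself. Unlike a running (moving) average, no
    per-cluster sample counter is needed -- the simplification that
    reduces hardware complexity."""
    c = centroids.astype(float).copy()
    for p in pixels:
        k = np.argmin(np.abs(c - p))   # nearest cluster (1-D intensity here)
        c[k] += alpha * (p - c[k])     # weighted-average update
    return c

# Hypothetical pixel stream from two intensity populations (~50 and ~200).
rng = np.random.default_rng(2)
pixels = rng.permutation(np.concatenate([rng.normal(50, 3, 200),
                                         rng.normal(200, 3, 200)]))
c = online_kmeans_weighted(pixels, np.array([0.0, 255.0]))
```

Because the update is a single multiply-accumulate per pixel, it maps naturally onto a streaming pixel protocol.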
(This article belongs to the Special Issue Image Processing Using FPGAs) Printed Edition available
Figures

Figure 1

Open Access Article
Deep Learning for Breast Cancer Diagnosis from Mammograms—A Comparative Study
J. Imaging 2019, 5(3), 37; https://doi.org/10.3390/jimaging5030037
Received: 27 January 2019 / Revised: 5 March 2019 / Accepted: 7 March 2019 / Published: 13 March 2019
Viewed by 1250 | PDF Full-text (1045 KB) | HTML Full-text | XML Full-text
Abstract
Deep convolutional neural networks (CNNs) are investigated in the context of computer-aided diagnosis (CADx) of breast cancer. State-of-the-art CNNs are trained and evaluated on two mammographic datasets, consisting of ROIs depicting benign or malignant mass lesions. The performance evaluation of each examined network is addressed in two training scenarios: the first involves initializing the network with pre-trained weights, while for the second the networks are initialized in a random fashion. Extensive experimental results show the superior performance achieved in the case of fine-tuning a pretrained network compared to training from scratch. Full article
(This article belongs to the Special Issue Modern Advances in Image Fusion)
Figures

Figure 1

Open Access Article
Identification of the Interface in a Binary Complex Plasma Using Machine Learning
J. Imaging 2019, 5(3), 36; https://doi.org/10.3390/jimaging5030036
Received: 18 February 2019 / Revised: 26 February 2019 / Accepted: 6 March 2019 / Published: 12 March 2019
Viewed by 856 | PDF Full-text (1991 KB) | HTML Full-text | XML Full-text
Abstract
A binary complex plasma consists of two different types of dust particles in an ionized gas. Due to the spinodal decomposition and force imbalance, particles of different masses and diameters are typically phase separated, resulting in an interface. Both external excitation and internal instability may cause the interface to move with time. Support vector machine (SVM) is a supervised machine learning method that can be very effective for multi-class classification. We applied an SVM classification method based on image brightness to locate the interface in a binary complex plasma. Taking the scaled mean and variance as features, three areas, namely small particles, big particles and plasma without dust particles, were distinguished, leading to the identification of the interface between small and big particles. Full article
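The scaled mean and variance features can be computed per image block as sketched below; the block size, intensity ranges, and layout are hypothetical, and the SVM training step itself (e.g. with an off-the-shelf library) is omitted:

```python
import numpy as np

def block_features(img, block=16):
    """Scaled mean and variance of each block x block tile: the
    two-dimensional feature vectors that would be fed to the SVM."""
    h, w = img.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block].astype(float) / 255.0
            feats.append((tile.mean(), tile.var()))
    return np.array(feats)

# Hypothetical frame strip: dust-free plasma (dark), small particles (dim),
# big particles (bright) -- the three classes the SVM separates.
rng = np.random.default_rng(1)
img = np.zeros((16, 48))
img[:, 16:32] = rng.uniform(60, 100, (16, 16))    # small particles
img[:, 32:] = rng.uniform(180, 255, (16, 16))     # big particles
f = block_features(img)
```

After classification, the boundary between blocks labelled "small" and "big" gives the interface position in each frame.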
(This article belongs to the Special Issue Image Processing in Soft Condensed Matter)
Figures

Figure 1

Open Access Article
Analysis of Image Feature Characteristics for Automated Scoring of HER2 in Histology Slides
J. Imaging 2019, 5(3), 35; https://doi.org/10.3390/jimaging5030035
Received: 16 December 2018 / Revised: 1 March 2019 / Accepted: 6 March 2019 / Published: 10 March 2019
Viewed by 1004 | PDF Full-text (3680 KB) | HTML Full-text | XML Full-text
Abstract
The evaluation of breast cancer grades in immunohistochemistry (IHC) slides takes into account various types of visual markers and morphological features of stained membrane regions. Digital pathology algorithms using whole slide images (WSIs) of histology slides have recently been finding several applications in such computer-assisted evaluations. Features that are directly related to biomarkers used by pathologists are generally preferred over the pixel values of entire images, even though the latter has more information content. This paper explores in detail various types of feature measurements that are suitable for the automated scoring of human epidermal growth factor receptor 2 (HER2) in histology slides. These are intensity features known as characteristic curves, texture features in the form of uniform local binary patterns (ULBPs), morphological features specifying connectivity of regions, and first-order statistical features of the overall intensity distribution. This paper considers important properties of the above features and outlines methods for reducing information redundancy, maximizing inter-class separability, and improving classification accuracy in the combined feature set. This paper also presents a detailed experimental analysis performed using the aforementioned features on a WSI dataset of IHC stained slides. Full article
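The uniform local binary patterns (ULBPs) mentioned above can be illustrated with a minimal sketch of the LBP code and the uniformity test (at most two circular 0/1 transitions); the bit ordering here is an arbitrary choice:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of a 3x3 patch: each neighbour that is at
    least as bright as the centre sets one bit (the ordering is
    arbitrary but must be circular)."""
    c = patch[1, 1]
    ring = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, v in enumerate(ring) if v >= c)

def is_uniform(code, bits=8):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions; ULBP histograms keep one bin per uniform pattern
    and lump all non-uniform codes into a single bin."""
    s = [(code >> i) & 1 for i in range(bits)]
    transitions = sum(s[i] != s[(i + 1) % bits] for i in range(bits))
    return transitions <= 2

flat = lbp_code(np.full((3, 3), 5))   # all neighbours equal the centre
```

Restricting the histogram to uniform patterns is one of the redundancy-reduction steps the paper discusses: it shrinks the texture feature from 256 bins to 59 while keeping most of the discriminative edge and spot patterns.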
(This article belongs to the Special Issue Medical Image Understanding and Analysis 2018)
Figures

Figure 1

Open Access Article
High-Throughput Line Buffer Microarchitecture for Arbitrary Sized Streaming Image Processing
J. Imaging 2019, 5(3), 34; https://doi.org/10.3390/jimaging5030034
Received: 21 January 2019 / Revised: 25 February 2019 / Accepted: 25 February 2019 / Published: 6 March 2019
Cited by 1 | Viewed by 1020 | PDF Full-text (1993 KB) | HTML Full-text | XML Full-text
Abstract
Parallel hardware designed for image processing promotes vision-guided intelligent applications. With the advantages of high throughput and low latency, streaming architectures on FPGA are especially attractive for real-time image processing. Notably, many real-world applications, such as region of interest (ROI) detection, demand the ability to process images continuously at different sizes and resolutions in hardware without interruptions. FPGAs are especially suitable for implementing such flexible streaming architectures, but most existing solutions require run-time reconfiguration and hence cannot achieve seamless image size-switching. In this paper, we propose a dynamically-programmable buffer architecture (D-SWIM) based on the Stream-Windowing Interleaved Memory (SWIM) architecture to realize image processing on FPGA for image streams at arbitrary sizes defined at run time. D-SWIM redefines the way that on-chip memory is organized and controlled, and the hardware adapts to arbitrary image sizes with sub-100 ns delay, ensuring minimal interruption to image processing at high frame rates. Compared to the prior SWIM buffer for high-throughput scenarios, D-SWIM achieves dynamic programmability with only a slight overhead in logic resource usage, while saving up to 56% of the BRAM resource. The D-SWIM buffer achieves a maximum operating frequency of 329.5 MHz and reduces power consumption by 45.7% compared with the SWIM scheme. Real-world image processing applications, such as 2D convolution and the Harris Corner Detector, have also been used to evaluate D-SWIM’s performance, achieving pixel throughputs of 4.5 Giga Pixel/s and 4.2 Giga Pixel/s, respectively. Compared to implementations with prior streaming frameworks, the D-SWIM-based design not only realizes seamless image size-switching but also improves hardware efficiency by up to 30×. Full article
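The role of a line buffer in a streaming pipeline can be modelled in software; this sketch only mirrors the run-time-programmable row width idea, not D-SWIM's actual memory organization or timing:

```python
from collections import deque

def stream_windows(pixels, width):
    """Software model of a line buffer: pixels arrive one per clock in
    raster order, the buffer retains the last three rows, and a 3x3
    window is emitted for every position once enough rows are present.
    The row width is a run-time parameter rather than a compile-time
    constant, mirroring the arbitrary-size requirement."""
    rows = deque(maxlen=3)
    current = []
    for p in pixels:
        current.append(p)
        if len(current) == width:   # a full row has streamed in
            rows.append(current)
            current = []
            if len(rows) == 3:      # enough rows buffered for 3x3 windows
                for x in range(width - 2):
                    yield [r[x:x + 3] for r in rows]

windows = list(stream_windows(range(16), width=4))  # a 4x4 pixel stream
```

Downstream stencil operators such as 2D convolution or the Harris detector consume these windows one per cycle; changing `width` between frames is the behaviour that D-SWIM supports without reconfiguration.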
(This article belongs to the Special Issue Image Processing Using FPGAs) Printed Edition available
Figures

Figure 1

Open Access Article
Scalable Database Indexing and Fast Image Retrieval Based on Deep Learning and Hierarchically Nested Structure Applied to Remote Sensing and Plant Biology
J. Imaging 2019, 5(3), 33; https://doi.org/10.3390/jimaging5030033
Received: 6 November 2018 / Revised: 18 January 2019 / Accepted: 18 February 2019 / Published: 1 March 2019
Viewed by 1059 | PDF Full-text (6274 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Digitalisation has opened a wealth of new data opportunities by revolutionizing how images are captured. Although the cost of data generation is no longer a major concern, the data management and processing have become a bottleneck. Any successful visual trait system requires automated data structuring and a data retrieval model to manage, search, and retrieve unstructured and complex image data. This paper investigates a highly scalable and computationally efficient image retrieval system for real-time content-based searching through large-scale image repositories in the domain of remote sensing and plant biology. Images are processed independently without considering any relevant context between sub-sets of images. We utilize a deep Convolutional Neural Network (CNN) model as a feature extractor to derive deep feature representations from the imaging data. In addition, we propose an effective scheme to optimize data structure that can facilitate faster querying at search time based on the hierarchically nested structure and recursive similarity measurements. A thorough series of tests were carried out for plant identification and high-resolution remote sensing data to evaluate the accuracy and the computational efficiency of the proposed approach against other content-based image retrieval (CBIR) techniques, such as the bag of visual words (BOVW) and multiple feature fusion techniques. The results demonstrate that the proposed scheme is effective and considerably faster than conventional indexing structures. Full article
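The hierarchically nested search idea can be sketched as a two-level index: a coarse quantiser over the deep feature vectors plus per-centroid buckets. The k-means details, parameters, and 2-D "features" below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def build_index(feats, k=4, iters=20, seed=0):
    """Coarse quantiser over feature vectors: plain k-means centroids
    plus, for each centroid, the ids of the images assigned to it."""
    rng = np.random.default_rng(seed)
    cent = feats[rng.choice(len(feats), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmin(((feats[:, None, :] - cent[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                cent[j] = feats[assign == j].mean(axis=0)
    buckets = {j: np.flatnonzero(assign == j) for j in range(k)}
    return cent, buckets

def query(q, feats, cent, buckets):
    """Two-level search: find the nearest centroid, then scan only that
    bucket -- far fewer distance computations than a flat scan."""
    j = int(np.argmin(((cent - q) ** 2).sum(-1)))
    ids = buckets[j]
    if len(ids) == 0:                  # empty bucket: fall back to a flat scan
        ids = np.arange(len(feats))
    return int(ids[np.argmin(((feats[ids] - q) ** 2).sum(-1))])

# Hypothetical 2-D "deep features" forming two well-separated groups.
rng = np.random.default_rng(3)
feats = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(10.0, 0.5, (20, 2))])
cent, buckets = build_index(feats)
hit = query(np.array([0.0, 0.0]), feats, cent, buckets)
```

Nesting more such levels recursively gives the coarse-to-fine search structure that makes querying sublinear in the size of the repository.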
(This article belongs to the Special Issue AI Approaches to Biological Image Analysis)
Figures

Figure 1

Open Access Article
Multiple-Exposure Image Fusion for HDR Image Synthesis Using Learned Analysis Transformations
J. Imaging 2019, 5(3), 32; https://doi.org/10.3390/jimaging5030032
Received: 13 December 2018 / Revised: 20 January 2019 / Accepted: 20 February 2019 / Published: 26 February 2019
Viewed by 1091 | PDF Full-text (24652 KB) | HTML Full-text | XML Full-text
Abstract
Modern imaging applications have increased the demand for High Dynamic Range (HDR) imaging. Nonetheless, HDR imaging is not easily achievable with low-cost imaging sensors, since their dynamic range is rather limited. A viable route to HDR imaging with low-cost sensors is the synthesis of multiple-exposure images. A low-cost sensor can capture the observed scene at multiple exposure settings, and an image-fusion algorithm can combine these images to form an image of increased dynamic range. In this work, two image-fusion methods are combined to tackle multiple-exposure fusion. The luminance channel is fused using the Mitianoudis and Stathaki (2008) method, while the color channels are combined using the method proposed by Mertens et al. (2007). The proposed fusion algorithm performs well without the halo artifacts that exist in other state-of-the-art methods. This paper is an extended version of a conference paper, with more analysis of the derived method and more experimental results that confirm the validity of the method. Full article
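The exposure-fusion weighting at the heart of the Mertens et al. approach can be sketched for single-channel images; only the well-exposedness term is shown here, whereas the published pipeline also uses contrast and saturation terms and multi-scale blending:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Weight every pixel of every exposure by its 'well-exposedness'
    (a Gaussian around mid-grey), normalise the weights across the
    exposures, and blend. Inputs are single-channel images in [0, 1]."""
    stack = np.asarray(stack, dtype=float)
    w = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)   # convex weights per pixel
    return (w * stack).sum(axis=0)

# Hypothetical pair: an under- and an over-exposed capture of a ramp.
scene = np.linspace(0.0, 1.0, 5)
under = 0.4 * scene
over = np.clip(0.4 * scene + 0.5, 0.0, 1.0)
fused = fuse_exposures([under, over])
```

Because the weights are normalised per pixel, each output value is a convex combination of the input exposures, favouring whichever capture is best exposed at that location.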
(This article belongs to the Special Issue Modern Advances in Image Fusion)
Figures

Figure 1

J. Imaging EISSN 2313-433X. Published by MDPI AG, Basel, Switzerland.