Table of Contents

J. Imaging, Volume 4, Issue 10 (October 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-12
Open Access Article: Signed Real-Time Delay Multiply and Sum Beamforming for Multispectral Photoacoustic Imaging
J. Imaging 2018, 4(10), 121; https://doi.org/10.3390/jimaging4100121
Received: 11 September 2018 / Revised: 9 October 2018 / Accepted: 11 October 2018 / Published: 17 October 2018
PDF Full-text (1855 KB) | HTML Full-text | XML Full-text
Abstract
Reconstruction of photoacoustic (PA) images acquired with clinical ultrasound transducers is usually performed using the Delay and Sum (DAS) beamforming algorithm. Recently, a variant of DAS, referred to as Delay Multiply and Sum (DMAS) beamforming has been shown to provide increased contrast, signal-to-noise ratio (SNR) and resolution in PA imaging. The main reasons for the use of DAS beamforming in photoacoustics are its simple implementation, real-time capability, and the linearity of the beamformed image to the PA signal. This is crucial for the identification of different chromophores in multispectral PA applications. In contrast, current DMAS implementations are not responsive to the full spectrum of sound frequencies from a photoacoustic source and have not been shown to provide a reconstruction linear to the PA signal. Furthermore, due to its increased computational complexity, DMAS has not been shown yet to work in real-time. Here, we present an open-source real-time variant of the DMAS algorithm, signed DMAS (sDMAS), that ensures linearity in the original PA signal response while providing the increased image quality of DMAS. We show the applicability of sDMAS for multispectral PA applications, in vitro and in vivo. The sDMAS and reference DAS algorithms were integrated in the open-source Medical Imaging Interaction Toolkit (MITK) and are available as real-time capable implementations. Full article
(This article belongs to the Special Issue Biomedical Photoacoustic Imaging: Technologies and Methods)
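The difference between the two beamformers can be sketched in a few lines. This is a minimal illustration on already delayed (time-aligned) channel data, not the authors' MITK implementation; the sign-preserving pairwise combination below is one common reading of signed DMAS:

```python
import numpy as np

def das(delayed):
    # Delay and Sum: delays are assumed already applied, so beamforming
    # reduces to summing the time-aligned channel signals.
    return delayed.sum(axis=0)

def sdmas(delayed):
    # Signed DMAS sketch: combine every channel pair multiplicatively,
    # keeping the sign of each product so the output remains faithful to
    # the polarity of the PA signal.
    n_channels, n_samples = delayed.shape
    out = np.zeros(n_samples)
    for i in range(n_channels):
        for j in range(i + 1, n_channels):
            prod = delayed[i] * delayed[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out
```

The quadratic pairwise loop is what makes DMAS costlier than DAS and motivates the real-time engineering discussed in the paper.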

Open Access Article: In the Eye of the Deceiver: Analyzing Eye Movements as a Cue to Deception
J. Imaging 2018, 4(10), 120; https://doi.org/10.3390/jimaging4100120
Received: 18 September 2018 / Revised: 4 October 2018 / Accepted: 12 October 2018 / Published: 16 October 2018
PDF Full-text (1416 KB) | HTML Full-text | XML Full-text
Abstract
Deceit occurs in daily life and, even from an early age, children can successfully deceive their parents. Therefore, numerous books and psychological studies have been published to help people decipher the facial cues to deceit. In this study, we tackle the problem of deceit detection by analyzing eye movements: blinks, saccades and gaze direction. Recent psychological studies have shown that the non-visual saccadic eye movement rate is higher when people lie. We propose a fast and accurate framework for eye tracking and eye movement recognition and analysis. The proposed system tracks the position of the iris, as well as the eye corners (the outer shape of the eye). Next, in an offline analysis stage, the trajectory of these eye features is analyzed in order to recognize and measure various cues which can be used as indicators of deception: the blink rate, the gaze direction and the saccadic eye movement rate. On the task of iris center localization, the method achieves within-pupil localization in 91.47% of the cases. For blink localization, we obtained an accuracy of 99.3% on the difficult EyeBlink8 dataset. In addition, we propose a novel metric, the normalized blink rate deviation, to detect deceitful behavior based on blink rate. Using this metric and a simple decision stump, the deceitful answers from the Silesian Face database were recognized with an accuracy of 96.15%. Full article
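The abstract does not give the formula for the normalized blink rate deviation, so the definition below is purely hypothetical (a relative deviation from a subject's baseline); the decision stump, however, is the standard one-threshold classifier named in the text:

```python
def normalized_blink_rate_deviation(blink_rate, baseline_rate):
    # Hypothetical formulation: deviation of the observed blink rate from
    # the subject's baseline, normalized by that baseline. The paper's exact
    # definition may differ.
    return (blink_rate - baseline_rate) / baseline_rate

def decision_stump(deviation, threshold=0.25):
    # A decision stump is a one-level decision tree: a single threshold
    # test on one feature. The threshold value here is illustrative.
    return "deceitful" if deviation > threshold else "truthful"
```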

Open Access Article: Objective Classes for Micro-Facial Expression Recognition
J. Imaging 2018, 4(10), 119; https://doi.org/10.3390/jimaging4100119
Received: 1 September 2018 / Revised: 8 October 2018 / Accepted: 9 October 2018 / Published: 15 October 2018
PDF Full-text (1192 KB) | HTML Full-text | XML Full-text
Abstract
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different to normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We will show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the result of the state-of-the-art 5-class emotional-based classification in CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition. Full article

Open Access Article: Fusing Multiple Multiband Images
J. Imaging 2018, 4(10), 118; https://doi.org/10.3390/jimaging4100118
Received: 21 August 2018 / Revised: 5 October 2018 / Accepted: 8 October 2018 / Published: 12 October 2018
PDF Full-text (4646 KB) | HTML Full-text | XML Full-text
Abstract
High-resolution hyperspectral images are in great demand but hard to acquire due to several existing fundamental and technical limitations. A practical way around this is to fuse multiple multiband images of the same scene with complementary spatial and spectral resolutions. We propose an algorithm for fusing an arbitrary number of coregistered multiband, i.e., panchromatic, multispectral, or hyperspectral, images through estimating the endmembers and their abundances in the fused image. To this end, we use the forward observation and linear mixture models and formulate an appropriate maximum-likelihood estimation problem. Then, we regularize the problem via a vector total-variation penalty and the non-negativity/sum-to-one constraints on the endmember abundances and solve it using the alternating direction method of multipliers. The regularization facilitates exploiting the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, as well as coping with potential ill-posedness of the fusion problem. Experiments with multiband images constructed from real-world hyperspectral images reveal the superior performance of the proposed algorithm in comparison with the state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images. Full article
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)
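The linear mixture model at the heart of the fusion can be illustrated with a toy unmixing step. The endmember spectra and abundances below are invented, and plain non-negative least squares stands in for the paper's regularized maximum-likelihood/ADMM solver:

```python
import numpy as np
from scipy.optimize import nnls

# Linear mixture model: a pixel spectrum y is a non-negative combination
# of endmember spectra (the columns of E). Values are illustrative only.
E = np.array([[1.0, 0.1],
              [0.2, 1.0],
              [0.5, 0.5]])      # (n_bands, n_endmembers)
a_true = np.array([0.3, 0.7])   # abundances, non-negative and summing to one
y = E @ a_true                  # noise-free forward observation

# Recover abundances with plain non-negative least squares; the paper
# instead adds a vector total-variation penalty and solves via ADMM.
a_est, _ = nnls(E, y)
```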

Open Access Article: Multivariate Statistical Approach to Image Quality Tasks
J. Imaging 2018, 4(10), 117; https://doi.org/10.3390/jimaging4100117
Received: 15 September 2018 / Revised: 6 October 2018 / Accepted: 10 October 2018 / Published: 12 October 2018
PDF Full-text (2448 KB) | HTML Full-text | XML Full-text
Abstract
Many existing natural scene statistics-based no reference image quality assessment (NR IQA) algorithms employ univariate parametric distributions to capture the statistical inconsistencies of bandpass distorted image coefficients. Here, we propose a multivariate model of natural image coefficients expressed in the bandpass spatial domain that has the potential to capture higher order correlations that may be induced by the presence of distortions. We analyze how the parameters of the multivariate model are affected by different distortion types, and we show their ability to capture distortion-sensitive image quality information. We also demonstrate the violation of Gaussianity assumptions that occur when locally estimating the energies of distorted image coefficients. Thus, we propose a generalized Gaussian-based local contrast estimator as a way to implement non-linear local gain control, which facilitates the accurate modeling of both pristine and distorted images. We integrate the novel approach of generalized contrast normalization with multivariate modeling of bandpass image coefficients into a holistic NR IQA model, which we refer to as multivariate generalized contrast normalization (MVGCN). We demonstrate the improved performance of MVGCN on quality-relevant tasks on multiple imaging modalities, including visible light image quality prediction and task success prediction on distorted X-ray images. Full article
(This article belongs to the Special Issue Image Quality)
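The divisive normalization (local gain control) that such bandpass models build on can be sketched as follows. This computes a standard mean-subtracted, contrast-normalized field over a Gaussian window, not the paper's generalized Gaussian-based estimator; the window width and stabilizing constant are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(img, sigma=7/6, c=1.0):
    # Local mean and standard deviation estimated over a Gaussian window,
    # followed by divisive normalization (non-linear local gain control).
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img ** 2, sigma) - mu ** 2
    std = np.sqrt(np.maximum(var, 0.0))  # clamp tiny negative round-off
    return (img - mu) / (std + c)
```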

Open Access Article: ECRU: An Encoder-Decoder Based Convolution Neural Network (CNN) for Road-Scene Understanding
J. Imaging 2018, 4(10), 116; https://doi.org/10.3390/jimaging4100116
Received: 24 June 2018 / Revised: 21 September 2018 / Accepted: 29 September 2018 / Published: 8 October 2018
PDF Full-text (2763 KB) | HTML Full-text | XML Full-text
Abstract
This research presents the idea of a novel fully-Convolutional Neural Network (CNN)-based model for probabilistic pixel-wise segmentation, titled Encoder-decoder-based CNN for Road-Scene Understanding (ECRU). Lately, scene understanding has become an evolving research area, and semantic segmentation is the most recent method for visual recognition. Among vision-based smart systems, the driving assistance system turns out to be a much preferred research topic. The proposed model is an encoder-decoder that performs pixel-wise class predictions. The encoder network is composed of a VGG-19 layer model, while the decoder network uses 16 upsampling and deconvolution units. The encoder of the network has a very flexible architecture that can be altered and trained for any size and resolution of images. The decoder network upsamples and maps the low-resolution encoder’s features. Consequently, there is a substantial reduction in the trainable parameters, as the network recycles the encoder’s pooling indices for pixel-wise classification and segmentation. The proposed model is intended to offer a simplified CNN model with less overhead and higher performance. The network is trained and tested on the famous road scenes dataset CamVid and offers outstanding outcomes in comparison to similar early approaches like FCN and VGG16 in terms of performance vs. trainable parameters. Full article

Open Access Article: A Non-Structural Representation Scheme for Articulated Shapes
J. Imaging 2018, 4(10), 115; https://doi.org/10.3390/jimaging4100115
Received: 4 September 2018 / Revised: 27 September 2018 / Accepted: 2 October 2018 / Published: 8 October 2018
PDF Full-text (2700 KB) | HTML Full-text | XML Full-text
Abstract
Articulated shapes are successfully represented by structural representations which are organized in the form of graphs of shape components. We present an alternative representation scheme which is equally powerful but does not require explicit modeling or discovery of structural relations. The key element in our scheme is a novel multi-scale pixel-based distinctness measure which implicitly quantifies how rare a particular pixel is in terms of its geometry with respect to all pixels of the shape. The spatial distribution of the distinctness yields a partitioning of the shape into a set of regions. The proposed representation is a collection of size-normalized probability distributions of the distinctness over regions at shape-dependent scales. We test the proposed representation on a clustering task. Full article

Open Access Article: On the Application of LBP Texture Descriptors and Their Variants for No-Reference Image Quality Assessment
J. Imaging 2018, 4(10), 114; https://doi.org/10.3390/jimaging4100114
Received: 16 July 2018 / Revised: 23 September 2018 / Accepted: 26 September 2018 / Published: 4 October 2018
PDF Full-text (21219 KB) | HTML Full-text | XML Full-text
Abstract
Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to automatically estimate quality. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity. Full article
(This article belongs to the Special Issue Image Quality)
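As a concrete reference point, the basic 3×3 LBP code and its normalized histogram can be computed as below. This is the textbook operator, not any of the specific variants surveyed in the paper:

```python
import numpy as np

def lbp_8(img):
    # Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    # against the centre and pack the comparisons into an 8-bit code.
    h, w = img.shape
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    # Normalized 256-bin histogram of LBP codes, usable as a texture feature.
    hist, _ = np.histogram(lbp_8(img), bins=256, range=(0, 256))
    return hist / hist.sum()
```

It is statistics of such histograms that the premise above says are perturbed by visible impairments.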

Open Access Editorial: Phase-Contrast and Dark-Field Imaging
J. Imaging 2018, 4(10), 113; https://doi.org/10.3390/jimaging4100113
Received: 25 September 2018 / Accepted: 25 September 2018 / Published: 2 October 2018
PDF Full-text (177 KB) | HTML Full-text | XML Full-text
Abstract
Very early, in 1896, Wilhelm Conrad Röntgen, the founding father of X-rays, attempted to measure diffraction and refraction by this new kind of radiation, in vain. Only 70 years later, these effects were measured by Ulrich Bonse and Michael Hart, who used them to make full-field images of biological specimens, coining the term phase-contrast imaging. Yet another 30 years passed until the Talbot effect was rediscovered for X-radiation, giving rise to a micrograting-based interferometer that replaced the Bonse–Hart interferometer, which relied on a set of four Laue crystals for beam splitting and interference. By merging the Lau interferometer with this Talbot interferometer, another ten years later, measuring X-ray refraction and X-ray scattering full-field and in cm-sized objects (as Röntgen had attempted 110 years earlier) became feasible in every X-ray laboratory around the world. Today, now that another twelve years have passed and we are approaching the 125th jubilee of Röntgen’s discovery, neither Laue crystals nor microgratings are a necessity for sensing refraction and scattering by X-rays. Cardboard, steel wool, and sandpaper are sufficient for extracting these contrasts from transmission images, using the latest image reconstruction algorithms. This advancement and the ever-rising number of applications for phase-contrast and dark-field imaging prove to what degree our understanding of imaging physics as well as signal processing has advanced since the advent of X-ray physics, in particular during the past two decades. The discovery of the electron, as well as the development of electron imaging technology, has accompanied X-ray physics closely along its path, with both modalities exploring the applications of new dark-field contrast mechanisms these days. Materials science, life science, archeology, non-destructive testing, and medicine are the key fields which have already integrated these new imaging devices, using their contrast mechanisms in full.
This special issue “Phase-Contrast and Dark-field Imaging” gives us a broad yet very to-the-point glimpse of research and development which are currently taking place in this very active field. We find reviews, applications reports, and methodological papers of very high quality from various groups, most of which operate X-ray scanners which comprise these new imaging modalities. Full article
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
Open Access Article: Unsupervised Local Binary Pattern Histogram Selection Scores for Color Texture Classification
J. Imaging 2018, 4(10), 112; https://doi.org/10.3390/jimaging4100112
Received: 11 July 2018 / Revised: 7 September 2018 / Accepted: 25 September 2018 / Published: 28 September 2018
PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, several supervised scores have been proposed in the literature for selecting histograms. Applied to color texture classification problems, these scores have improved accuracy by selecting the most discriminant histograms among a set of available ones computed from a color image. In this paper, two new scores are proposed to select histograms: the adapted Variance score and the adapted Laplacian score. These new scores are computed without considering the class labels of the images, in contrast to previous approaches. Experiments on the OuTex, USPTex, and BarkTex sets show that these unsupervised scores give results as good as the supervised ones for LBP histogram selection. Full article
(This article belongs to the Special Issue Computational Colour Imaging)
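The flavor of such label-free selection can be shown with the classic Variance score, which ranks features by their variance across samples without using any class labels. The paper's "adapted" scores extend this idea to whole histograms; the sketch below is the plain per-feature version:

```python
import numpy as np

def variance_score(X):
    # X: (n_samples, n_features). A feature with higher variance across
    # samples is assumed to be more informative -- no class labels needed.
    return X.var(axis=0)

def select_top_k(X, k):
    # Indices of the k highest-variance features, in ascending index order.
    scores = variance_score(X)
    return np.sort(np.argsort(scores)[::-1][:k])
```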

Open Access Article: GPU Acceleration of the Most Apparent Distortion Image Quality Assessment Algorithm
J. Imaging 2018, 4(10), 111; https://doi.org/10.3390/jimaging4100111
Received: 1 August 2018 / Revised: 10 September 2018 / Accepted: 19 September 2018 / Published: 25 September 2018
PDF Full-text (527 KB) | HTML Full-text | XML Full-text
Abstract
The primary function of multimedia systems is to seamlessly transform and display content to users while maintaining the perception of acceptable quality. For images and videos, perceptual quality assessment algorithms play an important role in determining what is acceptable quality and what is unacceptable from a human visual perspective. As modern image quality assessment (IQA) algorithms gain widespread adoption, it is important to achieve a balance between their computational efficiency and their quality prediction accuracy. One way to improve computational performance to meet real-time constraints is to use simplistic models of visual perception, but such an approach has a serious drawback in terms of poor-quality predictions and limited robustness to changing distortions and viewing conditions. In this paper, we investigate the advantages and potential bottlenecks of implementing a best-in-class IQA algorithm, Most Apparent Distortion, on graphics processing units (GPUs). Our results suggest that an understanding of the GPU and CPU architectures, combined with detailed knowledge of the IQA algorithm, can lead to non-trivial speedups without compromising prediction accuracy. A single-GPU and a multi-GPU implementation showed a 24× and a 33× speedup, respectively, over the baseline CPU implementation. A bottleneck analysis revealed the kernels with the highest runtimes, and a microarchitectural analysis illustrated the underlying reasons for the high runtimes of these kernels. Programs written with optimizations such as blocking that map well to CPU memory hierarchies do not map well to the GPU’s memory hierarchy. While compute unified device architecture (CUDA) is convenient to use and is powerful in facilitating general purpose GPU (GPGPU) programming, knowledge of how a program interacts with the underlying hardware is essential for understanding performance bottlenecks and resolving them. Full article
(This article belongs to the Special Issue Image Quality)

Open Access Article: Hyperspectral Imaging Using Laser Excitation for Fast Raman and Fluorescence Hyperspectral Imaging for Sorting and Quality Control Applications
J. Imaging 2018, 4(10), 110; https://doi.org/10.3390/jimaging4100110
Received: 24 August 2018 / Revised: 14 September 2018 / Accepted: 19 September 2018 / Published: 21 September 2018
PDF Full-text (3041 KB) | HTML Full-text | XML Full-text
Abstract
A hyperspectral measurement system for the fast and large area measurement of Raman and fluorescence signals was developed, characterized and tested. This laser hyperspectral imaging system (Laser-HSI) can be used for sorting tasks and for continuous quality monitoring. The system uses a 532 nm Nd:YAG laser and a standard pushbroom HSI camera. Depending on the lens selected, it is possible to cover large areas (e.g., field of view (FOV) = 386 mm) or to achieve high spatial resolutions (e.g., 0.02 mm). The developed Laser-HSI was used for four exemplary experiments: (a) the measurement and classification of a mixture of sulphur and naphthalene; (b) the measurement of carotenoid distribution in a carrot slice; (c) the classification of black polymer particles; and, (d) the localization of impurities on a lead zirconate titanate (PZT) piezoelectric actuator. It could be shown that the measurement data obtained were in good agreement with reference measurements taken with a high-resolution Raman microscope. Furthermore, the suitability of the measurements for classification using machine learning algorithms was also demonstrated. The developed Laser-HSI could be used in the future for complex quality control or sorting tasks where conventional HSI systems fail. Full article
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)
