
Table of Contents

J. Imaging, Volume 4, Issue 2 (February 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Cover Story: This article provides an overview of neutron imaging at the Los Alamos Neutron Science Center [...]
Displaying articles 1-20

Research


Open Access Article: Studies of Ancient Russian Cultural Objects Using the Neutron Tomography Method
J. Imaging 2018, 4(2), 25; https://doi.org/10.3390/jimaging4020025
Received: 27 October 2017 / Revised: 18 January 2018 / Accepted: 19 January 2018 / Published: 23 January 2018
PDF Full-text (3729 KB) | HTML Full-text | XML Full-text
Abstract
Neutron radiography and tomography are non-destructive methods that provide detailed information about the internal structure of cultural heritage objects. The differences in the neutron attenuation coefficients of the constituent elements of the studied objects, as well as the application of modern mathematical algorithms to carry out three-dimensional imaging data analysis, allow one to obtain unique information about the spatial distribution of different phases, the presence of internal defects, or the degree of structural degradation inside valuable cultural objects. The results of neutron studies of several archaeological objects related to different epochs of Russian history are reported in order to demonstrate the opportunities provided by the neutron tomography method. The obtained 3D structural volume data, as well as the results of the corresponding data analysis, are presented. Full article
(This article belongs to the Special Issue Neutron Imaging)

Open Access Article: A New Binarization Algorithm for Historical Documents
J. Imaging 2018, 4(2), 27; https://doi.org/10.3390/jimaging4020027
Received: 31 October 2017 / Revised: 16 January 2018 / Accepted: 16 January 2018 / Published: 23 January 2018
PDF Full-text (2874 KB) | HTML Full-text | XML Full-text
Abstract
Monochromatic documents demand far less network bandwidth and storage space than their color or even grayscale equivalents. The binarization of historical documents is far more complex than that of recent ones, as paper aging, color, texture, translucency, stains, back-to-front interference, the kind and color of ink used in handwriting, the printing process, the digitization process, etc. are among the factors that affect binarization. This article presents a new binarization algorithm for historical documents. The proposed global filter is performed in four steps: filtering the image using a bilateral filter, splitting the image into its RGB components, decision-making for each RGB channel based on an adaptive binarization method inspired by Otsu’s method with a choice of the threshold level, and classification of the binarized images to decide which of the RGB components best preserved the document information in the foreground. The quantitative and qualitative assessment made with 23 binarization algorithms on three sets of “real world” documents showed very good results. Full article
(This article belongs to the Special Issue Document Image Processing)
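
The four-step global filter described above can be illustrated with a short sketch. The snippet below is not the authors' implementation; it uses OpenCV's standard bilateral filter and Otsu threshold, and the channel-selection rule (largest Otsu between-class variance) is a stand-in for the classification step in the paper.

```python
# A minimal sketch of the four-step pipeline described above (not the authors' code).
# Assumes OpenCV and NumPy; the channel-selection rule here (largest Otsu
# between-class variance) is a placeholder, not the classifier used in the paper.
import cv2
import numpy as np

def otsu_binarize(channel):
    """Binarize one 8-bit channel with Otsu's method; return the image and a quality proxy."""
    thresh, binary = cv2.threshold(channel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Between-class variance at the chosen threshold, used as a crude quality proxy.
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    t = int(thresh)
    w0, w1 = p[: t + 1].sum(), p[t + 1 :].sum()
    if w0 == 0 or w1 == 0:
        return binary, 0.0
    mu0 = (np.arange(t + 1) * p[: t + 1]).sum() / w0
    mu1 = (np.arange(t + 1, 256) * p[t + 1 :]).sum() / w1
    return binary, w0 * w1 * (mu0 - mu1) ** 2

def binarize_document(bgr):
    # Step 1: edge-preserving smoothing with a bilateral filter.
    smoothed = cv2.bilateralFilter(bgr, 9, 50, 50)
    # Step 2: split into colour components.
    channels = cv2.split(smoothed)
    # Step 3: threshold each channel independently (Otsu-inspired decision).
    results = [otsu_binarize(c) for c in channels]
    # Step 4: keep the channel judged to preserve the foreground best (placeholder rule).
    best_binary, _ = max(results, key=lambda r: r[1])
    return best_binary

if __name__ == "__main__":
    page = cv2.imread("historical_page.png")  # hypothetical input file
    cv2.imwrite("binarized.png", binarize_document(page))
```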

Open Access Article: Exploiting Multiple Detections for Person Re-Identification
J. Imaging 2018, 4(2), 28; https://doi.org/10.3390/jimaging4020028
Received: 18 November 2017 / Revised: 11 January 2018 / Accepted: 11 January 2018 / Published: 23 January 2018
PDF Full-text (1859 KB) | HTML Full-text | XML Full-text
Abstract
Re-identification systems aim at recognizing the same individuals across multiple cameras, and one of the most relevant problems is that the appearance of the same individual varies across cameras due to illumination and viewpoint changes. This paper proposes the use of cumulative weighted brightness transfer functions (CWBTFs) to model these appearance variations. Unlike recently proposed methods, which only consider pairs of images to learn a brightness transfer function, we exploit a multiple-frame-based learning approach that leverages consecutive detections of each individual to transfer the appearance. We first present a CWBTF framework for the task of transforming appearance from one camera to another. We then present a re-identification framework where we segment the pedestrian images into meaningful parts and extract features from such parts, as well as from the whole body. Jointly, both of these frameworks contribute to modelling the appearance variations more robustly. We tested our approach on standard multi-camera surveillance datasets, showing consistent and significant improvements over existing methods on three different datasets without any additional cost. Our approach is general and can be applied to any appearance-based method. Full article
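
For context, a plain brightness transfer function between two cameras is conventionally obtained by matching cumulative brightness histograms of corresponding observations. The sketch below shows only that standard construction; the cumulative weighted variant (CWBTF) proposed in the article, and its use of consecutive detections, is not reproduced.

```python
# A minimal sketch of a plain brightness transfer function (BTF) between two cameras,
# built by inverse cumulative-histogram matching. This is background for the abstract
# above, not the cumulative *weighted* BTF (CWBTF) proposed in the article.
import numpy as np

def brightness_transfer_function(values_cam_a, values_cam_b, bins=256):
    """Return a lookup table mapping 8-bit brightness in camera A to camera B."""
    hist_a, _ = np.histogram(values_cam_a, bins=bins, range=(0, 256))
    hist_b, _ = np.histogram(values_cam_b, bins=bins, range=(0, 256))
    cdf_a = np.cumsum(hist_a) / max(hist_a.sum(), 1)
    cdf_b = np.cumsum(hist_b) / max(hist_b.sum(), 1)
    # For every brightness level in A, find the level in B with the same cumulative mass.
    lut = np.searchsorted(cdf_b, cdf_a, side="left").clip(0, bins - 1)
    return lut.astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical pixel samples of the same person seen by two cameras.
    cam_a = rng.normal(100, 20, 10_000).clip(0, 255)
    cam_b = rng.normal(140, 25, 10_000).clip(0, 255)
    lut = brightness_transfer_function(cam_a, cam_b)
    transferred = lut[cam_a.astype(np.uint8)]   # appearance of A mapped into B's brightness
    print(transferred.mean())                   # close to camera B's mean (~140)
```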

Open Access Article: Estimating Full Regional Skeletal Muscle Fibre Orientation from B-Mode Ultrasound Images Using Convolutional, Residual, and Deconvolutional Neural Networks
J. Imaging 2018, 4(2), 29; https://doi.org/10.3390/jimaging4020029
Received: 8 November 2017 / Revised: 17 January 2018 / Accepted: 22 January 2018 / Published: 29 January 2018
PDF Full-text (6231 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper presents an investigation into the feasibility of using deep learning methods for developing arbitrary full spatial resolution regression analysis of B-mode ultrasound images of human skeletal muscle. In this study, we focus on full spatial analysis of muscle fibre orientation, since there is an existing body of work with which to compare results. Previous attempts to automatically estimate fibre orientation from ultrasound are inadequate: they often require manual region selection and feature engineering, provide low-resolution estimates (one angle per muscle), and often do not attempt deep muscles. We build upon our previous work in which automatic segmentation was used with plain convolutional neural network (CNN) and deep residual convolutional network (ResNet) architectures, to predict a low-resolution map of fibre orientation in extracted muscle regions. Here, we use deconvolutions and max-unpooling (DCNN) to regularise and improve the predicted fibre orientation maps for the entire image, including deep muscles, removing the need for automatic segmentation, and we compare our results with the CNN and ResNet, as well as with a previously established feature engineering method, on the same task. Dynamic ultrasound image sequences of the calf muscles were acquired (25 Hz) from 8 healthy volunteers (4 male, ages: 25–36, median 30). A combination of expert annotation and interpolation/extrapolation provided labels of regional fibre orientation for each image. Neural networks (CNN, ResNet, DCNN) were then trained both with and without dropout using leave-one-out cross-validation. Our results demonstrated robust estimation of full spatial fibre orientation within approximately 6° error, which was an improvement on previous methods. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
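
As an illustration of the conv/max-unpool/deconv idea mentioned in the abstract, the toy PyTorch model below reuses pooling indices during unpooling to regress a dense, full-resolution orientation map. The layer sizes and depths are arbitrary assumptions and do not reproduce the paper's DCNN.

```python
# A toy encoder-decoder in PyTorch illustrating the conv + max-unpool + deconv idea
# mentioned above. Layer widths/depths are arbitrary assumptions, not the paper's DCNN.
import torch
import torch.nn as nn

class ToyOrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # keep indices
        self.mid = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.unpool = nn.MaxUnpool2d(2, stride=2)                    # reuse indices
        self.dec = nn.ConvTranspose2d(16, 16, 3, padding=1)
        self.head = nn.Conv2d(16, 1, 1)                              # one angle per pixel

    def forward(self, x):
        f = self.enc(x)
        p, idx = self.pool(f)
        m = self.mid(p)
        u = self.unpool(m, idx)          # back to full resolution via stored indices
        return self.head(torch.relu(self.dec(u)))

if __name__ == "__main__":
    net = ToyOrientationNet()
    ultrasound = torch.randn(1, 1, 64, 64)          # hypothetical B-mode patch
    angles = net(ultrasound)                        # dense orientation map
    loss = torch.nn.functional.mse_loss(angles, torch.zeros_like(angles))
    loss.backward()
    print(angles.shape)                             # torch.Size([1, 1, 64, 64])
```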

Open Access Article: Illusion and Illusoriness of Color and Coloration
J. Imaging 2018, 4(2), 30; https://doi.org/10.3390/jimaging4020030
Received: 24 November 2017 / Revised: 27 December 2017 / Accepted: 22 January 2018 / Published: 30 January 2018
PDF Full-text (7293 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In this work, through a phenomenological analysis, we studied the perception of the chromatic illusion and illusoriness. The necessary condition for an illusion to occur is the discovery of a mismatch/disagreement between the geometrical/physical domain and the phenomenal one. The illusoriness is instead a phenomenal attribute related to a sense of strangeness, deception, singularity, mendacity, and oddity. The main purpose of this work is to study the phenomenology of chromatic illusion vs. illusoriness, which is useful for shedding new light on the no-man’s land between “sensory” and “cognitive” processes that have not been fully explored. Some basic psychological and biological implications for living organisms are deduced. Full article
(This article belongs to the Special Issue Color Image Processing) Printed Edition available

Open Access Article: An Investigation of Smooth TV-Like Regularization in the Context of the Optical Flow Problem
J. Imaging 2018, 4(2), 31; https://doi.org/10.3390/jimaging4020031
Received: 28 November 2017 / Revised: 24 January 2018 / Accepted: 26 January 2018 / Published: 31 January 2018
PDF Full-text (4973 KB) | HTML Full-text | XML Full-text
Abstract
Total variation (TV) is widely used in many image processing problems, including the regularization of optical flow estimation. In order to deal with the non-differentiability of the TV regularization term, smooth approximations have been considered in the literature. In this paper, we investigate the use of three known smooth TV approximations, namely the Charbonnier, Huber and Green functions. We establish the maximum theoretical error of these approximations and evaluate their performance when applied to the optical flow problem. Full article
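
For reference, the three smooth approximations of the absolute value named above are commonly written as follows. These are the standard forms found in the literature; the exact parameterization and scaling used in the article may differ.

```latex
% Common smooth approximations of |x| used in TV regularization
% (standard textbook forms; the paper's exact scaling may differ).
\[
\phi_{\mathrm{Charbonnier}}(x) = \sqrt{x^{2} + \epsilon^{2}}, \qquad \epsilon > 0,
\]
\[
\phi_{\mathrm{Huber}}(x) =
\begin{cases}
\dfrac{x^{2}}{2\epsilon}, & |x| \le \epsilon,\\[4pt]
|x| - \dfrac{\epsilon}{2}, & |x| > \epsilon,
\end{cases}
\qquad
\phi_{\mathrm{Green}}(x) = \epsilon \,\log\!\left(\cosh\!\left(\frac{x}{\epsilon}\right)\right).
\]
```

All three tend to $|x|$ as $\epsilon \to 0$, which is what makes them usable as differentiable surrogates for the TV term.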

Open Access Article: Open Datasets and Tools for Arabic Text Detection and Recognition in News Video Frames
J. Imaging 2018, 4(2), 32; https://doi.org/10.3390/jimaging4020032
Received: 26 November 2017 / Revised: 23 January 2018 / Accepted: 26 January 2018 / Published: 31 January 2018
PDF Full-text (5686 KB) | HTML Full-text | XML Full-text
Abstract
Recognizing texts in video is more complex than in other environments such as scanned documents. Video texts appear in various colors, unknown fonts and sizes, often affected by compression artifacts and low quality. In contrast to Latin texts, there are no publicly available datasets which cover all aspects of the Arabic Video OCR domain. This paper describes a new well-defined and annotated Arabic-Text-in-Video dataset called AcTiV 2.0. The dataset is dedicated especially to building and evaluating Arabic video text detection and recognition systems. AcTiV 2.0 contains 189 video clips serving as a raw material for creating 4063 key frames for the detection task and 10,415 cropped text images for the recognition task. AcTiV 2.0 is also distributed with its annotation and evaluation tools that are made open-source for standardization and validation purposes. This paper also reports on the evaluation of several systems tested under the proposed detection and recognition protocols. Full article
(This article belongs to the Special Issue Document Image Processing)

Open Access Article: Partition and Inclusion Hierarchies of Images: A Comprehensive Survey
J. Imaging 2018, 4(2), 33; https://doi.org/10.3390/jimaging4020033
Received: 3 December 2017 / Revised: 22 January 2018 / Accepted: 25 January 2018 / Published: 1 February 2018
PDF Full-text (810 KB) | HTML Full-text | XML Full-text
Abstract
The theory of hierarchical image representations has been well-established in Mathematical Morphology, and provides a suitable framework to handle images through objects or regions taking into account their scale. Such approaches have increased in popularity and been favourably compared to treating individual image elements in various domains and applications. This survey paper presents the development of hierarchical image representations over the last 20 years using the framework of component trees. We introduce two classes of component trees, partitioning and inclusion trees, and describe their general characteristics and differences. Examples of hierarchies for each of the classes are compared, with the resulting study aiming to serve as a guideline when choosing a hierarchical image representation for any application and image domain. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

Open Access Article: Denoising of X-ray Images Using the Adaptive Algorithm Based on the LPA-RICI Algorithm
J. Imaging 2018, 4(2), 34; https://doi.org/10.3390/jimaging4020034
Received: 5 December 2017 / Revised: 1 February 2018 / Accepted: 2 February 2018 / Published: 5 February 2018
Cited by 2 | PDF Full-text (4665 KB) | HTML Full-text | XML Full-text
Abstract
Diagnostics and treatments of numerous diseases are highly dependent on the quality of captured medical images. However, noise (during both acquisition and transmission) is one of the main factors that reduce their quality. This paper proposes an adaptive image denoising algorithm applied to enhance X-ray images. The algorithm is based on a modification of the intersection of confidence intervals (ICI) rule, called the relative intersection of confidence intervals (RICI) rule. For each image pixel separately, a 2D mask of adaptive size and shape is calculated and used in designing the 2D local polynomial approximation (LPA) filters for noise removal. One of the advantages of the proposed method is that the estimation of the noise-free pixel is performed independently for each image pixel; thus, the method lends itself to easy parallelization in order to improve its computational efficiency. The proposed method was compared to Gaussian smoothing filters, total variation denoising and fixed-size median filtering, and was shown to outperform them both visually and in terms of the peak signal-to-noise ratio (PSNR) by up to 7.99 dB. Full article
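
As background, the intersection of confidence intervals (ICI) rule picks, for each sample, the largest smoothing window whose confidence interval still overlaps those of all smaller windows; the relative (RICI) variant used in the article relaxes this with an overlap-ratio threshold. The 1-D sketch below illustrates the plain ICI rule with a simple moving-average estimator; it is a simplification, not the 2-D adaptive-mask LPA-RICI filter of the paper.

```python
# A 1-D sketch of the intersection of confidence intervals (ICI) rule with a simple
# moving-average (zeroth-order LPA) estimator. This illustrates the rule the article
# builds on, not the 2-D adaptive-mask LPA-RICI algorithm itself.
import numpy as np

def ici_denoise(signal, noise_sigma, gamma=2.0, windows=(1, 2, 4, 8, 16)):
    out = np.empty(len(signal), dtype=float)
    n = len(signal)
    for i in range(n):
        lower, upper = -np.inf, np.inf
        best = signal[i]
        for h in windows:
            lo, hi = max(0, i - h), min(n, i + h + 1)
            est = signal[lo:hi].mean()
            std = noise_sigma / np.sqrt(hi - lo)      # std of the local mean
            lower = max(lower, est - gamma * std)     # running intersection of intervals
            upper = min(upper, est + gamma * std)
            if lower > upper:                         # intervals no longer intersect:
                break                                 # keep the previous (largest valid) window
            best = est
        out[i] = best
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.concatenate([np.zeros(100), np.ones(100)])      # a step edge
    noisy = clean + rng.normal(0, 0.2, clean.size)
    denoised = ici_denoise(noisy, noise_sigma=0.2)
    print(float(np.abs(denoised - clean).mean()), float(np.abs(noisy - clean).mean()))
```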

Open Access Article: Image Features Based on Characteristic Curves and Local Binary Patterns for Automated HER2 Scoring
J. Imaging 2018, 4(2), 35; https://doi.org/10.3390/jimaging4020035
Received: 30 October 2017 / Revised: 1 February 2018 / Accepted: 2 February 2018 / Published: 5 February 2018
PDF Full-text (2995 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents novel feature descriptors and classification algorithms for the automated scoring of HER2 in Whole Slide Images (WSI) of breast cancer histology slides. Since a large amount of processing is involved in analyzing WSI images, the primary design goal has been to keep the computational complexity to the minimum possible level and to use simple, yet robust feature descriptors that can provide accurate classification of the slides. We propose two types of feature descriptors that encode important information about staining patterns and the percentage of staining present in ImmunoHistoChemistry (IHC)-stained slides. The first descriptor is called a characteristic curve, which is a smooth non-increasing curve that represents the variation of percentage of staining with saturation levels. The second new descriptor introduced in this paper is a local binary pattern (LBP) feature curve, which is also a non-increasing smooth curve that represents the local texture of the staining patterns. Both descriptors show excellent interclass variance and intraclass correlation and are suitable for the design of automatic HER2 classification algorithms. This paper gives the detailed theoretical aspects of the feature descriptors and also provides experimental results and a comparative analysis. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
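
The characteristic curve described above reduces to tracking how much stained tissue survives as a saturation threshold is raised. The sketch below illustrates only that construction; the staining mask used here is a crude placeholder and not the IHC colour model from the article.

```python
# A minimal sketch of a "characteristic curve": the fraction of stained pixels that
# remain as the saturation threshold is raised. The staining test used here is a rough
# placeholder, not the IHC colour model from the article.
import numpy as np

def characteristic_curve(hsv_image, saturation_levels=np.linspace(0, 1, 32)):
    """hsv_image: float array (H, W, 3) with S and V in [0, 1]."""
    s = hsv_image[..., 1]
    v = hsv_image[..., 2]
    stained = v < 0.9                     # crude placeholder for "stained tissue"
    total = max(stained.sum(), 1)
    # Non-increasing by construction: higher thresholds keep fewer pixels.
    return np.array([(stained & (s >= t)).sum() / total for t in saturation_levels])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fake_hsv = rng.random((64, 64, 3))    # hypothetical HSV tile
    curve = characteristic_curve(fake_hsv)
    assert np.all(np.diff(curve) <= 1e-12)   # the curve never increases
    print(curve[:5])
```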

Open Access Article: An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos
J. Imaging 2018, 4(2), 36; https://doi.org/10.3390/jimaging4020036
Received: 20 November 2017 / Revised: 29 January 2018 / Accepted: 1 February 2018 / Published: 7 February 2018
PDF Full-text (1495 KB) | HTML Full-text | XML Full-text
Abstract
Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

Open Access Article: Efficient Query Specific DTW Distance for Document Retrieval with Unlimited Vocabulary
J. Imaging 2018, 4(2), 37; https://doi.org/10.3390/jimaging4020037
Received: 31 October 2017 / Revised: 27 January 2018 / Accepted: 2 February 2018 / Published: 8 February 2018
PDF Full-text (414 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we improve the performance of the recently proposed Direct Query Classifier (DQC). The DQC is a classifier-based retrieval method, and in general such methods have been shown to be superior to OCR-based solutions for performing retrieval in many practical document image datasets. In the DQC, classifiers are trained for a set of frequent queries and seamlessly extended to rare and arbitrary queries. This extends the classifier-based retrieval paradigm to an unlimited number of classes (words) present in a language. The DQC requires indexing cut portions (n-grams) of the word image, and the DTW distance has been used for indexing. However, DTW is computationally slow and therefore limits the performance of the DQC. We introduce a query-specific DTW distance, which enables effective computation of global principal alignments for novel queries. Since the proposed query-specific DTW distance is a linear approximation of the DTW distance, it enhances the performance of the DQC. Unlike previous approaches, the proposed query-specific DTW distance uses both the class mean vectors and the query information for computing the global principal alignments for the query. Since the proposed method computes the global principal alignments using n-grams, it works well for both frequent and rare queries. We also use query expansion (QE) to further improve the performance of our query-specific DTW. This also allows us to seamlessly adapt our solution to new fonts, styles and collections. We have demonstrated the utility of the proposed technique on three different datasets. The proposed query-specific DTW performs well compared to previous DTW approximations. Full article
(This article belongs to the Special Issue Document Image Processing)
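
As background, the classic dynamic time warping (DTW) recurrence that the paper approximates is sketched below. This is the standard quadratic-time DTW, not the proposed query-specific linear approximation.

```python
# Classic dynamic time warping (DTW) between two feature sequences, shown only as
# background for the abstract above; the article's query-specific DTW is a linear
# approximation of this quadratic-time distance.
import numpy as np

def dtw_distance(a, b):
    """a, b: arrays of shape (n, d) and (m, d) holding per-column image features."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    word_a = rng.random((40, 8))      # hypothetical column features of a word image
    word_b = rng.random((55, 8))
    print(dtw_distance(word_a, word_b))
```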

Open Access Article: Digital Image Correlation of Strains at Profiled Wood Surfaces Exposed to Wetting and Drying
J. Imaging 2018, 4(2), 38; https://doi.org/10.3390/jimaging4020038
Received: 31 December 2017 / Revised: 31 January 2018 / Accepted: 6 February 2018 / Published: 10 February 2018
PDF Full-text (3185 KB) | HTML Full-text | XML Full-text
Abstract
We hypothesize that machining grooves and ridges into the surface of radiata pine deck boards will change the pattern of strains that develop when profiled boards are exposed to wetting and drying. Two wavy profiles were tested, and flat unprofiled boards acted as controls. Full-field surface strain data was collected using digital image correlation. Strains varied across the surface of both flat and profiled boards during wetting and drying. Profiling fundamentally changed surface strain patterns; strain maxima and minima developed in the profile ridges and grooves during wetting, respectively, but this pattern of strains reversed during drying. Such a pronounced reversal of strains was not observed when flat boards were exposed to wetting and drying, although there was a shift towards negative strains when flat boards were dried. We conclude that profiling changes surface strain distribution in deck boards exposed to wetting and drying, and causes high strains to develop in the grooves of profiled boards. These findings help explain why checks in profiled deck boards are mainly confined to profile grooves where they are difficult to see, and the commercial success of profiling at reducing the negative effects of checking on the appearance of wood decking. Full article

Open Access Article: A Study of Different Classifier Combination Approaches for Handwritten Indic Script Recognition
J. Imaging 2018, 4(2), 39; https://doi.org/10.3390/jimaging4020039
Received: 15 December 2017 / Revised: 6 February 2018 / Accepted: 8 February 2018 / Published: 13 February 2018
PDF Full-text (2203 KB) | HTML Full-text | XML Full-text
Abstract
Script identification is an essential step in document image processing, especially when the environment is multi-script/multilingual. To date, researchers have developed several methods for this problem. For this kind of complex pattern recognition problem, it is always difficult to decide which classifier would be the best choice. Moreover, different classifiers offer complementary information about the patterns to be classified. Therefore, combining classifiers in an intelligent way can be beneficial compared to using any single classifier. Keeping these facts in mind, in this paper the information provided by one shape-based and two texture-based features is combined using classifier combination techniques for word-level script recognition from handwritten document images. The CMATERdb8.4.1 database contains 7200 handwritten word samples belonging to 12 Indic scripts (600 per script) and is made freely available at https://code.google.com/p/cmaterdb/. The word samples from this database are classified based on the confidence scores provided by a Multi-Layer Perceptron (MLP) classifier. Major classifier combination techniques, including majority voting, Borda count, sum rule, product rule, max rule, the Dempster-Shafer (DS) rule of combination and secondary classifiers, are evaluated for this pattern recognition problem. A maximum accuracy of 98.45% is achieved on the validation set, an improvement of 7% over the best-performing individual classifier. Full article
(This article belongs to the Special Issue Document Image Processing)
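
The fixed combination rules listed in the abstract act directly on the per-class confidence scores of the individual classifiers. The sketch below applies the sum, product, max and majority-voting rules to a stack of score matrices; the trained MLP classifiers, Borda count, Dempster-Shafer combination and secondary classifiers from the study are not reproduced.

```python
# Simple fixed classifier-combination rules (sum, product, max, majority vote) applied
# to per-class confidence scores, as background for the abstract above.
import numpy as np

def combine(scores, rule="sum"):
    """scores: array (n_classifiers, n_samples, n_classes) of confidences in [0, 1]."""
    if rule == "sum":
        fused = scores.sum(axis=0)
    elif rule == "product":
        fused = scores.prod(axis=0)
    elif rule == "max":
        fused = scores.max(axis=0)
    elif rule == "vote":
        votes = scores.argmax(axis=2)                      # each classifier's decision
        n_classes = scores.shape[2]
        fused = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)], axis=1)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return fused.argmax(axis=1)                            # final label per sample

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    scores = rng.dirichlet(np.ones(12), size=(3, 5))       # 3 classifiers, 5 words, 12 scripts
    for rule in ("sum", "product", "max", "vote"):
        print(rule, combine(scores, rule))
```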

Open Access Article: Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE
J. Imaging 2018, 4(2), 40; https://doi.org/10.3390/jimaging4020040
Received: 6 December 2017 / Revised: 9 February 2018 / Accepted: 11 February 2018 / Published: 13 February 2018
PDF Full-text (11185 KB) | HTML Full-text | XML Full-text
Abstract
The energy dependence of the neutron cross section provides vastly different contrast mechanisms from polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. Here we present event centroiding, i.e., the determination of the center of gravity of a detection event on an imaging detector to allow sub-pixel spatial resolution, and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding has been demonstrated at thermal neutron sources, it has not been applied to energy-resolved neutron imaging, where the energy resolution needs to be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used for this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution. Full article
(This article belongs to the Special Issue Neutron Imaging)
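
Event centroiding itself is a centre-of-gravity calculation over the small cluster of pixels produced by one detection event, which yields sub-pixel coordinates. The snippet below is a generic illustration of that calculation and does not model the detector readout, timing or energy bookkeeping described in the article.

```python
# Generic centre-of-gravity (centroiding) of a single detection event: intensity-weighted
# mean position of the pixels in a small window, giving sub-pixel coordinates. Detector
# readout, timing and energy binning are not modelled here.
import numpy as np

def event_centroid(window, threshold=0.0):
    """window: 2-D array of pixel intensities for one detection event."""
    weights = np.where(window > threshold, window, 0.0)
    total = weights.sum()
    if total == 0:
        return None
    ys, xs = np.indices(window.shape)
    return (ys * weights).sum() / total, (xs * weights).sum() / total  # sub-pixel (y, x)

if __name__ == "__main__":
    # A hypothetical 5x5 event whose light spills mostly into the upper-left pixels.
    event = np.array([[0, 1, 2, 1, 0],
                      [1, 4, 6, 3, 0],
                      [0, 2, 3, 1, 0],
                      [0, 0, 1, 0, 0],
                      [0, 0, 0, 0, 0]], dtype=float)
    print(event_centroid(event))   # roughly (1.2, 1.8): finer than whole-pixel resolution
```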

Open Access Article: Handwritten Devanagari Character Recognition Using Layer-Wise Training of Deep Convolutional Neural Networks and Adaptive Gradient Methods
J. Imaging 2018, 4(2), 41; https://doi.org/10.3390/jimaging4020041
Received: 6 December 2017 / Revised: 9 February 2018 / Accepted: 12 February 2018 / Published: 13 February 2018
PDF Full-text (1171 KB) | HTML Full-text | XML Full-text
Abstract
Handwritten character recognition is currently receiving the attention of researchers because of possible applications in assistive technology for blind and visually impaired users, human–robot interaction, automatic data entry for business documents, etc. In this work, we propose a technique to recognize handwritten Devanagari characters using deep convolutional neural networks (DCNNs), one of the recent techniques adopted from the deep learning community. We experimented with the ISIDCHAR database provided by the Indian Statistical Institute (ISI), Kolkata, and the V2DMDCHAR database using six different DCNN architectures to evaluate the performance, and we also investigated the use of six recently developed adaptive gradient methods. A layer-wise DCNN training technique has been employed that helped to achieve the highest recognition accuracy and a faster convergence rate. The results of the layer-wise-trained DCNN compare favorably with those achieved by a shallow technique using handcrafted features and by a standard DCNN. Full article
(This article belongs to the Special Issue Document Image Processing)

Open Access Article: Analytical Study of Colour Spaces for Plant Pixel Detection
J. Imaging 2018, 4(2), 42; https://doi.org/10.3390/jimaging4020042
Received: 26 September 2017 / Revised: 12 February 2018 / Accepted: 12 February 2018 / Published: 16 February 2018
PDF Full-text (34457 KB) | HTML Full-text | XML Full-text
Abstract
Segmentation of regions of interest is an important pre-processing step in many colour image analysis procedures. Similarly, segmentation of plant objects in digital images is an important pre-processing step for effective phenotyping by image analysis. In this paper, we present results of a statistical analysis to establish the respective abilities of different colour space representations to detect plant pixels and separate them from background pixels. Our hypothesis is that the colour space representation for which the separation of the distributions representing object and background pixels is maximized is the best for the detection of plant pixels. The two pixel classes are modelled by Gaussian Mixture Models (GMMs). In our statistical modelling we make no prior assumptions on the number of Gaussians employed. Instead, a constant bandwidth mean-shift filter is used to cluster the data, with the number of clusters, and hence the number of Gaussians, being automatically determined. We have analysed the following representative colour spaces: RGB, rgb, HSV, YCbCr and CIE-Lab. We have analysed the colour space features from a two-class variance ratio perspective and compared the results of our model with this metric. The dataset for our empirical study consisted of 378 digital images (and their manual segmentations) of a variety of plant species: Arabidopsis, tobacco, wheat, and rye grass, imaged under different lighting conditions, in either indoor or outdoor environments, and with either controlled or uncontrolled backgrounds. We have found that the best segmentation of plants is obtained using the HSV colour space. This is supported by measures of the Earth Mover's Distance (EMD) between the GMM distributions of plant and background pixels. Full article
(This article belongs to the Special Issue Color Image Processing) Printed Edition available
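
A per-channel two-class variance ratio, one of the metrics mentioned above, contrasts between-class separation with within-class spread for plant versus background pixels. The sketch below computes it for the channels of an RGB image and its HSV conversion; the GMM/mean-shift modelling and EMD comparison from the article are not reproduced.

```python
# A minimal sketch of a two-class variance ratio used to compare colour channels:
# (between-class variance) / (within-class variance) of plant vs. background pixels.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def variance_ratio(channel, plant_mask):
    plant, background = channel[plant_mask], channel[~plant_mask]
    overall_mean = channel.mean()
    between = (plant.size * (plant.mean() - overall_mean) ** 2 +
               background.size * (background.mean() - overall_mean) ** 2) / channel.size
    within = (plant.size * plant.var() + background.size * background.var()) / channel.size
    return between / max(within, 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    rgb = rng.random((32, 32, 3))                   # hypothetical image, RGB in [0, 1]
    mask = rng.random((32, 32)) > 0.5               # hypothetical plant/background labels
    hsv = rgb_to_hsv(rgb)
    for name, img in (("RGB", rgb), ("HSV", hsv)):
        ratios = [variance_ratio(img[..., c], mask) for c in range(3)]
        print(name, [round(r, 4) for r in ratios])
```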

Open Access Article: Benchmarking of Document Image Analysis Tasks for Palm Leaf Manuscripts from Southeast Asia
J. Imaging 2018, 4(2), 43; https://doi.org/10.3390/jimaging4020043
Received: 15 December 2017 / Revised: 10 February 2018 / Accepted: 18 February 2018 / Published: 22 February 2018
PDF Full-text (7742 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a comprehensive test of the principal tasks in document image analysis (DIA), starting with binarization, text line segmentation, and isolated character/glyph recognition, and continuing on to word recognition and transliteration for a new and challenging collection of palm leaf manuscripts from Southeast Asia. This research presents, and is performed on, a complete dataset collection of Southeast Asian palm leaf manuscripts containing three different scripts: Khmer script from Cambodia, and Balinese and Sundanese scripts from Indonesia. The binarization task is evaluated with many methods, including the most recent ones from binarization competitions. The seam carving method is evaluated for the text line segmentation task and compared to a recently proposed text line segmentation method for palm leaf manuscripts. For the isolated character/glyph recognition task, the evaluation covers a handcrafted feature extraction method, a neural network with unsupervised feature learning, and a Convolutional Neural Network (CNN) based method. Finally, a Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM) based method is used for the word recognition and transliteration task on the palm leaf manuscripts. The results from all experiments provide the latest findings and a quantitative benchmark of palm leaf manuscript analysis for researchers in the DIA community. Full article
(This article belongs to the Special Issue Document Image Processing)

Open Access Article: Neutron Imaging at LANSCE—From Cold to Ultrafast
J. Imaging 2018, 4(2), 45; https://doi.org/10.3390/jimaging4020045
Received: 5 December 2017 / Revised: 9 February 2018 / Accepted: 9 February 2018 / Published: 23 February 2018
PDF Full-text (17080 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, neutron radiography and tomography have been applied at different beam lines at the Los Alamos Neutron Science Center (LANSCE), covering a very wide neutron energy range. The field of energy-resolved neutron imaging with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as quantitative density measurements, was pioneered at the Target 1 (Lujan center), Flight Path 5 beam line and continues to be refined. Applications include imaging of metallic and ceramic nuclear fuels, fission gas measurements, tomography of fossils and studies of dopants in scintillators. The technique provides the ability to characterize materials opaque to thermal neutrons and to utilize neutron resonance analysis codes to quantify isotopes to within 0.1 atom %. The latter also allows measuring fuel enrichment levels or the pressure of fission gas remotely. More recently, the cold neutron spectrum at the ASTERIX beam line, also located at Target 1, was used to demonstrate phase contrast imaging with pulsed neutrons. This extends the capabilities for imaging of thin and transparent materials at LANSCE. In contrast, high-energy neutron imaging at LANSCE, using unmoderated fast spallation neutrons from Target 4 [Weapons Neutron Research (WNR) facility], has been developed for applications in imaging of dense, thick objects. Using fast (ns) time-of-flight imaging enables testing and developing imaging at specific, selected MeV neutron energies. The 4FP-60R beam line has been reconfigured with increased shielding and new, larger collimation dedicated to fast neutron imaging. The exploration of ways in which pulsed neutron beams and the time-of-flight method can provide additional benefits is continuing. We will describe the facilities and instruments, present application examples and recent results of all these efforts at LANSCE. Full article
(This article belongs to the Special Issue Neutron Imaging)

Other


Open Access Short Note: HF_IDS_Cam: Fast Video Capture with ImageJ for Real-Time Analysis
J. Imaging 2018, 4(2), 44; https://doi.org/10.3390/jimaging4020044
Received: 18 December 2017 / Revised: 16 February 2018 / Accepted: 21 February 2018 / Published: 23 February 2018
PDF Full-text (605 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Fast online video analysis is currently a key issue for dynamic studies in biology; however, very few tools are available for this purpose. Here we present an ImageJ plug-in, HF_IDS_Cam, which allows for video capture at very high speeds using IDS (Imaging Development Systems GmbH) cameras and the image analysis software ImageJ. The software has been optimized for real-time video analysis with ImageJ native functions and other plug-ins and scripts. The plug-in was written in Java and requires ImageJ 1.47v or higher. HF_IDS_Cam offers a wide range of applications for the exploration of dynamic phenomena in biology, from in vitro/ex vivo studies, such as fast fluorescent calcium imaging and voltage optical mapping in cardiac myocytes and neurons, to in vivo behavioral studies. Full article
