J. Imaging, Volume 4, Issue 2 (February 2018) – 20 articles

Cover Story: This article provides an overview of neutron imaging at the Los Alamos Neutron Science Center (LANSCE). Using new instruments that exploit LANSCE's very broad range of neutron energies, and the neutron-energy selection provided by short-pulsed spallation neutron sources, we have made advances in elemental and isotopic imaging using nuclear resonances, and in high-energy neutron imaging of dense, thick objects. Examples of imaging with thermal, resonance, high-energy and cold neutrons are given, for objects ranging from nuclear fuel pellets to fossils and living plants. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
26 pages, 17080 KiB  
Article
Neutron Imaging at LANSCE—From Cold to Ultrafast
by Ronald O. Nelson, Sven C. Vogel, James F. Hunter, Erik B. Watkins, Adrian S. Losko, Anton S. Tremsin, Nicholas P. Borges, Theresa E. Cutler, Lee T. Dickman, Michelle A. Espy, Donald Cort Gautier, Amanda C. Madden, Jaroslaw Majewski, Michael W. Malone, Douglas R. Mayo, Kenneth J. McClellan, David S. Montgomery, Shea M. Mosby, Andrew T. Nelson, Kyle J. Ramos, Richard C. Schirato, Katlin Schroeder, Sanna A. Sevanto, Alicia L. Swift, Long K. Vo, Thomas E. Williamson and Nicola M. Winch
J. Imaging 2018, 4(2), 45; https://doi.org/10.3390/jimaging4020045 - 23 Feb 2018
Cited by 31 | Viewed by 12530
Abstract
In recent years, neutron radiography and tomography have been applied at different beam lines at the Los Alamos Neutron Science Center (LANSCE), covering a very wide neutron energy range. The field of energy-resolved neutron imaging with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Target 1 (Lujan Center), Flight Path 5 beam line and continues to be refined. Applications include imaging of metallic and ceramic nuclear fuels, fission gas measurements, tomography of fossils and studies of dopants in scintillators. The technique provides the ability to characterize materials opaque to thermal neutrons and to utilize neutron resonance analysis codes to quantify isotopes to within 0.1 atom %. The latter also allows remote measurement of fuel enrichment levels or fission gas pressure. More recently, the cold neutron spectrum at the ASTERIX beam line, also located at Target 1, was used to demonstrate phase contrast imaging with pulsed neutrons. This extends the capabilities for imaging of thin and transparent materials at LANSCE. In contrast, high-energy neutron imaging at LANSCE, using unmoderated fast spallation neutrons from Target 4 [Weapons Neutron Research (WNR) facility], has been developed for applications in imaging of dense, thick objects. Fast (ns) time-of-flight imaging enables testing and developing imaging at specific, selected MeV neutron energies. The 4FP-60R beam line has been reconfigured with increased shielding and new, larger collimation dedicated to fast neutron imaging. The exploration of ways in which pulsed neutron beams and the time-of-flight method can provide additional benefits is continuing. We describe the facilities and instruments, and present application examples and recent results of all these efforts at LANSCE. Full article
(This article belongs to the Special Issue Neutron Imaging)
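The energy selection mentioned in the abstract relies on neutron time of flight at the pulsed source. For reference (a standard relation, not taken from the paper), the non-relativistic link between a neutron's kinetic energy E, the flight-path length L and the flight time t is:

```latex
% Standard neutron time-of-flight energy relation (non-relativistic);
% m_n is the neutron mass. A convenient numerical form is also given.
E = \frac{1}{2}\, m_n \left(\frac{L}{t}\right)^{2}
\;\approx\; 5.23\,\frac{\bigl(L\,[\mathrm{m}]\bigr)^{2}}{\bigl(t\,[\mathrm{ms}]\bigr)^{2}}\;\mathrm{meV}
```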

5 pages, 605 KiB  
Short Note
HF_IDS_Cam: Fast Video Capture with ImageJ for Real-Time Analysis
by Côme Pasqualin, François Gannier, Pierre Bredeloux and Véronique Maupoil
J. Imaging 2018, 4(2), 44; https://doi.org/10.3390/jimaging4020044 - 23 Feb 2018
Cited by 4 | Viewed by 5419
Abstract
Fast online video analysis is currently a key issue for dynamic studies in biology; however, very few tools are available for this purpose. Here we present an ImageJ plug-in, HF_IDS_Cam, which allows video capture at very high speeds using IDS (Imaging Development Systems GmbH) cameras and the image analysis software ImageJ. The software has been optimized for real-time video analysis with ImageJ native functions and other plug-ins and scripts. The plug-in was written in Java and requires ImageJ 1.47v or higher. HF_IDS_Cam offers a wide range of applications for the exploration of dynamic phenomena in biology, from in vitro/ex vivo studies, such as fast fluorescent calcium imaging and voltage optical mapping in cardiac myocytes and neurons, to in vivo behavioral studies. Full article

27 pages, 7742 KiB  
Article
Benchmarking of Document Image Analysis Tasks for Palm Leaf Manuscripts from Southeast Asia
by Made Windu Antara Kesiman, Dona Valy, Jean-Christophe Burie, Erick Paulus, Mira Suryani, Setiawan Hadi, Michel Verleysen, Sophea Chhun and Jean-Marc Ogier
J. Imaging 2018, 4(2), 43; https://doi.org/10.3390/jimaging4020043 - 22 Feb 2018
Cited by 32 | Viewed by 8484
Abstract
This paper presents a comprehensive test of the principal tasks in document image analysis (DIA), starting with binarization, text line segmentation, and isolated character/glyph recognition, and continuing on to word recognition and transliteration, for a new and challenging collection of palm leaf manuscripts from Southeast Asia. The research is performed on a complete dataset collection of Southeast Asian palm leaf manuscripts containing three different scripts: Khmer script from Cambodia, and Balinese and Sundanese scripts from Indonesia. The binarization task is evaluated on many methods, up to the latest submitted to recent binarization competitions. For the text line segmentation task, the seam carving method is evaluated and compared to a recently proposed text line segmentation method for palm leaf manuscripts. For the isolated character/glyph recognition task, the evaluation covers a handcrafted feature extraction method, a neural network with unsupervised feature learning, and a Convolutional Neural Network (CNN) based method. Finally, a Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM) based method is used to analyze the word recognition and transliteration task for the palm leaf manuscripts. The results from all experiments provide the latest findings and a quantitative benchmark of palm leaf manuscript analysis for researchers in the DIA community. Full article
(This article belongs to the Special Issue Document Image Processing)

12 pages, 34457 KiB  
Article
Analytical Study of Colour Spaces for Plant Pixel Detection
by Pankaj Kumar and Stanley J. Miklavcic
J. Imaging 2018, 4(2), 42; https://doi.org/10.3390/jimaging4020042 - 16 Feb 2018
Cited by 7 | Viewed by 5145
Abstract
Segmentation of regions of interest is an important pre-processing step in many colour image analysis procedures. Similarly, segmentation of plant objects in digital images is an important preprocessing step for effective phenotyping by image analysis. In this paper, we present results of a statistical analysis to establish the respective abilities of different colour space representations to detect plant pixels and separate them from background pixels. Our hypothesis is that the colour space representation for which the separation of the distributions representing object and background pixels is maximized is the best for the detection of plant pixels. The two pixel classes are modelled by Gaussian Mixture Models (GMMs). In our statistical modelling we make no prior assumptions on the number of Gaussians employed. Instead, a constant-bandwidth mean-shift filter is used to cluster the data, with the number of clusters, and hence the number of Gaussians, being determined automatically. We have analysed the following representative colour spaces: RGB, rgb, HSV, YCbCr and CIE-Lab. We have analysed the colour space features from a two-class variance ratio perspective and compared the results of our model with this metric. The dataset for our empirical study consisted of 378 digital images (and their manual segmentations) of a variety of plant species: Arabidopsis, tobacco, wheat, and rye grass, imaged under different lighting conditions, in either indoor or outdoor environments, and with either controlled or uncontrolled backgrounds. We found that the best segmentation of plant pixels is obtained using the HSV colour space. This is supported by measures of the Earth Mover's Distance (EMD) between the GMM distributions of plant and background pixels. Full article
(This article belongs to the Special Issue Color Image Processing)
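As an illustration of the two-class variance ratio mentioned in the abstract, here is a minimal sketch (not the authors' code); the function name `variance_ratio` and the synthetic hue values in the usage example are assumptions.

```python
# A minimal sketch of a Fisher-style two-class variance ratio for a single
# colour-space channel (e.g. the H channel of HSV). `plant` and `background`
# are assumed to be 1-D arrays of channel values for the two pixel classes.
import numpy as np

def variance_ratio(plant: np.ndarray, background: np.ndarray) -> float:
    """Ratio of between-class to within-class variance for one feature."""
    between = (plant.mean() - background.mean()) ** 2
    within = plant.var() + background.var()
    return between / within

# Example with synthetic values: a higher ratio suggests the channel
# separates plant pixels from background pixels more cleanly.
rng = np.random.default_rng(0)
plant = rng.normal(0.30, 0.05, 10_000)       # stand-in hue values, plant pixels
background = rng.normal(0.55, 0.10, 10_000)  # stand-in hue values, background
print(variance_ratio(plant, background))
```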

14 pages, 1171 KiB  
Article
Handwritten Devanagari Character Recognition Using Layer-Wise Training of Deep Convolutional Neural Networks and Adaptive Gradient Methods
by Mahesh Jangid and Sumit Srivastava
J. Imaging 2018, 4(2), 41; https://doi.org/10.3390/jimaging4020041 - 13 Feb 2018
Cited by 69 | Viewed by 9107
Abstract
Handwritten character recognition is currently receiving attention from researchers because of possible applications in assistive technology for blind and visually impaired users, human–robot interaction, automatic data entry for business documents, etc. In this work, we propose a technique to recognize handwritten Devanagari characters using deep convolutional neural networks (DCNN), one of the recent techniques adopted from the deep learning community. We experimented with the ISIDCHAR database provided by the Indian Statistical Institute (ISI), Kolkata, and the V2DMDCHAR database, using six different DCNN architectures to evaluate performance, and also investigated the use of six recently developed adaptive gradient methods. A layer-wise training technique for the DCNN has been employed, which helped to achieve the highest recognition accuracy and a faster convergence rate. The results of the layer-wise-trained DCNN compare favorably with those achieved by a shallow technique based on handcrafted features and by a standard DCNN. Full article
(This article belongs to the Special Issue Document Image Processing)

13 pages, 11185 KiB  
Article
Event Centroiding Applied to Energy-Resolved Neutron Imaging at LANSCE
by Nicholas P. Borges, Adrian S. Losko and Sven C. Vogel
J. Imaging 2018, 4(2), 40; https://doi.org/10.3390/jimaging4020040 - 13 Feb 2018
Cited by 10 | Viewed by 5324
Abstract
The energy dependence of the neutron cross section provides vastly different contrast mechanisms from those of polychromatic neutron radiography if neutron energies can be selected for imaging applications. In recent years, energy-resolved neutron imaging (ERNI) with epi-thermal neutrons, utilizing neutron absorption resonances for contrast as well as for quantitative density measurements, was pioneered at the Flight Path 5 beam line at LANSCE and continues to be refined. Here we present event centroiding, i.e., the determination of the center of gravity of a detection event on an imaging detector to allow sub-pixel spatial resolution, and apply it to the many frames collected for energy-resolved neutron imaging at a pulsed neutron source. While event centroiding has been demonstrated at thermal neutron sources, it had not been applied to energy-resolved neutron imaging, where the energy resolution needs to be preserved, and we present a quantification of the achievable resolution as a function of neutron energy. For the 55 μm pixel size of the detector used in this study, we found a resolution improvement from ~80 μm to ~22 μm using pixel centroiding while fully preserving the energy resolution. Full article
(This article belongs to the Special Issue Neutron Imaging)
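To make the centroiding idea concrete, here is a minimal sketch (an assumed illustration, not the detector software used in the paper) of an intensity-weighted centre-of-gravity calculation for a small pixel cluster; the function name `centroid` and the 3×3 example cluster are hypothetical.

```python
# A minimal sketch of event centroiding: the intensity-weighted centre of
# gravity of a small cluster of pixels gives a sub-pixel event position.
import numpy as np

def centroid(cluster: np.ndarray) -> tuple[float, float]:
    """Return the (row, col) centre of gravity of a 2-D intensity cluster."""
    total = cluster.sum()
    rows, cols = np.indices(cluster.shape)
    return (rows * cluster).sum() / total, (cols * cluster).sum() / total

# Example: a 3x3 cluster whose charge is skewed towards the upper-left pixel.
event = np.array([[4.0, 2.0, 0.0],
                  [2.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0]])
print(centroid(event))  # sub-pixel position, roughly (0.33, 0.33)
```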

21 pages, 2203 KiB  
Article
A Study of Different Classifier Combination Approaches for Handwritten Indic Script Recognition
by Anirban Mukhopadhyay, Pawan Kumar Singh, Ram Sarkar and Mita Nasipuri
J. Imaging 2018, 4(2), 39; https://doi.org/10.3390/jimaging4020039 - 13 Feb 2018
Cited by 13 | Viewed by 6067
Abstract
Script identification is an essential step in document image processing, especially when the environment is multi-script/multilingual. To date, researchers have developed several methods for this problem. For this kind of complex pattern recognition problem, it is always difficult to decide which classifier would be the best choice. Moreover, it is also true that different classifiers offer complementary information about the patterns to be classified. Therefore, combining classifiers in an intelligent way can be beneficial compared to using any single classifier. Keeping these facts in mind, in this paper, information provided by one shape-based and two texture-based features is combined using classifier combination techniques for word-level script recognition from handwritten document images. The CMATERdb8.4.1 database contains 7200 handwritten word samples belonging to 12 Indic scripts (600 per script) and is made freely available at https://code.google.com/p/cmaterdb/. The word samples from this database are classified based on the confidence scores provided by the Multi-Layer Perceptron (MLP) classifier. Major classifier combination techniques, including majority voting, Borda count, sum rule, product rule, max rule, the Dempster-Shafer (DS) rule of combination and secondary classifiers, are evaluated for this pattern recognition problem. A maximum accuracy of 98.45% is achieved on the validation set, an improvement of 7% over the best-performing individual classifier. Full article
(This article belongs to the Special Issue Document Image Processing)
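As a concrete illustration of some of the combination rules named in the abstract, here is a minimal sketch (an assumption, not the paper's implementation) of the sum, product, max and majority-voting rules applied to per-classifier confidence scores; the array shapes and example scores are hypothetical.

```python
# Simple classifier-combination rules over per-classifier confidence scores.
# `scores` has shape (n_classifiers, n_classes); each row sums to 1.
import numpy as np

def sum_rule(scores):      return int(np.argmax(scores.sum(axis=0)))
def product_rule(scores):  return int(np.argmax(scores.prod(axis=0)))
def max_rule(scores):      return int(np.argmax(scores.max(axis=0)))

def majority_vote(scores):
    votes = np.argmax(scores, axis=1)            # each classifier's top class
    return int(np.bincount(votes, minlength=scores.shape[1]).argmax())

# Example: three classifiers scoring three scripts; the rules can disagree.
scores = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.4, 0.4, 0.2]])
for rule in (sum_rule, product_rule, max_rule, majority_vote):
    print(rule.__name__, rule(scores))
```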

12 pages, 3185 KiB  
Article
Digital Image Correlation of Strains at Profiled Wood Surfaces Exposed to Wetting and Drying
by Julian Mallet, Shankar Kalyanasundaram and Philip D. Evans
J. Imaging 2018, 4(2), 38; https://doi.org/10.3390/jimaging4020038 - 10 Feb 2018
Cited by 11 | Viewed by 4955
Abstract
We hypothesize that machining grooves and ridges into the surface of radiata pine deck boards will change the pattern of strains that develop when profiled boards are exposed to wetting and drying. Two wavy profiles were tested, and flat unprofiled boards acted as controls. Full-field surface strain data was collected using digital image correlation. Strains varied across the surface of both flat and profiled boards during wetting and drying. Profiling fundamentally changed surface strain patterns; strain maxima and minima developed in the profile ridges and grooves during wetting, respectively, but this pattern of strains reversed during drying. Such a pronounced reversal of strains was not observed when flat boards were exposed to wetting and drying, although there was a shift towards negative strains when flat boards were dried. We conclude that profiling changes surface strain distribution in deck boards exposed to wetting and drying, and causes high strains to develop in the grooves of profiled boards. These findings help explain why checks in profiled deck boards are mainly confined to profile grooves where they are difficult to see, and the commercial success of profiling at reducing the negative effects of checking on the appearance of wood decking. Full article

16 pages, 414 KiB  
Article
Efficient Query Specific DTW Distance for Document Retrieval with Unlimited Vocabulary
by Gattigorla Nagendar, Viresh Ranjan, Gaurav Harit and C. V. Jawahar
J. Imaging 2018, 4(2), 37; https://doi.org/10.3390/jimaging4020037 - 08 Feb 2018
Cited by 1 | Viewed by 4474
Abstract
In this paper, we improve the performance of the recently proposed Direct Query Classifier (DQC). The DQC is a classifier-based retrieval method, and such methods have in general been shown to be superior to OCR-based solutions for performing retrieval in many practical document image datasets. In the DQC, classifiers are trained for a set of frequent queries and seamlessly extended to rare and arbitrary queries. This extends the classifier-based retrieval paradigm to an unlimited number of classes (words) present in a language. The DQC requires indexing cut portions (n-grams) of the word image, and the DTW distance has been used for indexing. However, DTW is computationally slow and therefore limits the performance of the DQC. We introduce a query-specific DTW distance, which enables effective computation of global principal alignments for novel queries. Since the proposed query-specific DTW distance is a linear approximation of the DTW distance, it enhances the performance of the DQC. Unlike previous approaches, the proposed query-specific DTW distance uses both the class mean vectors and the query information for computing the global principal alignments for the query. Since the proposed method computes the global principal alignments using n-grams, it works well for both frequent and rare queries. We also use query expansion (QE) to further improve the performance of our query-specific DTW. This also allows us to seamlessly adapt our solution to new fonts, styles and collections. We have demonstrated the utility of the proposed technique on three different datasets. The proposed query-specific DTW performs well compared to previous DTW approximations. Full article
(This article belongs to the Special Issue Document Image Processing)
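For reference, here is a minimal sketch of the classical DTW distance that the paper approximates; this is the baseline dynamic-programming algorithm, not the proposed query-specific variant, and the short feature sequences in the usage example are synthetic.

```python
# Classical DTW distance between two feature sequences, e.g. column-profile
# features of word images. O(n*m) dynamic programming; this is the cost the
# query-specific approximation is designed to avoid at query time.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (length, dim) feature sequences; returns the DTW alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Example with two short 2-D feature sequences.
a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
b = np.array([[0.0, 0.0], [2.0, 2.0]])
print(dtw(a, b))  # ~1.414: the middle frame of `a` aligns with an end of `b`
```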

25 pages, 1495 KiB  
Article
An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos
by B. Ravi Kiran, Dilip Mathew Thomas and Ranjith Parakkal
J. Imaging 2018, 4(2), 36; https://doi.org/10.3390/jimaging4020036 - 07 Feb 2018
Cited by 325 | Viewed by 23124
Abstract
Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

13 pages, 2995 KiB  
Article
Image Features Based on Characteristic Curves and Local Binary Patterns for Automated HER2 Scoring
by Ramakrishnan Mukundan
J. Imaging 2018, 4(2), 35; https://doi.org/10.3390/jimaging4020035 - 05 Feb 2018
Cited by 15 | Viewed by 4803
Abstract
This paper presents novel feature descriptors and classification algorithms for the automated scoring of HER2 in Whole Slide Images (WSI) of breast cancer histology slides. Since a large amount of processing is involved in analyzing WSIs, the primary design goal has been to keep the computational complexity to the minimum possible level and to use simple, yet robust feature descriptors that can provide accurate classification of the slides. We propose two types of feature descriptors that encode important information about staining patterns and the percentage of staining present in ImmunoHistoChemistry (IHC)-stained slides. The first descriptor is called a characteristic curve, which is a smooth non-increasing curve that represents the variation of the percentage of staining with saturation levels. The second new descriptor introduced in this paper is a local binary pattern (LBP) feature curve, which is also a non-increasing smooth curve that represents the local texture of the staining patterns. Both descriptors show excellent interclass variance and intraclass correlation and are suitable for the design of automatic HER2 classification algorithms. This paper gives the detailed theoretical aspects of the feature descriptors and also provides experimental results and a comparative analysis. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
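To illustrate one plausible reading of the characteristic-curve descriptor (a sketch based only on the abstract's description, not the author's exact definition or code), the following computes, for each saturation threshold, the fraction of stained pixels at or above that threshold; the function name and the synthetic pixel values are hypothetical.

```python
# A "characteristic curve"-style descriptor: for each saturation threshold,
# the fraction of stained pixels whose saturation meets or exceeds it.
# By construction the curve is non-increasing in the threshold.
import numpy as np

def characteristic_curve(saturation: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """saturation: per-pixel saturation values in [0, 1] for stained pixels;
    levels: increasing sequence of saturation thresholds."""
    return np.array([(saturation >= s).mean() for s in levels])

# Example with synthetic saturation values standing in for IHC-stained pixels.
rng = np.random.default_rng(1)
sat = rng.beta(2.0, 5.0, 50_000)
levels = np.linspace(0.0, 1.0, 11)
print(characteristic_curve(sat, levels))   # starts at 1.0 and decays towards 0
```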

15 pages, 4665 KiB  
Article
Denoising of X-ray Images Using the Adaptive Algorithm Based on the LPA-RICI Algorithm
by Ivica Mandić, Hajdi Peić, Jonatan Lerga and Ivan Štajduhar
J. Imaging 2018, 4(2), 34; https://doi.org/10.3390/jimaging4020034 - 05 Feb 2018
Cited by 21 | Viewed by 8202
Abstract
Diagnostics and treatments of numerous diseases are highly dependent on the quality of captured medical images. However, noise (introduced during both acquisition and transmission) is one of the main factors that reduce their quality. This paper proposes an adaptive image denoising algorithm applied to enhance X-ray images. The algorithm is based on a modification of the intersection of confidence intervals (ICI) rule, called the relative intersection of confidence intervals (RICI) rule. For each image pixel separately, a 2D mask of adaptive size and shape is calculated and used in designing the 2D local polynomial approximation (LPA) filters for noise removal. One of the advantages of the proposed method is that the estimation of the noise-free pixel is performed independently for each image pixel; thus, the method lends itself to easy parallelization, improving its computational efficiency. The proposed method was compared to Gaussian smoothing filters, total variation denoising and fixed-size median filtering, and was shown to outperform them both visually and in terms of the peak signal-to-noise ratio (PSNR) by up to 7.99 dB. Full article
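For reference, the gains above are reported in terms of PSNR; the standard definition for an M×N reference image I and its estimate Î, with MAX the maximum possible pixel value (e.g., 255 for 8-bit images), is shown below (a standard formula, not taken from the paper):

```latex
% Peak signal-to-noise ratio of an estimate \hat{I} against a reference I.
\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-\hat{I}(i,j)\bigr)^{2},
\qquad
\mathrm{PSNR} = 10\,\log_{10}\!\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}}
```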

31 pages, 810 KiB  
Article
Partition and Inclusion Hierarchies of Images: A Comprehensive Survey
by Petra Bosilj, Ewa Kijak and Sébastien Lefèvre
J. Imaging 2018, 4(2), 33; https://doi.org/10.3390/jimaging4020033 - 01 Feb 2018
Cited by 28 | Viewed by 7522
Abstract
The theory of hierarchical image representations has been well-established in Mathematical Morphology, and provides a suitable framework to handle images through objects or regions taking into account their scale. Such approaches have increased in popularity and been favourably compared to treating individual image elements in various domains and applications. This survey paper presents the development of hierarchical image representations over the last 20 years using the framework of component trees. We introduce two classes of component trees, partitioning and inclusion trees, and describe their general characteristics and differences. Examples of hierarchies for each of the classes are compared, with the resulting study aiming to serve as a guideline when choosing a hierarchical image representation for any application and image domain. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

19 pages, 5686 KiB  
Article
Open Datasets and Tools for Arabic Text Detection and Recognition in News Video Frames
by Oussama Zayene, Sameh Masmoudi Touj, Jean Hennebert, Rolf Ingold and Najoua Essoukri Ben Amara
J. Imaging 2018, 4(2), 32; https://doi.org/10.3390/jimaging4020032 - 31 Jan 2018
Cited by 10 | Viewed by 8724
Abstract
Recognizing text in video is more complex than in other environments such as scanned documents. Video text appears in various colors and in unknown fonts and sizes, and is often affected by compression artifacts and low quality. In contrast to Latin text, there are no publicly available datasets which cover all aspects of the Arabic Video OCR domain. This paper describes a new well-defined and annotated Arabic-Text-in-Video dataset called AcTiV 2.0. The dataset is dedicated especially to building and evaluating Arabic video text detection and recognition systems. AcTiV 2.0 contains 189 video clips serving as raw material for creating 4063 key frames for the detection task and 10,415 cropped text images for the recognition task. AcTiV 2.0 is also distributed with its annotation and evaluation tools, which are made open-source for standardization and validation purposes. This paper also reports on the evaluation of several systems tested under the proposed detection and recognition protocols. Full article
(This article belongs to the Special Issue Document Image Processing)

12 pages, 4973 KiB  
Article
An Investigation of Smooth TV-Like Regularization in the Context of the Optical Flow Problem
by El Mostafa Kalmoun
J. Imaging 2018, 4(2), 31; https://doi.org/10.3390/jimaging4020031 - 31 Jan 2018
Cited by 8 | Viewed by 4334
Abstract
Total variation (TV) is widely used in many image processing problems, including the regularization of optical flow estimation. In order to deal with the non-differentiability of the TV regularization term, smooth approximations have been considered in the literature. In this paper, we investigate the use of three known smooth TV approximations, namely the Charbonnier, Huber and Green functions. We establish the maximum theoretical error of these approximations and discuss their performance evaluation when applied to the optical flow problem. Full article
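For orientation, commonly used forms of these three smooth approximations of |s| are listed below, with ε > 0 a small smoothing parameter; these are the standard textbook forms, and the paper's exact parameterizations may differ:

```latex
% Common smooth approximations of |s| used in TV-like regularization.
\phi_{\mathrm{Charbonnier}}(s) = \sqrt{s^{2} + \epsilon^{2}}, \qquad
\phi_{\mathrm{Huber}}(s) =
\begin{cases}
  \dfrac{s^{2}}{2\epsilon}, & |s| \le \epsilon,\\[4pt]
  |s| - \dfrac{\epsilon}{2}, & |s| > \epsilon,
\end{cases} \qquad
\phi_{\mathrm{Green}}(s) = \epsilon\,\log\!\bigl(\cosh(s/\epsilon)\bigr)
```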

23 pages, 7293 KiB  
Article
Illusion and Illusoriness of Color and Coloration
by Baingio Pinna, Daniele Porcheddu and Katia Deiana
J. Imaging 2018, 4(2), 30; https://doi.org/10.3390/jimaging4020030 - 30 Jan 2018
Cited by 2 | Viewed by 13496
Abstract
In this work, through a phenomenological analysis, we studied the perception of the chromatic illusion and illusoriness. The necessary condition for an illusion to occur is the discovery of a mismatch/disagreement between the geometrical/physical domain and the phenomenal one. The illusoriness is instead a phenomenal attribute related to a sense of strangeness, deception, singularity, mendacity, and oddity. The main purpose of this work is to study the phenomenology of chromatic illusion vs. illusoriness, which is useful for shedding new light on the no-man’s land between “sensory” and “cognitive” processes that have not been fully explored. Some basic psychological and biological implications for living organisms are deduced. Full article
(This article belongs to the Special Issue Color Image Processing)

15 pages, 6231 KiB  
Article
Estimating Full Regional Skeletal Muscle Fibre Orientation from B-Mode Ultrasound Images Using Convolutional, Residual, and Deconvolutional Neural Networks
by Ryan Cunningham, María B. Sánchez, Gregory May and Ian Loram
J. Imaging 2018, 4(2), 29; https://doi.org/10.3390/jimaging4020029 - 29 Jan 2018
Cited by 34 | Viewed by 10055
Abstract
This paper presents an investigation into the feasibility of using deep learning methods for developing arbitrary full spatial resolution regression analysis of B-mode ultrasound images of human skeletal muscle. In this study, we focus on full spatial analysis of muscle fibre orientation, since there is an existing body of work with which to compare results. Previous attempts to automatically estimate fibre orientation from ultrasound are not adequate: they often require manual region selection and feature engineering, provide low-resolution estimates (one angle per muscle), and rarely attempt deep muscles. We build upon our previous work, in which automatic segmentation was used with plain convolutional neural network (CNN) and deep residual convolutional network (ResNet) architectures to predict a low-resolution map of fibre orientation in extracted muscle regions. Here, we use deconvolutions and max-unpooling (DCNN) to regularise and improve predicted fibre orientation maps for the entire image, including deep muscles, removing the need for automatic segmentation, and we compare our results with the CNN and ResNet, as well as with a previously established feature engineering method, on the same task. Dynamic ultrasound image sequences of the calf muscles were acquired (25 Hz) from 8 healthy volunteers (4 male, ages 25–36, median 30). A combination of expert annotation and interpolation/extrapolation provided labels of regional fibre orientation for each image. Neural networks (CNN, ResNet, DCNN) were then trained both with and without dropout using leave-one-out cross-validation. Our results demonstrate robust estimation of full spatial fibre orientation within approximately 6° error, an improvement on previous methods. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

17 pages, 1859 KiB  
Article
Exploiting Multiple Detections for Person Re-Identification
by Amran Bhuiyan, Alessandro Perina and Vittorio Murino
J. Imaging 2018, 4(2), 28; https://doi.org/10.3390/jimaging4020028 - 23 Jan 2018
Cited by 13 | Viewed by 5227
Abstract
Re-identification systems aim at recognizing the same individuals across multiple cameras, and one of the most relevant problems is that the appearance of the same individual varies across cameras due to illumination and viewpoint changes. This paper proposes the use of cumulative weighted brightness transfer functions (CWBTFs) to model these appearance variations. Different from recently proposed methods which only consider pairs of images to learn a brightness transfer function, we exploit a multiple-frame-based learning approach that leverages consecutive detections of each individual to transfer the appearance. We first present a CWBTF framework for the task of transforming appearance from one camera to another. We then present a re-identification framework in which we segment the pedestrian images into meaningful parts and extract features from such parts, as well as from the whole body. Jointly, both of these frameworks model the appearance variations more robustly. We tested our approach on standard multi-camera surveillance datasets, showing consistent and significant improvements over existing methods on three different datasets without any additional cost. Our approach is general and can be applied to any appearance-based method. Full article
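As background for the brightness-transfer idea, here is a minimal sketch (an assumption, not the paper's CWBTF) of a basic brightness transfer function obtained by matching cumulative histograms between two cameras; the function name `btf` and the synthetic brightness samples are hypothetical, and the cumulative weighted aggregation over multiple detections described in the paper is not implemented here.

```python
# Basic brightness transfer function via cumulative-histogram matching:
# for each brightness level in camera A, find the camera-B level whose
# cumulative frequency is closest.
import numpy as np

def btf(channel_a: np.ndarray, channel_b: np.ndarray) -> np.ndarray:
    """Return a 256-entry lookup table mapping camera-A brightness to camera-B."""
    hist_a, _ = np.histogram(channel_a, bins=256, range=(0, 256))
    hist_b, _ = np.histogram(channel_b, bins=256, range=(0, 256))
    cdf_a = np.cumsum(hist_a) / hist_a.sum()
    cdf_b = np.cumsum(hist_b) / hist_b.sum()
    return np.searchsorted(cdf_b, cdf_a, side="left").clip(0, 255).astype(np.uint8)

# Example: camera B renders the same person ~30 levels brighter on average.
rng = np.random.default_rng(2)
a = rng.normal(100, 20, 20_000).clip(0, 255)
b = rng.normal(130, 20, 20_000).clip(0, 255)
lut = btf(a, b)
print(lut[100])  # roughly 130 under this synthetic brightness shift
```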

12 pages, 2874 KiB  
Article
A New Binarization Algorithm for Historical Documents
by Marcos Almeida, Rafael Dueire Lins, Rodrigo Bernardino, Darlisson Jesus and Bruno Lima
J. Imaging 2018, 4(2), 27; https://doi.org/10.3390/jimaging4020027 - 23 Jan 2018
Cited by 14 | Viewed by 7730
Abstract
Monochromatic documents require far less network bandwidth for transmission and far less storage space than their color or even grayscale equivalents. The binarization of historical documents is far more complex than that of recent ones, as paper aging, color, texture, translucidity, stains, back-to-front interference, the kind and color of ink used in handwriting, the printing process, the digitization process, etc. are some of the factors that affect binarization. This article presents a new binarization algorithm for historical documents. The proposed global filter is performed in four steps: filtering the image using a bilateral filter; splitting the image into its RGB components; decision-making for each RGB channel based on an adaptive binarization method inspired by Otsu's method, with a choice of the threshold level; and classification of the binarized images to decide which of the RGB components best preserved the document information in the foreground. The quantitative and qualitative assessment made against 23 binarization algorithms on three sets of "real world" documents showed very good results. Full article
(This article belongs to the Special Issue Document Image Processing)
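As a rough illustration of the pipeline outlined above, here is a minimal sketch using OpenCV (an assumed simplification, not the authors' algorithm): bilateral filtering, per-channel Otsu thresholding, and a placeholder channel-selection heuristic standing in for the paper's classification step; the function name, parameter values and file paths are hypothetical.

```python
# Simplified binarization pipeline: bilateral filter, split into channels,
# Otsu threshold per channel, then pick one candidate with a crude heuristic.
import cv2
import numpy as np

def binarize(image_bgr: np.ndarray) -> np.ndarray:
    smoothed = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    candidates = []
    for channel in cv2.split(smoothed):                      # B, G, R channels
        _, binary = cv2.threshold(channel, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        candidates.append(binary)
    # Placeholder selection (the paper classifies the binarized images instead):
    # keep the candidate with the smallest black-pixel area, i.e. the one that
    # carries the least background noise into the foreground.
    black_fraction = [(c == 0).mean() for c in candidates]
    return candidates[int(np.argmin(black_fraction))]

# Usage (paths are hypothetical):
# doc = cv2.imread("historical_document.png")
# cv2.imwrite("binarized.png", binarize(doc))
```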

9 pages, 3729 KiB  
Article
Studies of Ancient Russian Cultural Objects Using the Neutron Tomography Method
by Sergey Kichanov, Irina Saprykina, Denis Kozlenko, Kuanysh Nazarov, Evgenii Lukin, Anton Rutkauskas and Boris Savenko
J. Imaging 2018, 4(2), 25; https://doi.org/10.3390/jimaging4020025 - 23 Jan 2018
Cited by 16 | Viewed by 6120
Abstract
Neutron radiography and tomography is a non-destructive method that provides detailed information about the internal structure of cultural heritage objects. The differences in the neutron attenuation coefficients of constituent elements of the studied objects, as well as the application of modern mathematical algorithms to carry out three-dimensional imaging data analysis, allow one to obtain unique information about the spatial distribution of different phases, the presence of internal defects, or the degree of structural degradation inside valuable cultural objects. The results of the neutron studies of several archaeological objects related to different epochs of the Russian history are reported in order to demonstrate the opportunities provided by the neutron tomography method. The obtained 3D structural volume data, as well as the results of the corresponding data analysis, are presented. Full article
(This article belongs to the Special Issue Neutron Imaging)
