Table of Contents

J. Imaging, Volume 3, Issue 4 (December 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-27
Research

Open Access Article Towards a Novel Approach for Tumor Volume Quantification
J. Imaging 2017, 3(4), 41; doi:10.3390/jimaging3040041
Received: 18 July 2017 / Revised: 17 September 2017 / Accepted: 21 September 2017 / Published: 27 September 2017
PDF Full-text (759 KB) | HTML Full-text | XML Full-text
Abstract
In medical image processing, evaluating variations in lesion volume plays a major role in many medical applications. It helps radiologists follow up with patients and examine the effects of therapy. Several approaches have been proposed to meet medical expectations, and the present work comes within this context. We present a new approach based on the local dissimilarity volume (LDV), a 3D representation of the local dissimilarity map (LDM). This map is a useful means of comparing two images, offering a localization of information. We proved the effectiveness of this method (LDV) compared to the medical techniques used by radiologists. Simulation results show that lesion volume can be quantified using the LDV method, which is an efficient way to calculate and localize the volume variation of anomalies. It allows time savings with the complete satisfaction of an expert during medical treatment. Full article
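The abstract does not spell out the LDM formula; a minimal sketch of the classical local dissimilarity map for binary images (the pixel-wise difference weighted by a distance-transform term) is given below, assuming the authors build on this standard definition. The brute-force distance computation is for clarity only.

```python
import numpy as np

def dist_to_set(mask):
    """Euclidean distance from every pixel to the nearest foreground
    (True) pixel of `mask`. Brute force for clarity; a real
    implementation would use a fast distance transform."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(h, w)

def local_dissimilarity_map(f, g):
    """Classical LDM for two binary images:
    LDM(x) = |f(x) - g(x)| * max(d(x, F), d(x, G)).
    The 3D volume (LDV) in the paper stacks such maps over slices
    (an assumption: the authors' exact formulation may differ)."""
    f = np.asarray(f, bool)
    g = np.asarray(g, bool)
    diff = np.abs(f.astype(float) - g.astype(float))
    return diff * np.maximum(dist_to_set(f), dist_to_set(g))
```

The map is zero wherever the two images agree, and grows with the local distance between mismatched structures, which is what localizes the volume variation.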
(This article belongs to the Special Issue Nanoparticles and Medical Imaging for Image Guided Medicine)
Figures

Figure 1

Open Access Article Monitoring of the Nirano Mud Volcanoes Regional Natural Reserve (North Italy) using Unmanned Aerial Vehicles and Terrestrial Laser Scanning
J. Imaging 2017, 3(4), 42; doi:10.3390/jimaging3040042
Received: 27 June 2017 / Revised: 5 September 2017 / Accepted: 13 September 2017 / Published: 30 September 2017
PDF Full-text (36587 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In recent years, measurement instruments and techniques for three-dimensional mapping such as Terrestrial Laser Scanning (TLS) and photogrammetry from Unmanned Aerial Vehicles (UAV) have been increasingly used to monitor topographic changes in particular geological features such as volcanic areas. In addition, topographic instruments such as the Total Station Theodolite (TST) and GPS receivers can be used to obtain precise elevation and coordinate data by measuring fixed points both inside and outside the area affected by volcanic activity. In this study, the integration of these instruments has made it possible to monitor the variations in height of the extrusive edifices within the mud volcano field of the Nirano Regional Natural Reserve (Northern Italy), as well as to study the mechanism of micro-fracturing and the evolution of mud flows and volcanic cones with very high accuracy through 3D point cloud surface analysis and digitization. The large amount of data collected was also analysed to derive morphological information about mud-cracks and surface roughness. This contribution focuses on the methods and analyses performed using measurement instruments such as TLS and UAV to study and monitor the main volcanic complexes of the Nirano Natural Reserve as part of a research project, organized by the University of Modena and Reggio Emilia in collaboration with the Municipality of Fiorano Modenese, which also involves other studies addressing gas and acoustic measurements and mineralogical and paleontological analyses. Full article
(This article belongs to the Special Issue 3D Imaging)

Open Access Article Color Texture Image Retrieval Based on Local Extrema Features and Riemannian Distance
J. Imaging 2017, 3(4), 43; doi:10.3390/jimaging3040043
Received: 28 August 2017 / Revised: 2 October 2017 / Accepted: 5 October 2017 / Published: 10 October 2017
PDF Full-text (9074 KB) | HTML Full-text | XML Full-text
Abstract
A novel, efficient method for content-based image retrieval (CBIR) is developed in this paper using both texture and color features. Our motivation is to represent and characterize an input image by a set of local descriptors extracted from characteristic points (i.e., keypoints) within the image. The dissimilarity between images is then calculated from the geometric distance between the topological feature spaces (i.e., manifolds) formed by the sets of local descriptors generated from each image in the database. In this work, we propose to extract and use the local extrema pixels as our feature points. The so-called local extrema-based descriptor (LED) is then generated for each keypoint by integrating the color, spatial and gradient information captured by its nearest local extrema. Hence, each image is encoded by an LED feature point cloud, and Riemannian distances between these point clouds enable us to tackle CBIR. Experiments performed with the proposed approach on several color texture databases, including Vistex, STex, colored Brodatz, USPtex and Outex TC-00013, provide very efficient and competitive results compared to state-of-the-art methods. Full article
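As a hedged illustration of a Riemannian distance between feature sets: one common construction (an assumption here, not necessarily the paper's exact one) summarizes each set of descriptors by its covariance matrix and measures the affine-invariant geodesic distance on the manifold of symmetric positive-definite (SPD) matrices:

```python
import numpy as np

def spd_geodesic_distance(A, B):
    """Affine-invariant Riemannian distance between two SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
            = sqrt(sum_i log^2(lambda_i)),
    where lambda_i are the eigenvalues of A^{-1/2} B A^{-1/2}."""
    w, V = np.linalg.eigh(A)                       # A = V diag(w) V^T
    A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = A_inv_sqrt @ B @ A_inv_sqrt                # symmetric SPD
    lam = np.linalg.eigvalsh(M)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

The distance is zero only when the two covariances coincide and is invariant to any invertible linear change of the descriptor coordinates, which makes it a natural choice for comparing descriptor point clouds.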

Open Access Article Baseline Fusion for Image and Pattern Recognition—What Not to Do (and How to Do Better)
J. Imaging 2017, 3(4), 44; doi:10.3390/jimaging3040044
Received: 18 July 2017 / Revised: 30 September 2017 / Accepted: 2 October 2017 / Published: 11 October 2017
PDF Full-text (1554 KB) | HTML Full-text | XML Full-text
Abstract
The ever-increasing demand for reliable inference capable of handling the unpredictable challenges of practical application in the real world has made research on information fusion of major importance; indeed, this challenge is pervasive across a whole range of image understanding tasks. In the development of the most common type—score-level fusion algorithms—it is virtually universally desirable to have as a reference starting point a simple and universally sound baseline benchmark to which newly developed approaches can be compared. One of the most pervasively used methods is weighted linear fusion. It has cemented itself as the default off-the-shelf baseline owing to its simplicity of implementation, its interpretability, and its surprisingly competitive performance across a wide range of application domains and information source types. In this paper I argue that despite this track record, weighted linear fusion is not a good baseline, on the grounds that there is an equally simple and interpretable alternative—namely quadratic mean-based fusion—which is theoretically more principled and more successful in practice. I argue the former from first principles and demonstrate the latter using a series of experiments on a diverse set of fusion problems: classification using synthetically generated data, computer vision-based object recognition, arrhythmia detection, and fatality prediction in motor vehicle accidents. On all of the aforementioned problems and in all instances, the proposed fusion approach exhibits superior performance over linear fusion, often increasing class separation by several orders of magnitude. Full article
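The two fusion rules contrasted in the abstract can be sketched as follows (a minimal illustration with normalized weights; the paper's exact weighting scheme is not given in the abstract):

```python
import numpy as np

def linear_fusion(scores, weights):
    """Standard weighted-linear (weighted arithmetic mean) score fusion."""
    w = np.asarray(weights, float)
    s = np.asarray(scores, float)
    return np.dot(s, w) / w.sum()

def quadratic_mean_fusion(scores, weights):
    """Quadratic-mean (RMS) fusion: fuse the squared scores
    linearly, then take the square root."""
    w = np.asarray(weights, float)
    s = np.asarray(scores, float)
    return np.sqrt(np.dot(s ** 2, w) / w.sum())
```

By the power-mean inequality, the quadratic mean is never smaller than the arithmetic mean of the same scores, so it reacts more strongly to a single confident source.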
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

Open Access Article The Accuracy of 3D Optical Reconstruction and Additive Manufacturing Processes in Reproducing Detailed Subject-Specific Anatomy
J. Imaging 2017, 3(4), 45; doi:10.3390/jimaging3040045
Received: 31 August 2017 / Revised: 27 September 2017 / Accepted: 6 October 2017 / Published: 12 October 2017
PDF Full-text (6506 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
3D reconstruction and 3D printing of subject-specific anatomy are promising technologies for supporting clinicians in visualising disease progression and planning surgical intervention. In this context, the 3D model is typically obtained from segmentation of magnetic resonance imaging (MRI), computed tomography (CT) or echocardiography images. Although these modalities allow imaging of the tissues in vivo, assessment of the quality of the reconstruction is limited by the lack of a reference geometry, as the subject-specific anatomy is unknown prior to image acquisition. In this work, an optical method based on 3D digital image correlation (3D-DIC) techniques is used to reconstruct the shape of the surface of an ex vivo porcine heart. This technique requires two digital charge-coupled device (CCD) cameras to provide full-field shape measurements and to generate a standard tessellation language (STL) file of the sample surface. The aim of this work was to quantify the error of 3D-DIC shape measurements using the additive manufacturing process. The limitations of 3D-printed object resolution and the discrepancy between the reconstructed surface of cardiac soft tissue and a 3D-printed model of the same surface were evaluated. The results obtained demonstrate the ability of the 3D-DIC technique to reconstruct localised and detailed features on the cardiac surface with sub-millimeter accuracy. Full article
(This article belongs to the Special Issue Three-Dimensional Printing and Imaging)

Open Access Article Computationally Efficient Robust Color Image Watermarking Using Fast Walsh Hadamard Transform
J. Imaging 2017, 3(4), 46; doi:10.3390/jimaging3040046
Received: 8 September 2017 / Revised: 3 October 2017 / Accepted: 10 October 2017 / Published: 13 October 2017
PDF Full-text (14461 KB) | HTML Full-text | XML Full-text
Abstract
A watermark is a copy-deterrence mechanism embedded in a multimedia signal that is to be protected from hacking and piracy, in such a way that it can later be extracted from the watermarked signal by the decoder. Watermarking can be used in various applications such as authentication, video indexing, copyright protection and access control. In this paper, a new CDMA (Code Division Multiple Access)-based robust watermarking algorithm using a customized 8 × 8 Walsh Hadamard Transform is proposed for color images, and a detailed performance and robustness analysis has been performed. The paper studies in detail the effect of spreading code length, the number of spreading codes and the type of spreading codes on the performance of the watermarking system. Compared to existing techniques, the proposed scheme is computationally more efficient and consumes much less execution time. Furthermore, the proposed scheme is robust and survives most of the common signal processing and geometric attacks. Full article
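The paper's customized 8 × 8 transform is not specified in the abstract, but the generic fast Walsh-Hadamard transform (the butterfly structure underlying such schemes, analogous to the FFT) can be sketched as:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform in natural (Hadamard) ordering,
    unnormalized. Input length must be a power of two. Runs the
    classic in-place butterfly: O(n log n) additions/subtractions."""
    a = np.asarray(a, float).copy()
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # 2-point butterfly
        h *= 2
    return a
```

Applying the transform twice recovers the input scaled by n, since the Hadamard matrix satisfies H·H = n·I; this is why the same routine serves for both embedding and extraction domains.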

Open Access Article Detection and Classification of Land Crude Oil Spills Using Color Segmentation and Texture Analysis
J. Imaging 2017, 3(4), 47; doi:10.3390/jimaging3040047
Received: 18 August 2017 / Revised: 9 September 2017 / Accepted: 13 September 2017 / Published: 19 October 2017
PDF Full-text (6086 KB) | HTML Full-text | XML Full-text
Abstract
Crude oil spills have negative consequences for the economy, environment, health and society in which they occur, and the severity of the consequences depends on how quickly the spills are detected once they begin. Several methods have been employed for spill detection, including real-time remote surveillance by aircraft with surveillance teams. Other methods employ various sensors, including visible-light sensors. This paper presents an algorithm to automatically detect the presence of crude oil spills in images acquired using visible-light sensors. Images of crude oil spills used in the development of the algorithm were obtained from the Shell Petroleum Development Company (SPDC) Nigeria website. The major steps of the detection algorithm are image preprocessing, crude oil color segmentation, sky elimination segmentation, Region of Interest (ROI) extraction, ROI texture feature extraction, and ROI texture feature analysis and classification. The algorithm was developed using 25 sample images containing crude oil spills and demonstrated a sensitivity of 92% and an FPI of 1.43. The algorithm was further tested on a set of 56 case images and demonstrated a sensitivity of 82% and an FPI of 0.66. This algorithm can be incorporated into spill detection systems that utilize visible-light sensors for early detection of crude oil spills. Full article

Open Access Feature Paper Article Exemplar-Based Face Colorization Using Image Morphing
J. Imaging 2017, 3(4), 48; doi:10.3390/jimaging3040048
Received: 30 May 2017 / Revised: 18 September 2017 / Accepted: 19 October 2017 / Published: 31 October 2017
PDF Full-text (17606 KB) | HTML Full-text | XML Full-text
Abstract
Colorization of gray-scale images relies on prior color information. Exemplar-based methods use a color image as the source of such information; the colors of the source image are then transferred to the gray-scale target image. In the literature, this transfer is mainly guided by texture descriptors. Face images usually contain little texture, so the common approaches frequently fail. In this paper, we propose a new method that takes the geometric structure of the images rather than their texture into account, so that it is more reliable for faces. Our approach is based on image morphing and relies on the YUV color space. First, a correspondence mapping between the luminance Y channel of the color source image and the gray-scale target image is computed. This mapping is based on the time-discrete metamorphosis model suggested by Berkels, Effland and Rumpf. We provide a new finite difference approach for the numerical computation of the mapping. Then, the chrominance U,V channels of the source image are transferred via this correspondence map to the target image. A possible postprocessing step by a variational model is developed to further improve the results. To preserve the contrast, special attention is paid to making the postprocessing unbiased. Our numerical experiments show that our morphing-based approach clearly outperforms state-of-the-art methods. Full article
(This article belongs to the Special Issue Color Image Processing)

Open Access Article Preliminary Tests and Results Concerning Integration of Sentinel-2 and Landsat-8 OLI for Crop Monitoring
J. Imaging 2017, 3(4), 49; doi:10.3390/jimaging3040049
Received: 12 September 2017 / Revised: 2 November 2017 / Accepted: 3 November 2017 / Published: 5 November 2017
PDF Full-text (5563 KB) | HTML Full-text | XML Full-text
Abstract
The Sentinel-2 data of the European Space Agency were recently made available for free. Their technical features suggest synergies with NASA's (National Aeronautics and Space Administration) Landsat-8 dataset, especially in the agricultural context, where observations should be as dense as possible to give a rather complete description of the macro-phenology of crops. In this work, some preliminary results are presented concerning the geometric and spectral consistency of the two compared datasets. Tests were performed specifically focusing on the agriculture-devoted part of the Piemonte Region (NW Italy). The geometric consistency of the Sentinel-2 and Landsat-8 datasets was tested "absolutely" (with respect to a selected reference frame) and "relatively" (one with respect to the other) by selecting, respectively, 160 and 100 well-distributed check points. Spectral differences affecting at-the-ground reflectance were tested after image calibration performed by the dark object subtraction approach. A special focus was on differences affecting the derivable NDVI and NDWI spectral indices, these being the most widely used in the agricultural remote sensing application context. The results are encouraging and suggest that this approach can successfully enter the ordinary remote sensing-supported precision farming workflow. Full article
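The two spectral indices named above have standard definitions; a minimal sketch follows (note that NDWI has two common variants, and Gao's NIR/SWIR form is assumed here, as is typical in crop monitoring):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index:
    NDVI = (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    """Normalized Difference Water Index (Gao's variant):
    NDWI = (NIR - SWIR) / (NIR + SWIR)."""
    nir = np.asarray(nir, float)
    swir = np.asarray(swir, float)
    return (nir - swir) / (nir + swir)
```

Both indices are ratios of reflectance differences, which is precisely why the abstract stresses at-the-ground reflectance calibration: uncorrected band offsets between the two sensors would bias the derived index values.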
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)

Open Access Article Sensing Light with LEDs: Performance Evaluation for IoT Applications
J. Imaging 2017, 3(4), 50; doi:10.3390/jimaging3040050
Received: 30 September 2017 / Revised: 5 November 2017 / Accepted: 5 November 2017 / Published: 12 November 2017
PDF Full-text (1135 KB) | HTML Full-text | XML Full-text
Abstract
The Internet of Things includes all the technologies that allow everyday objects to be connected to the Internet in order to gather measurements of physical quantities and to interact with the surrounding environment through telecommunication devices with embedded sensing and actuating units. The measurements carried out with different LEDs demonstrate the possibility of using these devices both as transmitters and as optical sensors, in addition to their ability to discriminate incident wavelengths, making them bi-directional transceivers for Internet of Things (IoT) applications that are particularly suitable in the context of Visible Light Communication (VLC). In particular, a methodological tool is provided for selecting LED sensors for VLC applications. Full article
(This article belongs to the Special Issue Imaging in Internet of Things)

Open Access Article Alpha Channel Fragile Watermarking for Color Image Integrity Protection
J. Imaging 2017, 3(4), 53; doi:10.3390/jimaging3040053
Received: 11 October 2017 / Revised: 6 November 2017 / Accepted: 17 November 2017 / Published: 23 November 2017
PDF Full-text (2535 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a fragile watermarking algorithm for the protection of the integrity of color images with an alpha channel. The system is able to identify modified areas with very high probability, even under small color or transparency changes. The main characteristic of the algorithm is that the watermark is embedded by modifying the alpha channel, leaving the color channels untouched and introducing a very small error with respect to the host image. As a consequence, the resulting watermarked images have a very high peak signal-to-noise ratio. The security of the algorithm is based on a secret key defining the embedding space in which the watermark is inserted by means of the Karhunen–Loève transform (KLT) and a genetic algorithm (GA). Its high sensitivity to modifications is shown, proving the security of the whole system. Full article

Open Access Article Android-Based Verification System for Banknotes
J. Imaging 2017, 3(4), 54; doi:10.3390/jimaging3040054
Received: 10 October 2017 / Revised: 17 November 2017 / Accepted: 20 November 2017 / Published: 24 November 2017
PDF Full-text (4676 KB) | HTML Full-text | XML Full-text
Abstract
With advancements in imaging technologies for scanning and printing, the production of counterfeit banknotes has become cheaper, easier, and more common. The proliferation of counterfeit banknotes causes losses to banks, traders, and individuals involved in financial transactions. Hence, efficient and reliable techniques for the detection of counterfeit banknotes are inevitably needed. With the availability of powerful smartphones, it has become possible to perform complex computations and image processing tasks on these phones. In addition, the number of smartphone users has grown greatly and continues to increase, a great motivating factor for researchers and developers to propose innovative mobile-based solutions. In this study, a novel technique for the verification of Pakistani banknotes is developed, targeting smartphones with the Android platform. The proposed technique is based on statistical features and the surface roughness of a banknote, representing different properties of the banknote such as paper material, printing ink, paper quality, and surface roughness. The selection of these features is motivated by X-ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) analysis of genuine and counterfeit banknotes. In this regard, two important areas of the banknote, i.e., the serial number and flag portions, were considered, since these portions showed the maximum difference between genuine and counterfeit banknotes. The analysis confirmed that genuine and counterfeit banknotes differ greatly in terms of the printing process, the ingredients used in the preparation of the banknotes, and the quality of the paper. After extracting the discriminative set of features, a support vector machine is used for classification. The experimental results confirm the high accuracy of the proposed technique. Full article

Open Access Article Modelling of Orthogonal Craniofacial Profiles
J. Imaging 2017, 3(4), 55; doi:10.3390/jimaging3040055
Received: 20 October 2017 / Revised: 18 November 2017 / Accepted: 23 November 2017 / Published: 30 November 2017
PDF Full-text (2056 KB) | HTML Full-text | XML Full-text
Abstract
We present a fully-automatic image processing pipeline to build a set of 2D morphable models of three craniofacial profiles from orthogonal viewpoints (side view, front view and top view) using a set of 3D head surface images. Subjects in this dataset wear a close-fitting latex cap to reveal the overall skull shape. Texture-based 3D pose normalization and facial landmarking are applied to extract the profiles from 3D raw scans. Fully-automatic profile annotation, subdivision and registration methods are used to establish dense correspondence among sagittal profiles. The collection of sagittal profiles in dense correspondence is scaled and aligned using Generalised Procrustes Analysis (GPA) before applying principal component analysis to generate a morphable model. Additionally, we propose a new alternative alignment called the Ellipse Centre Nasion (ECN) method. Our model is used in a case study of craniosynostosis intervention outcome evaluation, and the evaluation reveals that the proposed model achieves state-of-the-art results. We make publicly available both the morphable models and the profile dataset used to construct them. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article Rapid Interactive and Intuitive Segmentation of 3D Medical Images Using Radial Basis Function Interpolation
J. Imaging 2017, 3(4), 56; doi:10.3390/jimaging3040056
Received: 18 October 2017 / Revised: 25 November 2017 / Accepted: 28 November 2017 / Published: 30 November 2017
PDF Full-text (2167 KB) | HTML Full-text | XML Full-text
Abstract
Segmentation is one of the most important parts of medical image analysis. Manual segmentation is cumbersome, time-consuming, and prone to inter-observer variability. Fully automatic segmentation approaches require a large amount of labeled training data and may fail in difficult or abnormal cases. In this work, we propose a new method for 2D segmentation of individual slices and 3D interpolation of the segmented slices. The Smart Brush functionality quickly segments the region of interest in a few 2D slices. Given these annotated slices, our adapted formulation of Hermite radial basis functions reconstructs the 3D surface. Effective interactions with a smaller number of equations accelerate the performance, and therefore real-time, intuitive, interactive segmentation of 3D objects can be supported effectively. The proposed method is evaluated on 12 clinical 3D magnetic resonance imaging data sets and compared to gold-standard annotations of the left ventricle from a clinical expert. The automatic evaluation of the 2D Smart Brush resulted in an average Dice coefficient of 0.88 ± 0.09 for the individual slices. For the 3D interpolation using Hermite radial basis functions, an average Dice coefficient of 0.94 ± 0.02 is achieved. Full article
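The Dice coefficient reported in the evaluation has a standard definition for binary masks, sketched below:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity for two binary masks:
    Dice = 2 |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint)
    to 1 (identical)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A value of 0.94, as reported for the 3D interpolation, means the overlap with the expert annotation is nearly complete relative to the combined mask sizes.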
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)

Open Access Article Olive Plantation Mapping on a Sub-Tree Scale with Object-Based Image Analysis of Multispectral UAV Data; Operational Potential in Tree Stress Monitoring
J. Imaging 2017, 3(4), 57; doi:10.3390/jimaging3040057
Received: 12 September 2017 / Revised: 26 November 2017 / Accepted: 30 November 2017 / Published: 4 December 2017
PDF Full-text (3843 KB) | HTML Full-text | XML Full-text
Abstract
The objective of this study was to develop a methodology for mapping olive plantations on a sub-tree scale. For this purpose, multispectral imagery of an almost 60-ha plantation in Greece was acquired with an Unmanned Aerial Vehicle. Objects smaller than the tree crown were produced with image segmentation. Three image features were identified as optimal for discriminating olive trees from other objects in the plantation in a rule-based classification algorithm. After limited manual corrections, the final output was validated with an overall accuracy of 93%. The overall processing chain can be considered suitable for operational monitoring of olive trees for potential stresses. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)

Open Access Article Neutron Imaging of Laser Melted SS316 Test Objects with Spatially Resolved Small Angle Neutron Scattering
J. Imaging 2017, 3(4), 58; doi:10.3390/jimaging3040058
Received: 31 October 2017 / Revised: 30 November 2017 / Accepted: 1 December 2017 / Published: 5 December 2017
PDF Full-text (1515 KB) | HTML Full-text | XML Full-text
Abstract
A novel neutron far-field interferometer is explored for sub-micron porosity detection in laser-sintered stainless steel alloy 316 (SS316) test objects. The results shown are images and volumes of the first quantitative neutron dark-field tomography at various autocorrelation lengths, ξ. In this preliminary work, the beam-defining slits were adjusted to an uncalibrated opening of 0.5 mm horizontal and 5 cm vertical; the images are blurred along the vertical direction. In spite of the blurred attenuation images, the dark-field images reveal structural information at the micron scale. The topics explored include: the accessible size range of defects, potentially 338 nm to 4.5 μm, that can be imaged with the small angle scattering images; the spatial resolution of the attenuation image; the maximum sample dimensions compatible with interferometry optics and neutron attenuation; the procedure for reduction of the raw interferogram images into attenuation, differential phase contrast, and small angle scattering (dark-field) images; and the role of neutron far-field interferometry in additive manufacturing to assess sub-micron porosity. Full article
(This article belongs to the Special Issue Neutron Imaging)

Open Access Article Preliminary Results of Clover and Grass Coverage and Total Dry Matter Estimation in Clover-Grass Crops Using Image Analysis
J. Imaging 2017, 3(4), 59; doi:10.3390/jimaging3040059
Received: 22 October 2017 / Revised: 21 November 2017 / Accepted: 30 November 2017 / Published: 6 December 2017
PDF Full-text (7043 KB) | HTML Full-text | XML Full-text
Abstract
The clover-grass ratio is an important factor in composing feed ratios for livestock. Cameras in the field allow the user to estimate the clover-grass ratio using image analysis; however, current methods assume the total dry matter is known. This paper presents the preliminary results of an image analysis method for non-destructively estimating the total dry matter of clover-grass. The presented method includes three steps: (1) classification of image illumination using a histogram of the difference between excess green and excess red; (2) segmentation of clover and grass using edge detection and morphology; and (3) estimation of total dry matter using grass coverage derived from the segmentation and climate parameters. The method was developed and evaluated on images captured in a clover-grass plot experiment during the spring growing season. The preliminary results are promising and show a high correlation between the image-based total dry matter estimate and the harvested dry matter (R² = 0.93), with an RMSE of 210 kg ha⁻¹. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
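Step (1) of the pipeline hinges on the difference between the excess-green and excess-red indices. A minimal per-pixel sketch, assuming the standard definitions ExG = 2g − r − b and ExR = 1.4r − g on chromaticity-normalised channels (the paper's exact normalisation is not stated in the abstract):

```python
def exg_minus_exr(r: float, g: float, b: float) -> float:
    """Excess-green minus excess-red index for one RGB pixel.

    Assumes the standard definitions ExG = 2g - r - b and ExR = 1.4r - g
    on chromaticity-normalised channels (r + g + b = 1).
    """
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    exg = 2.0 * gn - rn - bn   # excess green
    exr = 1.4 * rn - gn        # excess red
    return exg - exr
```

Vegetation pixels yield positive values and soil pixels negative ones, so a histogram of this quantity over the whole image is a plausible feature for the illumination classification in step (1).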
Open AccessArticle Performance of the Commercial PP/ZnS:Cu and PP/ZnS:Ag Scintillation Screens for Fast Neutron Imaging
J. Imaging 2017, 3(4), 60; doi:10.3390/jimaging3040060
Received: 31 October 2017 / Revised: 1 December 2017 / Accepted: 7 December 2017 / Published: 10 December 2017
PDF Full-text (2659 KB) | HTML Full-text | XML Full-text
Abstract
Fast neutron imaging has great potential as a nondestructive technique for testing large objects. The main factor limiting applications of this technique is detection technology, which offers relatively poor spatial resolution and low detection efficiency, resulting in very long exposure times. Research on the development of scintillators for fast neutron imaging is therefore of high importance. A comparison of the light output, gamma radiation sensitivity and spatial resolution of commercially available scintillator screens composed of PP/ZnS:Cu and PP/ZnS:Ag of different thicknesses is presented. The scintillators were provided by RC Tritec AG, and the tests were performed at the NECTAR facility located at the FRM II nuclear research reactor. It was shown that light output increases and spatial resolution decreases with scintillator thickness. Both compositions of the scintillating material provide similar light output, while the gamma sensitivity of PP/ZnS:Cu is significantly higher than that of PP/ZnS:Ag-based scintillators. Moreover, we report which factors should be considered when choosing a scintillator and what the limitations of the investigated types of scintillators are. Full article
(This article belongs to the Special Issue Neutron Imaging)
Open AccessArticle Epithelium and Stroma Identification in Histopathological Images Using Unsupervised and Semi-Supervised Superpixel-Based Segmentation
J. Imaging 2017, 3(4), 61; doi:10.3390/jimaging3040061
Received: 27 October 2017 / Revised: 5 December 2017 / Accepted: 6 December 2017 / Published: 11 December 2017
PDF Full-text (6923 KB) | HTML Full-text | XML Full-text
Abstract
We present superpixel-based segmentation frameworks for unsupervised and semi-supervised epithelium-stroma identification in histopathological images of oropharyngeal tissue microarrays. A superpixel segmentation algorithm is initially used to split up the image into binary regions (superpixels), whose colour features are extracted and fed into several base clustering algorithms with various parameter initializations. Two Consensus Clustering (CC) formulations are then used: Evidence Accumulation Clustering (EAC) and a voting-based consensus function. These combine the base clustering outcomes to obtain a more robust detection of tissue compartments than the base clustering methods on their own. For the voting-based function, a technique is introduced to generate consistent labellings across the base clustering results. The obtained CC result is then utilized to build a self-training Semi-Supervised Classification (SSC) model. Unlike supervised segmentation, which relies on a large number of labelled training images, our SSC approach performs quality segmentation while relying on few labelled samples. Experiments conducted on forty-five hand-annotated images of oropharyngeal cancer tissue microarrays show that (a) the CC algorithm generates more accurate and stable results than individual clustering algorithms; (b) the clustering performance of the voting-based function outperforms the existing EAC; and (c) the proposed SSC algorithm, despite being trained with only a few labelled instances, outperforms the supervised methods. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
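The voting-based consensus function requires the label-correspondence step mentioned in the abstract, since cluster ids from different base clusterings are arbitrary. A simplified sketch using greedy maximum-overlap alignment followed by a per-item majority vote (the authors' actual correspondence technique may differ):

```python
from collections import Counter

def align_labels(reference, labelling):
    """Relabel `labelling` so its cluster ids best match `reference`,
    by greedy maximum-overlap assignment (a simplification of the
    label-correspondence step described in the abstract)."""
    overlap = Counter()
    for ref, lab in zip(reference, labelling):
        overlap[(lab, ref)] += 1
    mapping, used = {}, set()
    for (lab, ref), _ in overlap.most_common():
        if lab not in mapping and ref not in used:
            mapping[lab] = ref
            used.add(ref)
    return [mapping.get(lab, lab) for lab in labelling]

def vote_consensus(labellings):
    """Majority vote per item across label-aligned base clusterings."""
    reference = labellings[0]
    aligned = [reference] + [align_labels(reference, l) for l in labellings[1:]]
    return [Counter(col).most_common(1)[0][0] for col in zip(*aligned)]
```

For example, the base clusterings [0,0,1,1,1], [1,1,0,0,0] and [0,0,1,1,0] agree after alignment, and the vote yields [0,0,1,1,1].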
Open AccessArticle DocCreator: A New Software for Creating Synthetic Ground-Truthed Document Images
J. Imaging 2017, 3(4), 62; doi:10.3390/jimaging3040062
Received: 30 October 2017 / Revised: 29 November 2017 / Accepted: 5 December 2017 / Published: 11 December 2017
PDF Full-text (25492 KB) | HTML Full-text | XML Full-text
Abstract
Most digital libraries that provide user-friendly interfaces, enabling quick and intuitive access to their resources, are based on Document Image Analysis and Recognition (DIAR) methods. Such DIAR methods need ground-truthed document images to be evaluated and compared and, in some cases, trained. Especially with the advent of deep learning-based approaches, the required size of annotated document datasets seems to be ever-growing. Manually annotating real documents has many drawbacks, which often leads to small reliably annotated datasets. In order to circumvent those drawbacks and enable the generation of massive ground-truthed data with high variability, we present DocCreator, a multi-platform, open-source software tool able to create many synthetic document images with controlled ground truth. DocCreator has been used in various experiments, demonstrating the value of such synthetic images in enriching the training stage of DIAR tools. Full article
(This article belongs to the Special Issue Document Image Processing)
Open AccessFeature PaperArticle Mereotopological Correction of Segmentation Errors in Histological Imaging
J. Imaging 2017, 3(4), 63; doi:10.3390/jimaging3040063
Received: 30 October 2017 / Revised: 5 December 2017 / Accepted: 6 December 2017 / Published: 12 December 2017
PDF Full-text (1337 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we describe mereotopological methods to programmatically correct image segmentation errors, in particular those that fail to fulfil expected spatial relations in digitised histological scenes. The proposed approach exploits a spatial logic called discrete mereotopology to integrate a number of qualitative spatial reasoning and constraint satisfaction methods into imaging procedures. Eight mereotopological relations defined on binary region pairs are represented as nodes in a set of 20 directed graphs, where the node-to-node graph edges encode the possible transitions between the spatial relations after set-theoretic and discrete topological operations on the regions are applied. The graphs allow one to identify the sequences of operations that, applied to regions in a given relation, yield a target relation, and thereby to resegment an image that fails to conform to a valid histological model into one that does. Examples of the methods are presented using images of H&E-stained human carcinoma cell line cultures. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
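The eight relations on binary region pairs correspond to the RCC8 set: disconnected (DC), externally connected (EC), partial overlap (PO), tangential and non-tangential proper part (TPP, NTPP), their inverses, and equality (EQ). A toy classifier for pixel regions, approximating the discrete-mereotopology definitions with a 4-neighbour dilation (an illustrative simplification, not the paper's formalism):

```python
def dilate(region):
    """4-neighbour dilation of a set of (x, y) pixels."""
    out = set(region)
    for x, y in region:
        out |= {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    return out

def rcc8(a, b):
    """Classify the RCC8 relation between two pixel regions.
    Closures/interiors are approximated via 4-neighbour dilation."""
    a, b = set(a), set(b)
    if a == b:
        return "EQ"
    if not (a & b):
        # disjoint: externally connected iff their closures touch
        return "EC" if dilate(a) & b else "DC"
    if a < b:
        # proper part: tangential iff a's closure leaks outside b
        return "TPP" if dilate(a) - b else "NTPP"
    if b < a:
        return "TPPi" if dilate(b) - a else "NTPPi"
    return "PO"
```

Sequences of set-theoretic and morphological operations then move a region pair from one such relation node to another, which is the transition structure the paper's 20 directed graphs encode.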
Open AccessArticle Deep Learning vs. Conventional Machine Learning: Pilot Study of WMH Segmentation in Brain MRI with Absence or Mild Vascular Pathology
J. Imaging 2017, 3(4), 66; doi:10.3390/jimaging3040066
Received: 7 November 2017 / Revised: 7 December 2017 / Accepted: 12 December 2017 / Published: 14 December 2017
PDF Full-text (3128 KB) | HTML Full-text | XML Full-text
Abstract
In the wake of the use of deep learning algorithms in medical image analysis, we compared the performance of deep learning algorithms, namely the deep Boltzmann machine (DBM), convolutional encoder network (CEN) and patch-wise convolutional neural network (patch-CNN), with two conventional machine learning schemes, support vector machine (SVM) and random forest (RF), for white matter hyperintensities (WMH) segmentation on brain MRI with mild or no vascular pathology. We also compared all these approaches with the lesion growth algorithm (LGA) from the public Lesion Segmentation Tool toolbox. We used a dataset comprising 60 MRI scans from 20 subjects in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, each scanned once a year over three consecutive years. Spatial agreement score, receiver operating characteristic and precision-recall performance curves, volume disagreement score, agreement with intra-/inter-observer reliability measurements and visual evaluation were used to find the best configuration of each learning algorithm for WMH segmentation. By using optimum threshold values for the probabilistic output of each algorithm to produce binary WMH masks, we found that SVM and RF produced good results for medium to very large WMH burden, but the deep learning algorithms generally performed better than the conventional ones in most evaluations. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
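The "spatial agreement score" in segmentation comparisons of this kind is commonly the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|); that this exact measure is the one used here is an assumption. For binary masks given as coordinate sets:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    sets of voxel coordinates: DSC = 2|A & B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A DSC of 1.0 means the automated WMH mask and the reference overlap exactly; 0.0 means no overlap.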
Open AccessArticle Restoration of Bi-Contrast MRI Data for Intensity Uniformity with Bayesian Coring of Co-Occurrence Statistics
J. Imaging 2017, 3(4), 67; doi:10.3390/jimaging3040067
Received: 30 August 2017 / Revised: 7 December 2017 / Accepted: 12 December 2017 / Published: 15 December 2017
PDF Full-text (1481 KB) | HTML Full-text | XML Full-text
Abstract
The reconstruction of MRI data assumes a uniform radio-frequency field. In practice, however, the radio-frequency field is inhomogeneous and leads to anatomically inconsequential intensity non-uniformities across an image. An anatomic region can be imaged with multiple contrasts reconstructed independently, each suffering from different non-uniformities. These artifacts can complicate the further automated analysis of the images. A method is presented for the joint intensity-uniformity restoration of two such images. The effect of the intensity distortion on the auto-co-occurrence statistics of each image, as well as on the joint co-occurrence statistics of the two images, is modeled and used for their non-stationary restoration, followed by back-projection to the images. Several constraints that ensure a stable restoration are also imposed. Moreover, the method considers the inevitable differences between the signal regions of the two images. The method has been evaluated extensively with BrainWeb phantom brain data as well as with brain anatomic data from the Human Connectome Project (HCP) and with data of Parkinson’s disease patients. The performance of the proposed method has been compared with that of the N4ITK tool. The proposed method increases tissue contrast at least 4.62 times more than the N4ITK tool for the BrainWeb images. The dynamic range with the N4ITK method for the same images is increased by up to +29.77%, whereas the proposed method shows a corresponding limited decrease of −1.15%, as expected. The validation has demonstrated the accuracy and stability of the proposed method and hence its ability to reduce the requirements for additional calibration scans. Full article
(This article belongs to the Special Issue Selected Papers from “MIUA 2017”)
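The joint co-occurrence statistics at the heart of the method can be illustrated with a simple joint intensity histogram of two co-registered images; the Bayesian coring and back-projection steps are beyond this sketch, and the function name and binning choices below are hypothetical:

```python
from collections import Counter

def joint_cooccurrence(img_a, img_b, bins=32, max_val=255):
    """Joint intensity co-occurrence histogram of two co-registered
    images (given as flat lists of equal length).

    A smooth multiplicative non-uniformity spreads mass along the
    ridges of this histogram, which is the kind of cue a restoration
    based on co-occurrence statistics can exploit.
    """
    hist = Counter()
    scale = bins / (max_val + 1)
    for va, vb in zip(img_a, img_b):
        hist[(int(va * scale), int(vb * scale))] += 1
    return hist
```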
Review

Jump to: Research

Open AccessReview The Academy Color Encoding System (ACES): A Professional Color-Management Framework for Production, Post-Production and Archival of Still and Motion Pictures
J. Imaging 2017, 3(4), 40; doi:10.3390/jimaging3040040
Received: 24 July 2017 / Revised: 12 September 2017 / Accepted: 13 September 2017 / Published: 21 September 2017
PDF Full-text (12308 KB) | HTML Full-text | XML Full-text
Abstract
The Academy of Motion Picture Arts and Sciences has been pivotal in the inception, design and later adoption of a vendor-agnostic and open framework for color management, the Academy Color Encoding System (ACES), targeting theatrical, TV and animation features, but also still photography and image preservation at large. For this reason, the Academy gathered an interdisciplinary group of scientists, technologists and creatives to contribute to it, so that it is scientifically sound and technically advantageous in solving practical and interoperability problems in the current film production, post-production and visual-effects (VFX) ecosystem, all while preserving and future-proofing the cinematographers’ and artists’ creative intent as its main objective. In this paper, a review of the ACES technical specifications is provided, along with the current status of the project and a recent use case: the first Italian production to embrace an end-to-end ACES pipeline. In addition, new ACES components are introduced and a discussion is started about possible uses for the long-term preservation of color imaging in video-content heritage. Full article
(This article belongs to the Special Issue Color Image Processing)
Open AccessReview Nitrogen (N) Mineral Nutrition and Imaging Sensors for Determining N Status and Requirements of Maize
J. Imaging 2017, 3(4), 51; doi:10.3390/jimaging3040051
Received: 18 June 2017 / Revised: 8 November 2017 / Accepted: 9 November 2017 / Published: 14 November 2017
PDF Full-text (237 KB) | HTML Full-text | XML Full-text
Abstract
Nitrogen (N) is one of the most limiting factors for maize (Zea mays L.) production worldwide. Over-fertilization with N may decrease yields and increase NO3 contamination of water, while low N fertilization will also decrease yields. The objective is to optimize the use of N fertilizers so as to maximize yields while preserving the environment. Knowledge of the factors affecting the mobility of N in the soil is crucial to determine ways to manage N in the field. Researchers have developed several methods to use N efficiently, relying on agronomic practices, the use of sensors and the analysis of digital images. These imaging sensors determine N requirements in plants based on changes in leaf chlorophyll and polyphenolics contents, the Normalized Difference Vegetation Index (NDVI), and the Dark Green Color Index (DGCI). Each method revealed limitations, and the scope of future research is to draw N recommendations from DGCI technology. Results showed that more effort is needed to develop tools that benefit from DGCI. Full article
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
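The DGCI mentioned above is computed from hue, saturation and brightness. The widely published formula DGCI = [(hue − 60)/60 + (1 − saturation) + (1 − brightness)]/3, with hue in degrees, is assumed in the sketch below; Python's colorsys performs the RGB-to-HSB conversion.

```python
import colorsys

def dgci(r, g, b):
    """Dark Green Color Index of an RGB pixel (0-255 channels).

    Assumes the published formula
    DGCI = ((hue - 60)/60 + (1 - sat) + (1 - bri)) / 3
    with hue in degrees; colorsys is an illustrative choice for the
    RGB-to-HSB conversion.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    return ((hue_deg - 60.0) / 60.0 + (1.0 - s) + (1.0 - v)) / 3.0
```

Darker, more saturated greens score higher, which is the cue these sensors link to leaf N status.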
Open AccessReview Neutron Imaging Facilities in a Global Context
J. Imaging 2017, 3(4), 52; doi:10.3390/jimaging3040052
Received: 30 October 2017 / Revised: 8 November 2017 / Accepted: 17 November 2017 / Published: 21 November 2017
PDF Full-text (2518 KB) | HTML Full-text | XML Full-text
Abstract
Neutron Imaging (NI) has developed over the last decades from a film-based inspection method for non-destructive observations into a powerful research tool with many new and competitive methods. The most important technical step forward has been the introduction and optimization of digital imaging detection systems. In this way, direct quantification of the transmission process became possible, the basis for all advanced methods such as tomography, phase-contrast imaging and neutron microscopy. Neutron imaging facilities need to be installed at powerful neutron sources (reactors, spallation sources, other accelerator-driven systems). High neutron intensity is best used for either the highest spatial or temporal resolution or the best image quality. Since the number of such strong sources is decreasing worldwide due to the age of the reactors, the number of NI facilities is limited. There are a few installations with pioneering new concepts and versatile options on the one hand, but also relatively new sources with only limited performance thus far. It will be a challenge to couple the two parts of the community with the aim of installing state-of-the-art equipment at the suitable beam ports and developing NI further towards a general research tool. In addition, sources with lower intensity should be equipped with modern installations so that practical work can be carried out as effectively as possible. Full article
(This article belongs to the Special Issue Neutron Imaging)
Open AccessFeature PaperReview Small Angle Scattering in Neutron Imaging—A Review
J. Imaging 2017, 3(4), 64; doi:10.3390/jimaging3040064
Received: 6 November 2017 / Revised: 6 December 2017 / Accepted: 8 December 2017 / Published: 13 December 2017
PDF Full-text (1895 KB) | HTML Full-text | XML Full-text
Abstract
Conventional neutron imaging utilizes the beam attenuation caused by scattering and absorption in the materials constituting an object in order to investigate its macroscopic inner structure. Small angle scattering has essentially no impact on such images under the geometrical conditions applied. Nevertheless, in recent years different experimental methods have been developed in neutron imaging which make it possible not only to generate contrast based on neutrons scattered to very small angles, but also to map and quantify small angle scattering with the spatial resolution of neutron imaging. This enables neutron imaging to access length scales which are not directly resolved in real space and to investigate bulk structures and processes spanning multiple length scales from centimeters to tens of nanometers. Full article
(This article belongs to the Special Issue Neutron Imaging)