
Table of Contents

J. Imaging, Volume 4, Issue 11 (November 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: In this paper, we present the use of mixed-scale dense convolutional neural networks to improve tomographic reconstruction from limited data.
Open Access Article Bidirectional Reflectance Measurement and Reflection Model Fitting of Complex Materials Using an Image-Based Measurement Setup
J. Imaging 2018, 4(11), 136; https://doi.org/10.3390/jimaging4110136
Received: 8 October 2018 / Revised: 2 November 2018 / Accepted: 16 November 2018 / Published: 20 November 2018
Viewed by 158 | PDF Full-text (3324 KB) | HTML Full-text | XML Full-text
Abstract
Materials with a complex visual appearance, such as goniochromatic or non-diffuse materials, are widely used in the packaging industry. Measuring the optical properties of such materials requires a bidirectional approach, which makes their characterization difficult and time consuming. We investigate the suitability of an image-based measurement setup for measuring materials with a complex visual appearance, and model them using two well-established reflection models, Cook–Torrance and isotropic Ward. We found that the complex materials typically used in the print and packaging industry, similar to the ones used in this paper, can be measured bidirectionally using our setup, but with a noticeable error. Furthermore, the reflection models used in this paper show large colorimetric errors, especially for the goniochromatic material measured.
(This article belongs to the Special Issue Material Appearance and Visual Understanding)
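As a rough illustration of one of the two reflection models fitted in this article, the following is a minimal sketch of the isotropic Ward BRDF; the parameter names and test values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ward_isotropic(theta_i, theta_o, delta_h, rho_d, rho_s, alpha):
    """Isotropic Ward BRDF: a diffuse lobe plus a Gaussian specular lobe.

    theta_i, theta_o : incident / outgoing polar angles (radians)
    delta_h          : angle between the half vector and the surface normal
    rho_d, rho_s     : diffuse / specular albedo
    alpha            : surface roughness
    """
    diffuse = rho_d / np.pi
    specular = (rho_s
                * np.exp(-np.tan(delta_h) ** 2 / alpha ** 2)
                / (4.0 * np.pi * alpha ** 2
                   * np.sqrt(np.cos(theta_i) * np.cos(theta_o))))
    return diffuse + specular
```

Fitting then amounts to choosing `rho_d`, `rho_s` and `alpha` that minimize the (colorimetric) error between this model and the measured bidirectional samples.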

Open Access Article 3D Printing Endobronchial Models for Surgical Training and Simulation
J. Imaging 2018, 4(11), 135; https://doi.org/10.3390/jimaging4110135
Received: 16 September 2018 / Revised: 3 November 2018 / Accepted: 14 November 2018 / Published: 16 November 2018
Viewed by 214 | PDF Full-text (2191 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Lung cancer is the leading cause of cancer-related deaths. Many methods and devices help acquire more accurate clinical and localization information during lung interventions and may reduce the death rate from lung cancer. However, there is a learning curve for operating these tools due to the complex structure of the airway. In this study, we first discuss the creation of a lung phantom model from medical images, followed by a comparison of 3D prints in terms of quality and consistency. Two tests were conducted to evaluate the performance of the developed phantom, which was designed for training simulations of the target and ablation processes in endobronchial interventions. The target test was conducted with an electromagnetic tracking catheter and navigation software. The ablation test was conducted with an ablation catheter and a recently developed thermochromic ablation gel. The results of both tests show that the phantom is very useful for target and ablation simulation. In addition, the thermochromic gel allowed doctors to visualize the ablation zone. Many lung interventions may benefit from the custom training and improved accuracy afforded by the proposed low-cost, patient-specific phantom.
(This article belongs to the Special Issue Image-Guided Medical Robotics)

Open Access Article Algorithms for 3D Particles Characterization Using X-Ray Microtomography in Proppant Crush Test
J. Imaging 2018, 4(11), 134; https://doi.org/10.3390/jimaging4110134
Received: 4 October 2018 / Revised: 4 November 2018 / Accepted: 9 November 2018 / Published: 12 November 2018
Viewed by 216 | PDF Full-text (9038 KB) | HTML Full-text | XML Full-text
Abstract
We present image processing algorithms for a new technique of ceramic proppant crush resistance characterization. To obtain images of the proppant material before and after the test, we used X-ray microtomography. We propose a watershed-based unsupervised algorithm for segmentation of proppant particles, as well as a set of parameters for the characterization of 3D particle size, shape, and porosity. An effective approach based on central geometric moments is described and used to calculate the particles' form factor, compactness, equivalent ellipsoid axis lengths, and the lengths of projections onto these axes. The obtained grain size distribution and crush resistance match the results of the conventional sieve-based test. However, our technique has a remarkable advantage over the traditional laboratory method, since it allows the destruction to be traced at the level of individual particles and their fragments, and enables analysis of the morphological features of fines. We also provide an example of how the approach can be used to verify statistical hypotheses about the correlation between particles' parameters and their crushing under load.
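The watershed segmentation step can be illustrated with a minimal 2D sketch (the paper works on 3D tomograms; the seed-selection heuristic and all parameter values here are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage as ndi

def segment_particles(binary):
    """Unsupervised, marker-based watershed: split touching particles by
    flooding the inverted distance transform from its prominent maxima."""
    dist = ndi.distance_transform_edt(binary)
    # Seeds: prominent local maxima of the distance map (one per particle core).
    maxima = (dist == ndi.maximum_filter(dist, size=5)) & (dist > 0.7 * dist.max())
    markers, n = ndi.label(maxima)
    labels = ndi.watershed_ift(np.uint8(dist.max() - dist), markers)
    labels[~binary] = 0          # keep labels inside the particles only
    return labels, n

# Two overlapping discs: plain connected-component labelling sees one blob,
# while the watershed separates them into two particles.
img = np.zeros((60, 100), dtype=bool)
yy, xx = np.mgrid[:60, :100]
img |= (yy - 30) ** 2 + (xx - 30) ** 2 <= 15 ** 2
img |= (yy - 30) ** 2 + (xx - 58) ** 2 <= 15 ** 2
labels, n = segment_particles(img)
```

The per-particle labels are then the input for size, shape and porosity statistics.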

Open Access Article Optimal Color Lighting for Scanning Images of Flat Panel Display Using Simplex Search
J. Imaging 2018, 4(11), 133; https://doi.org/10.3390/jimaging4110133
Received: 16 July 2018 / Revised: 28 October 2018 / Accepted: 7 November 2018 / Published: 12 November 2018
Viewed by 161 | PDF Full-text (1415 KB) | HTML Full-text | XML Full-text
Abstract
A system for inspecting flat panel displays (FPDs) acquires scanning images using multiline charge-coupled device (CCD) cameras and industrial machine vision. Optical filters are currently installed in front of these inspection systems to obtain high-quality images. However, the required combination of optical filters is determined manually and empirically; this is referred to as passive color control. In this study, active color control is proposed for inspecting FPDs. This inspection scheme requires the scanning of images, which is achieved using a mixed color light source and a mixing algorithm. The light source utilizes high-power light-emitting diodes (LEDs) of multiple colors and a communication port to set their dimming levels. The mixed light illuminates an active-matrix organic light-emitting diode (AMOLED) panel after passing through a beam expander and being shaped into a line beam. The image quality is then evaluated using the Tenenbaum gradient after intensity calibration of the scanning images. The dimming levels are determined using the simplex search method, which maximizes the image quality. The color of the light was varied after every scan of an AMOLED panel, and the variation was iterated until the image quality approached a local maximum. The number of scans performed was less than 225, while the number of dimming level combinations was 2048⁴. The proposed method can reduce manual tasks in setting up inspection machines, and hence is useful for inspection machines in FPD production processes.
(This article belongs to the Special Issue Computational Colour Imaging)
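A toy sketch of the search loop described above: a Tenenbaum-gradient (Tenengrad) focus measure driven by Nelder–Mead simplex search. The simulated `scan_quality` function and its ideal dimming vector are invented for illustration; the real system scores actual scans of the panel.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.optimize import minimize

def tenenbaum_gradient(img):
    """Tenenbaum (Tenengrad) focus measure: energy of the Sobel gradients."""
    gx, gy = sobel(img, axis=0), sobel(img, axis=1)
    return float(np.sum(gx ** 2 + gy ** 2))

def scan_quality(dim_levels, ideal=np.array([900.0, 1400.0, 700.0])):
    """Hypothetical stand-in for one scan: image contrast (and hence the
    sharpness score) peaks when the dimming vector hits an ideal level."""
    contrast = np.exp(-np.sum((dim_levels - ideal) ** 2) / 1e5)
    ramp = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
    return tenenbaum_gradient(contrast * ramp)

# Nelder-Mead simplex search over the dimming levels (derivative-free, so a
# new scan per evaluation is all it needs).
res = minimize(lambda d: -scan_quality(d),
               x0=np.array([1024.0, 1024.0, 1024.0]),
               method="Nelder-Mead")
```

Each function evaluation corresponds to one scan, which is why the simplex method's frugal evaluation count (here well under the 225 scans cited above) matters.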

Open Access Article Incorporating Surface Elevation Information in UAV Multispectral Images for Mapping Weed Patches
J. Imaging 2018, 4(11), 132; https://doi.org/10.3390/jimaging4110132
Received: 11 September 2018 / Revised: 16 October 2018 / Accepted: 2 November 2018 / Published: 9 November 2018
Viewed by 276 | PDF Full-text (3244 KB) | HTML Full-text | XML Full-text
Abstract
Accurate mapping of weed distribution within a field is a first step towards effective weed management. The aim of this work was to improve the mapping of milk thistle (Silybum marianum) weed patches in unmanned aerial vehicle (UAV) images using auxiliary layers of information, such as spatial texture and vegetation height estimated from the UAV digital surface model. UAV multispectral images acquired in the visible and near-infrared parts of the spectrum were used as the main source of data, together with texture estimated for the image bands using a local variance filter. The digital surface model was created with structure-from-motion algorithms using the UAV image stereopairs. From this layer, the terrain elevation was estimated using a focal minimum filter followed by a low-pass filter. The plant height was computed by subtracting the terrain elevation from the digital surface model. Three classification algorithms (maximum likelihood, minimum distance and an object-based image classifier) were used to identify S. marianum among other vegetation using various combinations of inputs: image bands, texture and plant height. The resulting weed distribution maps were evaluated for accuracy using field-surveyed data. Both texture and plant height helped improve the accuracy of S. marianum classification, increasing the overall accuracy from 70% to 87% in 2015, and from 82% to 95% in 2016. Thus, as texture is easier to compute than plant height from a digital surface model, it may be preferable for future weed mapping applications.
(This article belongs to the Special Issue Remote and Proximal Sensing Applications in Agriculture)
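The terrain and plant-height estimation step can be sketched in a few lines (the window size and the synthetic DSM are illustrative assumptions; the paper derives the DSM from structure from motion):

```python
import numpy as np
from scipy import ndimage as ndi

def plant_height(dsm, window=15):
    """Terrain = focal minimum of the DSM followed by a low-pass (mean)
    filter; plant height = DSM minus the estimated terrain."""
    terrain = ndi.minimum_filter(dsm, size=window)
    terrain = ndi.uniform_filter(terrain, size=window)
    return dsm - terrain

# Synthetic check: a flat field at 100 m with a 5-px weed patch 0.5 m tall.
dsm = np.full((50, 50), 100.0)
dsm[20:25, 20:25] += 0.5
height = plant_height(dsm)
```

The focal minimum removes vegetation narrower than the window, and the mean filter smooths the resulting terrain estimate before subtraction.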

Open Access Article Two-Dimensional Orthonormal Tree-Structured Haar Transform for Fast Block Matching
J. Imaging 2018, 4(11), 131; https://doi.org/10.3390/jimaging4110131
Received: 14 September 2018 / Revised: 22 October 2018 / Accepted: 31 October 2018 / Published: 7 November 2018
Viewed by 157 | PDF Full-text (468 KB) | HTML Full-text | XML Full-text
Abstract
The goal of block matching (BM) is to locate small patches of an image that are similar to a given patch or template. This can be done either in the spatial domain or, more efficiently, in a transform domain. Full search (FS) BM is an accurate but computationally expensive procedure. The recently introduced orthogonal Haar transform (OHT)-based BM method significantly reduces the computational complexity of the FS method. However, it cannot be used in applications where the patch size is not a power of two. In this paper, we generalize OHT-based BM to arbitrary patch sizes, introducing a new BM algorithm based on a 2D orthonormal tree-structured Haar transform (OTSHT). The basis images of the OHT are uniquely determined by the full balanced binary tree, whereas various OTSHTs can be constructed from any binary tree. The computational complexity of BM depends on the specific design of the OTSHT. We compare BM based on OTSHTs to FS and OHT (for restricted patch sizes) within the framework of image denoising, using WNNM as the denoiser. Experimental results on eight grayscale test images corrupted by additive white Gaussian noise at five noise levels demonstrate that WNNM with OTSHT-based BM outperforms the other methods both computationally and qualitatively.
(This article belongs to the Special Issue Mathematical and Computational Methods in Image Processing)
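The orthonormality that makes Haar-domain matching equivalent to spatial SSD (and partial coefficient sums a lower bound on it) can be checked with a short sketch. This builds the standard fully balanced OHT, not the paper's tree-structured generalization:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix for n a power of two (balanced-tree case)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    scaling = np.kron(h, [1.0, 1.0])                # averaging rows
    wavelet = np.kron(np.eye(n // 2), [1.0, -1.0])  # differencing rows
    return np.vstack([scaling, wavelet]) / np.sqrt(2.0)

H = haar_matrix(8)
rng = np.random.default_rng(0)
patch, template = rng.standard_normal((2, 8, 8))
# 2D transform of a patch: coefficients = H @ X @ H.T.  By orthonormality
# (Parseval), the SSD between patches is preserved in the Haar domain.
d_spatial = np.sum((patch - template) ** 2)
d_haar = np.sum((H @ (patch - template) @ H.T) ** 2)
```

Fast BM exploits this by rejecting candidate patches whose partial Haar distance already exceeds the current best match.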

Open Access Article Laser Scanners for High-Quality 3D and IR Imaging in Cultural Heritage Monitoring and Documentation
J. Imaging 2018, 4(11), 130; https://doi.org/10.3390/jimaging4110130
Received: 24 August 2018 / Revised: 30 October 2018 / Accepted: 1 November 2018 / Published: 5 November 2018
Viewed by 180 | PDF Full-text (11373 KB) | HTML Full-text | XML Full-text
Abstract
Digital tools such as 3D (three-dimensional) modelling and imaging techniques are playing an increasing role in many fields of application, thanks to some significant features, such as their powerful communicative capacity, the versatility of their results and their non-invasiveness. These properties are very important in cultural heritage, where modern methodologies provide an efficient means for deeply analyzing and virtually rendering artworks without contact or damage. In this paper, we present two laser scanner prototypes based on the Imaging Topological Radar (ITR) technology developed at the ENEA Research Center of Frascati (RM, Italy) to obtain 3D models and IR images of medium/large targets using laser sources, without the need for scaffolding and independently of illumination conditions. The RGB-ITR (Red Green Blue-ITR) scanner employs three wavelengths in the visible range for three-dimensional color digitization at distances of up to 30 m, while the IR-ITR (Infrared-ITR) system allows for layering inspection using one IR source. The functionalities and operability of the two systems are presented through the results of several case studies and laboratory tests.
(This article belongs to the Special Issue Image Enhancement, Modeling and Visualization)

Open Access Technical Note DIRT: The Dacus Image Recognition Toolkit
J. Imaging 2018, 4(11), 129; https://doi.org/10.3390/jimaging4110129
Received: 25 August 2018 / Revised: 25 October 2018 / Accepted: 26 October 2018 / Published: 30 October 2018
Viewed by 225 | PDF Full-text (6790 KB) | HTML Full-text | XML Full-text
Abstract
Modern agriculture is facing unique challenges in building a sustainable future for food production, in which the reliable detection of plantation threats is of critical importance. The breadth of existing information sources, and their equivalent sensors, can provide a wealth of data which, to be useful, must be transformed into actionable knowledge. Approaches based on Information and Communication Technologies (ICT) have been shown to help farmers and related stakeholders make decisions by examining large volumes of data while assessing multiple criteria. In this paper, we address the automated identification (and instance counting) of the major threat to olive trees and their fruit, Bactrocera oleae (a.k.a. Dacus), based on images of the contents of the commonly used McPhail trap. Accordingly, we introduce the "Dacus Image Recognition Toolkit" (DIRT), a collection of publicly available data, programming code samples and web services focused on supporting research into the management of the Dacus, together with extensive experimentation on the capability of the proposed dataset for identifying Dacuses using deep learning methods. Experimental results indicated a detection performance (mAP) of 91.52% in identifying Dacuses in trap images featuring various pests. Moreover, the results indicated a trade-off between image attributes affecting detail, file size and the complexity of approaches, and mAP performance, which can be used selectively to better address the needs of each usage scenario.
(This article belongs to the Special Issue Image Based Information Retrieval from the Web)

Open Access Article Improving Tomographic Reconstruction from Limited Data Using Mixed-Scale Dense Convolutional Neural Networks
J. Imaging 2018, 4(11), 128; https://doi.org/10.3390/jimaging4110128
Received: 1 September 2018 / Revised: 25 September 2018 / Accepted: 10 October 2018 / Published: 30 October 2018
Viewed by 425 | PDF Full-text (49504 KB) | HTML Full-text | XML Full-text
Abstract
In many applications of tomography, the acquired data are limited in one or more ways due to unavoidable experimental constraints. In such cases, popular direct reconstruction algorithms tend to produce inaccurate images, and more accurate iterative algorithms often have prohibitively high computational costs. Using machine learning to improve the image quality of direct algorithms is a recently proposed alternative for which promising results have been shown. However, previous attempts have focused on encoder–decoder networks, which have several disadvantages when applied to large tomographic images, preventing wide application in practice. Here, we propose the use of the Mixed-Scale Dense convolutional neural network architecture, which was specifically designed to avoid these disadvantages, to improve tomographic reconstruction from limited data. Results are shown for various types of data limitation and object type, for both simulated data and large-scale real-world experimental data. The results are compared with popular tomographic reconstruction algorithms and machine learning algorithms, showing that Mixed-Scale Dense networks are able to significantly improve reconstruction quality even with severely limited data, and produce more accurate results than existing algorithms.
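The core architectural idea, dilated convolutions with dense connectivity across all layers, can be sketched in NumPy. This is a forward pass only, with random untrained kernels; the real network is trained, uses per-channel weightings, and its exact layer arithmetic is not reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve

def dilated_conv(img, kernel, dilation):
    """3x3 convolution with the kernel taps spread `dilation` pixels apart."""
    k = np.zeros((2 * dilation + 1, 2 * dilation + 1))
    k[::dilation, ::dilation] = kernel
    return convolve(img, k, mode="reflect")

def msd_features(img, depth=4):
    """Mixed-scale dense sketch: layer i convolves the sum of *all* earlier
    feature maps with dilation i, so every scale feeds every later layer
    and all feature maps keep the full image resolution."""
    rng = np.random.default_rng(0)
    feats = [img]
    for i in range(1, depth + 1):
        kernel = rng.standard_normal((3, 3)) / 9.0
        pre = sum(feats)                               # dense connectivity
        feats.append(np.maximum(dilated_conv(pre, kernel, i), 0.0))  # ReLU
    return feats
```

Because no downsampling occurs, the architecture avoids the resolution loss and large parameter counts that make encoder-decoder networks awkward on big tomographic images.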

Open Access Article Green Stability Assumption: Unsupervised Learning for Statistics-Based Illumination Estimation
J. Imaging 2018, 4(11), 127; https://doi.org/10.3390/jimaging4110127
Received: 31 August 2018 / Revised: 4 October 2018 / Accepted: 26 October 2018 / Published: 29 October 2018
Viewed by 219 | PDF Full-text (1210 KB) | HTML Full-text | XML Full-text
Abstract
In the image processing pipeline of almost every digital camera, there is a stage for removing the influence of illumination on the colors of the image scene. Tuning the parameter values of an illumination estimation method for maximal accuracy requires calibrated images with known ground-truth illumination, but creating them for a given sensor is time-consuming. In this paper, the green stability assumption is proposed, which can be used to fine-tune the parameter values of some common illumination estimation methods using only non-calibrated images. The obtained accuracy is practically the same as when training on calibrated images, but the whole process is much faster since no calibration is required. The results are presented and discussed, and a link to the source code is provided in the Experimental Results section.
(This article belongs to the Special Issue Image Enhancement, Modeling and Visualization)
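Statistics-based methods of the kind being tuned are often from the Shades-of-Grey family; below is a minimal sketch of that estimator, plus a paraphrase of the tuning idea (prefer parameters whose estimated green chromaticity stays stable across uncalibrated images). The candidate set and the selection criterion are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def shades_of_grey(img, p):
    """Minkowski-mean illumination estimate (p=1: Grey World), returned as a
    chromaticity vector whose components sum to 1."""
    e = np.mean(img.reshape(-1, 3) ** p, axis=0) ** (1.0 / p)
    return e / e.sum()

def tune_p(images, candidates=(1, 2, 4, 6)):
    """Green-stability-style heuristic: pick the p whose estimated green
    chromaticity varies least over a set of non-calibrated images."""
    def green_var(p):
        return np.var([shades_of_grey(im, p)[1] for im in images])
    return min(candidates, key=green_var)
```

No ground-truth illumination enters `tune_p`, which is the point: the fine-tuning works on plain, uncalibrated shots.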

Open Access Article Personalized Shares in Visual Cryptography
J. Imaging 2018, 4(11), 126; https://doi.org/10.3390/jimaging4110126
Received: 13 August 2018 / Revised: 7 October 2018 / Accepted: 24 October 2018 / Published: 29 October 2018
Viewed by 264 | PDF Full-text (1437 KB) | HTML Full-text | XML Full-text
Abstract
This article deals with visual cryptography, which consists of hiding a message in two key images (also called shares). The message is decrypted by human vision when the shares are superposed. In existing methods, the surface of the key images is not visually pleasant and is not exploited for communicating textual or pictorial information. Here, we propose a pictogram-based visual cryptography technique, which generates shares textured with customizable and aesthetic rendering. Moreover, the robustness of this technique against automated decoding of the secret message is analyzed. Experimental results show concrete personalized shares and their application potential for security and creative domains.
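The underlying (2,2) visual cryptography scheme that the shares personalize can be sketched as follows (the pattern pair and the tiny secret are illustrative; the paper's contribution, texturing the shares with pictograms, is not reproduced here):

```python
import numpy as np

PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 1 = black

def make_shares(secret, rng=np.random.default_rng(0)):
    """Classic (2,2) visual cryptography: each secret pixel expands to a 2x2
    block; white pixels get identical blocks, black pixels complementary ones."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for y in range(h):
        for x in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*y:2*y+2, 2*x:2*x+2] = p
            s2[2*y:2*y+2, 2*x:2*x+2] = p if secret[y, x] == 0 else 1 - p
    return s1, s2

def overlay(s1, s2):
    """Stacking transparencies is a pixelwise OR of the black subpixels."""
    return np.maximum(s1, s2)
```

Each share alone is half-black everywhere and thus carries no information about the secret; only the physical superposition reveals it.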

Open Access Article Image-Based Surrogates of Socio-Economic Status in Urban Neighborhoods Using Deep Multiple Instance Learning
J. Imaging 2018, 4(11), 125; https://doi.org/10.3390/jimaging4110125
Received: 7 August 2018 / Revised: 2 October 2018 / Accepted: 18 October 2018 / Published: 23 October 2018
Viewed by 354 | PDF Full-text (6839 KB) | HTML Full-text | XML Full-text
Abstract
(1) Background: Evidence-based policymaking requires data about the local population's socioeconomic status (SES) at a detailed geographical level; however, such information is often not available or is too expensive to acquire. Researchers have proposed estimating SES indicators by analyzing Google Street View images; however, these methods are also resource-intensive, since they require large volumes of manually labeled training data. (2) Methods: We propose a methodology for automatically computing surrogate variables of SES indicators using street images of parked cars and deep multiple instance learning. Our approach does not require any manually created labels, apart from data already available from statistical authorities, while the entire pipeline for image acquisition, parked car detection, car classification, and surrogate variable computation is fully automated. The proposed surrogate variables are then used in linear regression models to estimate the target SES indicators. (3) Results: We implement and evaluate a model based on the proposed surrogate variable in 30 municipalities of varying SES in Greece. Our model has R² = 0.76 and a correlation coefficient of 0.874 with the true unemployment rate, while it achieves a mean absolute percentage error of 0.089 and a mean absolute error of 1.87 on a held-out test set. Similar results are obtained for other socioeconomic indicators related to education level and occupational prestige. (4) Conclusions: The proposed methodology can be used to automatically estimate SES indicators at the local level, using images of parked cars detected via Google Street View, without any manual labeling effort.
(This article belongs to the Special Issue Image Based Information Retrieval from the Web)
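The final regression step reduces to ordinary least squares of an SES indicator on the image-derived surrogate. The sketch below synthesizes the surrogate (the hypothetical `premium_share` per municipality), since the real one comes from the car-detection pipeline:

```python
import numpy as np

# Hypothetical surrogate: fraction of detected parked cars classified as
# "premium" in each of 30 municipalities (synthesized for illustration).
rng = np.random.default_rng(42)
premium_share = rng.uniform(0.05, 0.6, size=30)
unemployment = 25.0 - 30.0 * premium_share + rng.normal(0.0, 1.0, size=30)

# Ordinary least squares: unemployment ~ a + b * surrogate.
X = np.column_stack([np.ones(30), premium_share])
coef, *_ = np.linalg.lstsq(X, unemployment, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((unemployment - pred) ** 2) \
       / np.sum((unemployment - unemployment.mean()) ** 2)
```

With a well-chosen surrogate, a single slope and intercept suffice, which is why the quality of the automated car detection and classification dominates the final accuracy.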

Open Access Article Automated Curved and Multiplanar Reformation for Screening of the Proximal Coronary Arteries in MR Angiography
J. Imaging 2018, 4(11), 124; https://doi.org/10.3390/jimaging4110124
Received: 5 September 2018 / Revised: 15 October 2018 / Accepted: 18 October 2018 / Published: 23 October 2018
Viewed by 211 | PDF Full-text (1225 KB) | HTML Full-text | XML Full-text
Abstract
Congenital anomalies of the coronary ostia can lead to sudden death. A screening solution would be useful to prevent adverse outcomes for the affected individuals. To be considered for integration into clinical routine, such a procedure must meet strict constraints in terms of invasiveness, time and user interaction. Imaging must be fast and seamlessly integrable into the clinical process, and non-contrast-enhanced coronary magnetic resonance angiography (MRA) is well suited for this. Furthermore, planar reformations have proved effective in reducing the acquired volumetric datasets to 2D images, but these usually require time-consuming user interaction. To meet the aforementioned challenges, we present a fully automated solution for imaging and reformatting the proximal coronary arteries that enables their rapid screening. The proposed pipeline consists of: (I) highly accelerated single breath-hold MRA data acquisition, (II) coronary ostia detection and vessel centerline extraction, and (III) curved planar reformation of the proximal coronary arteries, as well as multiplanar reformation of the coronary ostia. The procedure proved robust and effective in ten volunteer datasets. Imaging of the proximal coronary arteries took 24 ± 5 s and was successful within one breath-hold for all subjects. The extracted centerlines achieve an overlap of 0.76 ± 0.18 compared with the reference standard, and the average distance of the centerline points from the spherical surface used for reformation was 1.1 ± 0.51 mm. The promising results encourage further experiments on patient data, particularly in coronary ostia anomaly screening.
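Step (III) can be sketched with `scipy.ndimage.map_coordinates`: sample the volume along an in-plane normal at each centerline point, flattening the vessel into a 2D image. The normal construction here is a simplified assumption; a production CPR would track a rotation-minimizing frame along the vessel:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def curved_reformation(volume, centerline, half_width=8):
    """Minimal curved planar reformation: one resampled row per centerline
    point, taken perpendicular to the local tangent."""
    offsets = np.arange(-half_width, half_width + 1)
    rows = []
    for i in range(len(centerline)):
        # Tangent by finite differences, then one perpendicular direction.
        lo, hi = max(i - 1, 0), min(i + 1, len(centerline) - 1)
        t = centerline[hi] - centerline[lo]
        t = t / np.linalg.norm(t)
        n = np.array([-t[1], t[0], 0.0])
        if np.linalg.norm(n) < 1e-6:      # tangent parallel to third axis
            n = np.array([1.0, 0.0, 0.0])
        n = n / np.linalg.norm(n)
        pts = centerline[i] + offsets[:, None] * n
        rows.append(map_coordinates(volume, pts.T, order=1, mode="nearest"))
    return np.array(rows)

# Synthetic check: a straight bright "vessel" along the first axis.
volume = np.zeros((20, 32, 32))
volume[:, 16, 16] = 1.0
centerline = np.array([[z, 16.0, 16.0] for z in range(20)], dtype=float)
cpr = curved_reformation(volume, centerline)
```

The resulting 2D image shows the whole proximal vessel at once, which is what makes single-image screening of the ostia practical.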
