
Table of Contents

J. Imaging, Volume 4, Issue 9 (September 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: This paper presents an uncertainty-aware visual system for image pre-processing.
Displaying articles 1-6
Open Access Article: An Uncertainty-Aware Visual System for Image Pre-Processing
J. Imaging 2018, 4(9), 109; https://doi.org/10.3390/jimaging4090109
Received: 13 August 2018 / Revised: 3 September 2018 / Accepted: 5 September 2018 / Published: 10 September 2018
PDF Full-text (8051 KB) | HTML Full-text | XML Full-text
Abstract
Because of the reconstruction process inherent in all image capturing methods, image data is inherently affected by uncertainty: the underlying image reconstruction model cannot map all physical properties in their entirety. To account for these effects, image uncertainty needs to be quantified and propagated along the entire image processing pipeline. Classical image pre-processing algorithms do not consider this information. This paper therefore presents an uncertainty-aware image pre-processing paradigm that takes the input image's uncertainty into account and propagates it through the entire pipeline. To accomplish this, we utilize rules for the transformation and propagation of uncertainty to incorporate this additional information into a variety of operations. As a result, we are able to adapt prominent image pre-processing algorithms so that they consider the input image's uncertainty. Furthermore, we allow the composition of arbitrary image pre-processing pipelines and visually encode the uncertainty accumulated throughout the pipeline. The effectiveness of the approach is demonstrated by creating image pre-processing pipelines for a variety of real-world datasets.
(This article belongs to the Special Issue Image Enhancement, Modeling and Visualization)
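The abstract's "rules for transformation and propagation of uncertainty" are not spelled out here, but for a linear filter they commonly reduce to standard Gaussian error propagation: with independent per-pixel errors, a weighted sum y = Σ wᵢxᵢ has variance Σ wᵢ²σᵢ². A minimal NumPy sketch of this idea (function names are illustrative, not from the paper):

```python
import numpy as np

def conv2d_valid(a, k):
    """Plain 'valid' 2-D convolution (no padding), for illustration."""
    kh, kw = k.shape
    H, W = a.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + kh, j:j + kw] * k)
    return out

def propagate_uncertainty(image, variance, kernel):
    """Apply a linear filter and propagate per-pixel variance.

    For y = sum(w_i * x_i) with independent pixel errors,
    Var(y) = sum(w_i**2 * var_i), so the variance map is
    convolved with the squared kernel.
    """
    filtered = conv2d_valid(image, kernel)
    propagated = conv2d_valid(variance, kernel ** 2)
    return filtered, propagated

# 3x3 box blur on a noisy image with uniform per-pixel variance
rng = np.random.default_rng(0)
img = rng.random((16, 16))
var = np.full_like(img, 0.01)
k = np.full((3, 3), 1.0 / 9.0)
out, out_var = propagate_uncertainty(img, var, k)
```

Averaging nine independent pixels with a 3x3 box filter divides the variance by nine, which is exactly what the propagated map shows.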

Open Access Article: Adaptive Multi-Scale Entropy Fusion De-Hazing Based on Fractional Order
J. Imaging 2018, 4(9), 108; https://doi.org/10.3390/jimaging4090108
Received: 21 July 2018 / Revised: 29 August 2018 / Accepted: 31 August 2018 / Published: 6 September 2018
PDF Full-text (10632 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes a proposed fractional filter-based multi-scale underwater and hazy image enhancement algorithm. The proposed system combines a modified global contrast operator with fractional order-based multi-scale filters to generate several images, which are fused based on entropy and standard deviation. The multi-scale global enhancement technique enables fully adaptive and controlled color correction and contrast enhancement without overexposing highlights when processing hazy and underwater images, in addition to illumination/reflectance estimation coupled with global and local contrast enhancement. The proposed algorithm is also compared with the most recent available state-of-the-art multi-scale fusion de-hazing algorithm. Experimental comparisons indicate that the proposed approach yields better edge and contrast enhancement without halo effects or color degradation, and is faster and more adaptive than the other algorithms from the literature.
(This article belongs to the Special Issue Physics-based Computer Vision: Color and Photometry)
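The abstract does not give the exact fusion rule, but one plausible sketch of entropy- and standard-deviation-based fusion weights each candidate image by the product of its Shannon entropy and its standard deviation, normalized to sum to one (an assumption for illustration, not the paper's formula):

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Histogram-based Shannon entropy of an image in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse(images):
    """Pixel-wise convex combination, weighted by entropy * std."""
    w = np.array([shannon_entropy(im) * im.std() for im in images])
    w = w / w.sum()
    # weighted sum over the image axis
    return np.tensordot(w, np.stack(images), axes=1)

rng = np.random.default_rng(0)
# two candidate enhancement results with different contrast
imgs = [rng.random((32, 32)) * s for s in (0.5, 1.0)]
fused = fuse(imgs)
```

Because the weights are non-negative and sum to one, the fused image stays within the range spanned by its inputs, so the fusion step itself cannot overexpose highlights.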

Open Access Article: PedNet: A Spatio-Temporal Deep Convolutional Neural Network for Pedestrian Segmentation
J. Imaging 2018, 4(9), 107; https://doi.org/10.3390/jimaging4090107
Received: 13 July 2018 / Revised: 22 August 2018 / Accepted: 28 August 2018 / Published: 4 September 2018
PDF Full-text (26422 KB) | HTML Full-text | XML Full-text
Abstract
Articulation modeling, feature extraction, and classification are the key components of pedestrian segmentation. Usually, these components are modeled independently and then combined sequentially. However, this approach is prone to poor segmentation if any individual component is weakly designed. To cope with this problem, we propose a spatio-temporal convolutional neural network named PedNet, which exploits temporal information for spatial segmentation. The backbone of PedNet is an encoder–decoder network that downsamples and then upsamples the feature maps. The input to the network is a set of three frames, and the output is a binary mask of the segmented regions in the middle frame. Unlike classical deep models, in which the convolution layers are followed by a fully connected layer for classification, PedNet is a Fully Convolutional Network (FCN). It is trained end-to-end, and segmentation is achieved without any pre- or post-processing. The main characteristic of PedNet is its design: it performs segmentation on a frame-by-frame basis, but uses temporal information from the previous and following frames to segment the pedestrian in the current frame. Moreover, to combine low-level features with the high-level semantic information learned by the deeper layers, we use long-skip connections from the encoder to the decoder and concatenate the output of low-level layers with that of higher-level layers. This helps produce segmentation maps with sharp boundaries. To show the benefits of temporal information, we also visualized different layers of the network; the visualization showed that the network learns different information from consecutive frames and combines it to segment the middle frame.
We evaluated our approach on eight challenging datasets in which humans are involved in different activities with severe articulation (football, road crossing, surveillance), and against seven state-of-the-art methods on the widely used CamVid dataset. Performance is reported in terms of precision/recall, F1, F2, and mIoU. The qualitative and quantitative results show that PedNet achieves promising results against state-of-the-art methods, with substantial improvement on all performance metrics.
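The reported metrics are standard for binary segmentation and can be computed directly from a predicted mask and the ground truth; a compact reference implementation (not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Precision, recall, F1, F2, and IoU for binary masks.

    F-beta = (1 + beta^2) * P * R / (beta^2 * P + R); F2 weights
    recall higher than precision.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    f2 = 5 * precision * recall / (4 * precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, f2, iou

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
p, r, f1, f2, iou = segmentation_metrics(pred, gt)
```

mIoU for a video dataset is then just this IoU averaged over frames (and classes, for multi-class masks).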

Open Access Article: Viewing Experience Model of First-Person Videos
J. Imaging 2018, 4(9), 106; https://doi.org/10.3390/jimaging4090106
Received: 28 May 2018 / Revised: 11 August 2018 / Accepted: 27 August 2018 / Published: 31 August 2018
PDF Full-text (1422 KB) | HTML Full-text | XML Full-text
Abstract
First-Person Videos (FPVs) are recorded with wearable cameras to share the recorder's First-Person Experience (FPE). Ideally, the FPE is conveyed by the viewing experience of the FPV. However, raw FPVs are usually too shaky to watch, which ruins the viewing experience. To address this, we improve the viewing experience of FPVs by modeling it as two parts: video stability and First-Person Motion Information (FPMI). Existing video stabilization techniques can improve video stability but damage the FPMI. We propose a Viewing Experience (VE) score that measures both the stability and the FPMI of an FPV by exploring the mechanism of human perception. This enables us to develop a system that stabilizes FPVs while preserving their FPMI, so that the viewing experience of FPVs is improved. Objective tests show that our measurement is robust under different kinds of noise and that our system is competitive with current stabilization techniques. Subjective tests show that (1) both our stability and FPMI measurements correctly compare the corresponding attributes of an FPV across different versions of the same content, and (2) our video processing system effectively improves the viewing experience of FPVs.
(This article belongs to the Special Issue Image Quality)
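The paper's perceptually motivated VE score is not reproduced here; as a rough illustrative proxy only, shakiness can be quantified as the variance of frame-to-frame camera displacement, which is small for a smooth pan and large for the same pan with jitter (a toy stand-in, not the paper's stability measure):

```python
import numpy as np

def shakiness(translations):
    """Variance of frame-to-frame displacement: a crude stability proxy.

    `translations` is a sequence of (x, y) global camera positions,
    one per frame; smooth motion gives nearly constant steps.
    """
    steps = np.diff(np.asarray(translations, dtype=float), axis=0)
    return steps.var()

# steady forward pan vs. the same pan with alternating jitter
smooth = [(t * 0.1, 0.0) for t in range(50)]
shaky = [(t * 0.1 + (-1) ** t * 0.3, 0.0) for t in range(50)]
```

Note that a measure like this penalizes jitter but not the pan itself, which mirrors the paper's goal of removing shake while keeping the first-person motion information.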

Open Access Article: Developing Forest Cover Composites through a Combination of Landsat-8 Optical and Sentinel-1 SAR Data for the Visualization and Extraction of Forested Areas
J. Imaging 2018, 4(9), 105; https://doi.org/10.3390/jimaging4090105
Received: 25 June 2018 / Revised: 21 August 2018 / Accepted: 23 August 2018 / Published: 26 August 2018
PDF Full-text (1876 KB) | HTML Full-text | XML Full-text
Abstract
Mapping the distribution of forested areas and monitoring their spatio-temporal changes are necessary for the conservation and management of forests. This paper presents two new image composites for the visualization and extraction of forest cover. By exploiting multi-temporal, multi-spectral reflectance datasets from the Landsat-8 satellite, the Forest Cover Composite (FCC) was designed in this research. The FCC is an RGB (red, green, blue) color composite that uses the short-wave infrared reflectance and green reflectance from the day when the Normalized Difference Vegetation Index (NDVI) is at its maximum as the red and blue bands, respectively, and the annual mean NDVI as the green band. The FCC is designed so that forested areas appear greener than other vegetation types, such as grasses and shrubs, while croplands and barren lands usually appear red and water/snow appears blue. However, forests may not necessarily be greener than other perennial vegetation. To cope with this problem, an Enhanced Forest Cover Composite (EFCC) was designed by combining the annual median backscattering intensity of the VH (vertical transmit, horizontal receive) polarization data from the Sentinel-1 satellite with the green term of the FCC, suppressing the green component (mean NDVI values) of the FCC over non-forested vegetative areas. The performances of the FCC and EFCC were evaluated for the discrimination and classification of forested areas across Japan with the support of reference data. Both composites provided promising results, and the high-resolution forest map newly produced in this research was more accurate in Japan than the extant MODIS (Moderate Resolution Imaging Spectroradiometer) Land Cover Type product (MCD12Q1). The proposed composite images are expected to improve forest monitoring activities in other regions as well.
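The FCC construction maps directly to array operations: find, per pixel, the date of maximum NDVI, then read the SWIR and green reflectance on that date and take the annual mean NDVI for the green band. A sketch on synthetic per-date stacks (the function name and array shapes are assumptions for illustration):

```python
import numpy as np

def forest_cover_composite(swir, green, ndvi):
    """Build the FCC from per-date stacks shaped (dates, H, W).

    R: SWIR reflectance on each pixel's maximum-NDVI date
    G: annual mean NDVI
    B: green reflectance on the same maximum-NDVI date
    """
    idx = np.argmax(ndvi, axis=0)          # (H, W) date index per pixel
    rows, cols = np.indices(idx.shape)     # grids for advanced indexing
    r = swir[idx, rows, cols]
    g = ndvi.mean(axis=0)
    b = green[idx, rows, cols]
    return np.stack([r, g, b], axis=-1)    # (H, W, 3) RGB composite

rng = np.random.default_rng(1)
swir = rng.random((12, 8, 8))    # 12 dates of an 8x8 tile
green = rng.random((12, 8, 8))
ndvi = rng.random((12, 8, 8))
fcc = forest_cover_composite(swir, green, ndvi)
```

The EFCC would then replace the green band with the mean NDVI modulated by Sentinel-1 VH backscatter; the abstract does not give that combination formula, so it is left out here.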

Open Access Article: GPU Accelerated Image Processing in CCD-Based Neutron Imaging
J. Imaging 2018, 4(9), 104; https://doi.org/10.3390/jimaging4090104
Received: 17 July 2018 / Revised: 9 August 2018 / Accepted: 9 August 2018 / Published: 21 August 2018
PDF Full-text (2960 KB) | HTML Full-text | XML Full-text
Abstract
Image processing is an important step in every imaging pipeline in the scientific community. In neutron imaging in particular, image processing is essential for correcting image artefacts that arise from low light levels and high noise. Because few neutron sources suitable for imaging are available worldwide, the development of these algorithms is not a primary focus of research work; once established, algorithms are not revisited for a long time and are therefore not optimized for high throughput. This work shows the speed gains that heterogeneous computing platforms can bring to image processing, using an established adaptive noise reduction algorithm as an example.
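The abstract does not name the specific adaptive noise reduction algorithm, so as a stand-in, a classic local-statistics (Lee-style) adaptive filter illustrates the kind of per-pixel workload involved; because every output pixel is computed independently, the double loop is embarrassingly parallel, which is what makes a GPU port (e.g., one thread per pixel) attractive:

```python
import numpy as np

def adaptive_denoise(img, win=3, noise_var=0.01):
    """Lee-style adaptive filter: smooth flat regions, preserve edges.

    out = mean + k * (img - mean), with k = local_var / (local_var + noise_var),
    so k -> 0 in flat (noise-dominated) regions and k -> 1 near edges.
    """
    H, W = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(H):           # each (i, j) is independent: ideal for a GPU
        for j in range(W):
            patch = padded[i:i + win, j:j + win]
            m, v = patch.mean(), patch.var()
            k = v / (v + noise_var)
            out[i, j] = m + k * (img[i, j] - m)
    return out

rng = np.random.default_rng(2)
flat = np.full((16, 16), 0.5) + rng.normal(0.0, 0.1, (16, 16))
den = adaptive_denoise(flat, noise_var=0.01)
```

On a flat noisy region the filter pulls each pixel toward its local mean, so the output variance drops well below the input variance.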
