
Table of Contents

J. Imaging, Volume 4, Issue 8 (August 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Open Access Article Synchrotron and Neutron Tomography of Paleontological Objects on the Facilities of the Kurchatov Institute
J. Imaging 2018, 4(8), 103; https://doi.org/10.3390/jimaging4080103
Received: 28 June 2018 / Revised: 5 August 2018 / Accepted: 13 August 2018 / Published: 15 August 2018
PDF Full-text (1873 KB) | HTML Full-text | XML Full-text
Abstract
The most important results of tomographic studies of paleontological objects performed on the facilities of the National Research Centre “Kurchatov Institute” are described. It is shown that the use of synchrotron and neutron tomography makes it possible to obtain new information on the structure of fossil animals, which is of fundamental importance for the taxonomy and morphological analysis of extinct fauna. Full article

Open Access Article Airborne Optical Sectioning
J. Imaging 2018, 4(8), 102; https://doi.org/10.3390/jimaging4080102
Received: 4 July 2018 / Revised: 2 August 2018 / Accepted: 11 August 2018 / Published: 13 August 2018
PDF Full-text (7671 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Drones are becoming increasingly popular for remote sensing of landscapes in archeology, cultural heritage, forestry, and other disciplines. They are more efficient than airplanes for capturing small areas of up to several hundred square meters. LiDAR (light detection and ranging) and photogrammetry have been applied together with drones to achieve 3D reconstruction. With airborne optical sectioning (AOS), we present a radically different approach that is based on an old idea: synthetic aperture imaging. Rather than measuring, computing, and rendering 3D point clouds or triangulated 3D meshes, we apply image-based rendering for 3D visualization. In contrast to photogrammetry, AOS does not suffer from inaccurate correspondence matches and long processing times. It is cheaper than LiDAR, delivers surface color information, and has the potential to achieve high sampling resolutions. AOS samples the optical signal of wide synthetic apertures (30–100 m diameter) with unstructured video images recorded from a low-cost camera drone to support optical sectioning by image integration. The wide aperture signal results in a shallow depth of field and consequently in a strong blur of out-of-focus occluders, while images of points in focus remain clearly visible. Shifting focus computationally towards the ground allows optical slicing through dense occluder structures (such as leaves, tree branches, and coniferous trees), and discovery and inspection of concealed artifacts on the surface. Full article
(This article belongs to the Special Issue New Trends in Image Processing for Cultural Heritage)
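The image-integration idea behind AOS can be illustrated with a minimal sketch. The function name, grayscale input, and the use of simple integer pixel shifts are our assumptions for illustration; the actual system registers unstructured drone poses, not plain translations:

```python
import numpy as np

def integrate_focal_stack(images, shifts):
    """Synthetic-aperture integration: average frames after registering
    them to a chosen focal plane (e.g., the ground)."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dy, dx) in zip(images, shifts):
        # Shift each frame so points on the focal plane align, then accumulate.
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(images)
```

Averaging many shifted views keeps the focal plane sharp while out-of-focus occluders blur out, which is the optical-sectioning effect the abstract describes.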

Open Access Article User-Centered Predictive Model for Improving Cultural Heritage Augmented Reality Applications: An HMM-Based Approach for Eye-Tracking Data
J. Imaging 2018, 4(8), 101; https://doi.org/10.3390/jimaging4080101
Received: 25 June 2018 / Revised: 27 July 2018 / Accepted: 1 August 2018 / Published: 6 August 2018
PDF Full-text (9409 KB) | HTML Full-text | XML Full-text
Abstract
Today, museum visits are perceived as an opportunity for individuals to explore and make up their own minds. The increasing technical capabilities of Augmented Reality (AR) technology have raised audience expectations, advancing the use of mobile AR in cultural heritage (CH) settings. Hence, there is a need to define criteria, based on users’ preferences, that can drive developers and insiders toward a more conscious development of AR-based applications. Starting from previous research (performed to define a protocol for understanding the visual behaviour of subjects looking at paintings), this paper introduces a truly predictive model of the museum visitor’s visual behaviour, measured by an eye tracker. A Hidden Markov Model (HMM) approach is presented that is able to predict users’ attention in front of a painting. Furthermore, this research compares users’ behaviour between adults and children, extending the results to different kinds of users and thus providing a reliable approach to eye trajectories. Tests have been conducted by defining areas of interest (AOIs), observing the most visited ones, and attempting to predict subsequent transitions between AOIs. The results demonstrate the effectiveness and suitability of our approach, with performance evaluation values that exceed 90%. Full article
(This article belongs to the Special Issue Multimedia Content Analysis and Applications)
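The first-order Markov view of AOI transitions underlying such a model can be sketched as follows. The AOI names and transition probabilities below are invented for illustration; the paper's model is learned from eye-tracking data:

```python
import numpy as np

# Toy transition matrix between three areas of interest (AOIs).
AOIS = ["face", "hands", "background"]
TRANS = np.array([
    [0.6, 0.3, 0.1],   # from "face"
    [0.4, 0.4, 0.2],   # from "hands"
    [0.5, 0.2, 0.3],   # from "background"
])

def predict_next_aoi(current):
    """Most likely next AOI under a first-order Markov assumption."""
    i = AOIS.index(current)
    return AOIS[int(np.argmax(TRANS[i]))]
```

Given an observed gaze sequence, the row of the transition matrix for the current AOI directly yields the most probable next fixation target.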

Open Access Article Use of an Occlusion Mask for Veiling Glare Removal in HDR Images
J. Imaging 2018, 4(8), 100; https://doi.org/10.3390/jimaging4080100
Received: 17 May 2018 / Revised: 27 July 2018 / Accepted: 30 July 2018 / Published: 3 August 2018
PDF Full-text (33273 KB) | HTML Full-text | XML Full-text
Abstract
Optical systems in digital cameras present a limit during the acquisition of standard and High Dynamic Range Images (HDRI) due to the presence of veiling glare, an artifact caused by an unwanted spread of the source of light. In this paper, we analyze the state of the art of veiling glare removal in HDRI, paying particular attention to the method presented by Talvala et al. We then describe an algorithm for veiling glare removal based on the same occlusion mask, in order to study the benefits it provides in the HDRI acquisition process. Finally, we demonstrate the efficiency of the occlusion mask method for veiling glare removal without any post-production estimation and subtraction. Full article
(This article belongs to the Special Issue Image Enhancement, Modeling and Visualization)

Open Access Article Long-Term Monitoring of Crack Patterns in Historic Structures Using UAVs and Planar Markers: A Preliminary Study
J. Imaging 2018, 4(8), 99; https://doi.org/10.3390/jimaging4080099
Received: 30 June 2018 / Revised: 25 July 2018 / Accepted: 26 July 2018 / Published: 2 August 2018
PDF Full-text (2424 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes how Unmanned Aerial Vehicles (UAVs) may support the long-term monitoring of crack patterns in the context of architectural heritage preservation. In detail, this work includes: (i) a review of the state of the art of the most widely used techniques in the structural monitoring of ancient buildings; (ii) a description of the implemented methods, taking into account the requirements and constraints of the case study; (iii) the results of the experimentation carried out in the lab; and (iv) conclusions and future work. Full article
(This article belongs to the Special Issue New Trends in Image Processing for Cultural Heritage)

Open Access Article Evaluating the Performance of Structure from Motion Pipelines
J. Imaging 2018, 4(8), 98; https://doi.org/10.3390/jimaging4080098
Received: 26 June 2018 / Revised: 18 July 2018 / Accepted: 27 July 2018 / Published: 1 August 2018
PDF Full-text (13685 KB) | HTML Full-text | XML Full-text
Abstract
Structure from Motion (SfM) is a pipeline that allows three-dimensional reconstruction starting from a collection of images. A typical SfM pipeline comprises different processing steps, each of which tackles a different problem in the reconstruction. Each step can exploit different algorithms to solve the problem at hand, and thus many different SfM pipelines can be built. How to choose the SfM pipeline best suited for a given task is an important question. In this paper, we report a comparison of different state-of-the-art SfM pipelines in terms of their ability to reconstruct different scenes. We also propose an evaluation procedure that stresses the SfM pipelines using real datasets acquired with high-end devices as well as realistic synthetic datasets. To this end, we created a plug-in module for the Blender software to support the creation of synthetic datasets and the evaluation of SfM pipelines. The use of synthetic data allows us to easily obtain arbitrarily large and diverse datasets with, in theory, infinitely precise ground truth. Our evaluation procedure considers both the reconstruction errors and the estimation errors of the camera poses used in the reconstruction. Full article
(This article belongs to the Special Issue Image Enhancement, Modeling and Visualization)
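The camera-pose errors such an evaluation would compute can be sketched with a hypothetical helper; the paper's exact error metrics and alignment procedure are not specified here:

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) and translation error (Euclidean distance)
    between an estimated camera pose and its ground truth."""
    # The angle of the relative rotation is the geodesic distance on SO(3).
    R_rel = R_est.T @ R_gt
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(t_est - t_gt)
    return rot_err_deg, trans_err
```

With synthetic Blender scenes the ground-truth poses are known exactly, so per-camera errors like these can be aggregated over a whole reconstruction.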

Open Access Article A Comparative Study of Two State-of-the-Art Feature Selection Algorithms for Texture-Based Pixel-Labeling Task of Ancient Documents
J. Imaging 2018, 4(8), 97; https://doi.org/10.3390/jimaging4080097
Received: 28 June 2018 / Revised: 21 July 2018 / Accepted: 25 July 2018 / Published: 1 August 2018
PDF Full-text (13302 KB) | HTML Full-text | XML Full-text
Abstract
Recently, texture features have been widely used for historical document image analysis. However, few studies have focused exclusively on feature selection algorithms for this task. Indeed, an important need has emerged for feature selection in data mining and machine learning tasks, since it helps to reduce data dimensionality and to increase the performance of algorithms such as pixel classification. Therefore, in this paper we propose a comparative study of two conventional feature selection algorithms, the genetic algorithm and the ReliefF algorithm, using a classical pixel-labeling scheme based on analyzing and selecting texture features. The two assessed feature selection algorithms have been applied to a training set of the HBR dataset in order to deduce the most frequently selected texture features of each analyzed feature set. The evaluated feature sets consist of numerous state-of-the-art texture features (Tamura, local binary patterns, gray-level run-length matrix, auto-correlation function, gray-level co-occurrence matrix, Gabor filters, three-level Haar wavelet transform, three-level wavelet transform using the 3-tap Daubechies filter, and three-level wavelet transform using the 4-tap Daubechies filter). In our experiments, a public corpus of historical document images provided in the context of the historical book recognition contest (HBR2013 dataset: PRImA, Salford, UK) has been used. Qualitative and numerical experiments are given in order to provide a set of comprehensive guidelines on the strengths and weaknesses of each assessed feature selection algorithm according to the texture feature set used. Full article
(This article belongs to the Special Issue New Trends in Image Processing for Cultural Heritage)
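The intuition behind ReliefF, one of the two compared algorithms, can be sketched in a simplified two-class, single-neighbor form. This is a toy version; the full algorithm uses k nearest hits and misses per class and handles multi-class data:

```python
import numpy as np

def relieff_weights(X, y, n_iter=100, rng=None):
    """Simplified ReliefF: a feature's weight grows when it separates
    classes (large distance to the nearest miss) and shrinks when it
    varies within a class (large distance to the nearest hit)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)                         # sample a random instance
        same = [j for j in range(n) if j != i and y[j] == y[i]]
        diff = [j for j in range(n) if y[j] != y[i]]
        hit = min(same, key=lambda j: np.linalg.norm(X[i] - X[j]))
        miss = min(diff, key=lambda j: np.linalg.norm(X[i] - X[j]))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w
```

Features with high final weights (here, those that discriminate between text, graphics, and background pixels) would then be kept for the pixel-labeling stage.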

Open Access Article Measuring the Spatial Noise of a Low-Cost Eye Tracker to Enhance Fixation Detection
J. Imaging 2018, 4(8), 96; https://doi.org/10.3390/jimaging4080096
Received: 1 June 2018 / Revised: 5 July 2018 / Accepted: 25 July 2018 / Published: 28 July 2018
PDF Full-text (3363 KB) | HTML Full-text | XML Full-text
Abstract
The present study evaluates the quality of gaze data produced by a low-cost eye tracker (The Eye Tribe©, The Eye Tribe, Copenhagen, Denmark) in order to verify its suitability for scientific research. An integrated methodological framework, based on artificial-eye measurements and human eye-tracking data, is proposed for the implementation of the experimental process. The obtained results are used to remove the modeled noise through manual filtering and during the detection of fixations. The outcomes aim to serve as a robust reference for verifying the validity of low-cost solutions, as well as a guide for selecting appropriate fixation parameters for the analysis of experimental data recorded with this low-cost device. The results show higher deviation values for the real test persons than for the artificial eyes, but these are still acceptable for use in a scientific setting. Full article
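The link between measured spatial noise and fixation parameters can be illustrated with a simplified dispersion-threshold (I-DT style) detector, where the dispersion threshold would be chosen above the device's measured noise. This is a toy sketch, not the paper's actual procedure:

```python
def detect_fixations(gaze, max_dispersion, min_samples):
    """Group consecutive gaze samples into fixations (I-DT style).

    gaze: list of (x, y) samples; max_dispersion should exceed the
    tracker's spatial noise so that noise alone never splits a fixation.
    Returns (start_index, end_index) pairs for detected fixations.
    """
    fixations, start = [], 0
    for end in range(len(gaze)):
        window = gaze[start:end + 1]
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        # Dispersion = horizontal spread + vertical spread of the window.
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            if end - start >= min_samples:
                fixations.append((start, end - 1))
            start = end
    if len(gaze) - start >= min_samples:
        fixations.append((start, len(gaze) - 1))
    return fixations
```

If the threshold were set below the device's noise level, even a steady gaze would be fragmented into spurious short fixations, which is why characterizing the noise first matters.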
