
Table of Contents

J. Imaging, Volume 5, Issue 10 (October 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Underwater images captured at deep water sites where artificial lighting is employed often possess [...]
Open Access Article
Deep Learning of Fuzzy Weighted Multi-Resolution Depth Motion Maps with Spatial Feature Fusion for Action Recognition
J. Imaging 2019, 5(10), 82; https://doi.org/10.3390/jimaging5100082 - 21 Oct 2019
Abstract
Human action recognition (HAR) is an important yet challenging task. This paper presents a novel method. First, fuzzy weight functions are used in the computation of depth motion maps (DMMs), and motion information over multiple temporal lengths is also used. These features are referred to as fuzzy weighted multi-resolution DMMs (FWMDMMs). This formulation allows various aspects of individual actions to be emphasized and helps to characterise the importance of the temporal dimension, which is important for overcoming, e.g., variations in the time over which a single type of action might be performed. A deep convolutional neural network (CNN) motion model is created and trained to extract discriminative and compact features. Transfer learning is also used to extract spatial information from RGB and depth data using the AlexNet network. Different late fusion techniques are then investigated to fuse the deep motion model with the spatial network. The result is a spatial-temporal HAR model. The developed approach is capable of recognising both human actions and human–object interactions. Three public domain datasets are used to evaluate the proposed solution. The experimental results demonstrate the robustness of this approach compared with state-of-the-art algorithms.

Open Access Article
Automatic Inspection of Aeronautical Mechanical Assemblies by Matching the 3D CAD Model and Real 2D Images
J. Imaging 2019, 5(10), 81; https://doi.org/10.3390/jimaging5100081 - 19 Oct 2019
Abstract
In the aviation industry, automated inspection is essential for ensuring the quality of production, as it accelerates quality-control procedures for parts and mechanical assemblies. As a result, the demand for intelligent visual inspection systems that ensure high quality in production lines is increasing. In this work, we address a very common problem in quality control: verifying that the correct part is present and correctly positioned. We address the problem in two parts: first, informative viewpoints are selected automatically before the inspection process starts (offline preparation of the inspection); second, the images acquired from these viewpoints are processed automatically by matching them with information from 3D CAD models. We apply this inspection system to detecting defects in aeronautical mechanical assemblies, with the aim of checking whether all subparts are present and correctly mounted. The system can be used during manufacturing or maintenance operations. Its accuracy is evaluated on two kinds of platforms: an autonomous navigation robot and a handheld tablet. The experimental results show that our proposed approach is accurate and promising for industrial applications, with the possibility of real-time inspection.

Open Access Article
Tip Crack Imaging on Transparent Materials by Digital Holographic Microscopy
J. Imaging 2019, 5(10), 80; https://doi.org/10.3390/jimaging5100080 - 01 Oct 2019
Abstract
In this study, we propose a method for imaging tip cracks in transparent materials using digital holographic microscopy. More specifically, an optical system based on Mach–Zehnder interference, combined with an inverted microscope (Olympus CKX53), was used to image the tip crack of transparent Dammar varnish under thermal excitation. A series of holograms was captured and reconstructed to observe changes in the tip crack. The reconstructed holograms were also compared over time to compute temporal changes, revealing the crack propagation phenomena. The results show that Dammar varnish is sensitive to the ambient temperature. Our research demonstrates that digital holographic microscopy is a promising technique for detecting fine tip cracks and their propagation in transparent materials.

Open Access Article
A Contrast-Guided Approach for the Enhancement of Low-Lighting Underwater Images
J. Imaging 2019, 5(10), 79; https://doi.org/10.3390/jimaging5100079 - 01 Oct 2019
Abstract
Underwater images are often acquired in sub-optimal lighting conditions, particularly at great depths, where the absence of natural light demands the use of artificial lighting. Low-lighting images pose a challenge for both manual and automated analysis, since regions of interest can have low visibility. A new framework capable of significantly enhancing these images is proposed in this article. The framework is based on a novel dehazing mechanism that considers local contrast information in the input images and offers a solution to three common disadvantages of current single-image dehazing methods: oversaturation of radiance, lack of scale invariance, and creation of halos. A novel low-lighting underwater image dataset, OceanDark, is introduced to assist in the development and evaluation of the proposed framework. Experimental results and a comparison with other underwater-specific image enhancement methods show that the proposed framework can significantly improve visibility in low-lighting underwater images of different scales without creating undesired dehazing artifacts.

Open Access Article
Overview and Empirical Analysis of ISP Parameter Tuning for Visual Perception in Autonomous Driving
J. Imaging 2019, 5(10), 78; https://doi.org/10.3390/jimaging5100078 - 24 Sep 2019
Abstract
Image quality is a well-understood concept for human viewing applications, particularly in the multimedia space, but increasingly in an automotive context as well. The rise in prominence of autonomous driving and computer vision brings to the fore research on the impact of image quality on camera perception for tasks such as recognition, localization, and reconstruction. While the definition of "image quality" for computer vision may be ill-defined, what is clear is that the configuration of the image signal processing (ISP) pipeline is the key factor in controlling image quality for computer vision. This paper is partly a review and partly positional, with demonstrations of several preliminary results that are promising for future research. As such, we give an overview of what is an [...]

Open Access Article
Shape Similarity Measurement for Known-Object Localization: A New Normalized Assessment
J. Imaging 2019, 5(10), 77; https://doi.org/10.3390/jimaging5100077 - 23 Sep 2019
Abstract
This paper presents a new, normalized measure for assessing a contour-based object pose. For binary images, the algorithm enables supervised assessment of known-object recognition and localization. A performance measure is computed to quantify differences between a reference edge map and a candidate image, and the normalization makes the result of the pose assessment easy to interpret. Furthermore, the new measure is motivated by highlighting the limitations of existing metrics with respect to the main shape variations (translation, rotation, and scaling) and by showing that the proposed measure is more robust to them. Indeed, this measure can determine to what extent an object's shape differs from a desired position. In comparison with six other approaches, experiments performed on real images at different sizes/scales demonstrate the suitability of the new method for object-pose or shape-matching estimation.
(This article belongs to the Special Issue Soft Computing for Edge Detection)
