Special Issue "Computer Vision and Pattern Recognition"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 September 2017)

Special Issue Editor

Guest Editor
Prof. Dr. Cosimo Distante

Institute of Applied Sciences and Intelligent Systems "ScienceApp", Consiglio Nazionale delle Ricerche, c/o Dhitech Campus Universitario Ecotekne, via Monteroni sn 73100 Lecce, Italy
Interests: computer vision; pattern recognition; video surveillance; object tracking; deep learning; audience measurements; visual interaction; human robot interaction

Special Issue Information

Dear Colleagues,

In recent years, computer vision and pattern recognition have received a great deal of attention across a wide range of topics concerned with extracting structures or answers from video and image data, both spatially and temporally, by building mathematical models of the data that describe the relevant patterns to be localized and recognized.

The intent of this Special Issue is to collect the experiences of leading scientists, but also to serve as an assessment tool for people who are new to the world of computer vision and pattern recognition.

This Special Issue is intended to cover the following topics, but is not limited to them:

  • Deep learning techniques for object detection and classification
  • Human behavior analysis
  • Video surveillance and homeland security technologies
  • Medical imaging
  • Nondestructive testing
  • Visual question answering
  • Human/computer and human/robot interaction
  • Robot vision
  • Assistive computer vision technologies

Prof. Dr. Cosimo Distante
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Shape
  • Motion
  • Range
  • Matching and recognition
  • Feature extraction
  • Vision systems

Published Papers (6 papers)


Research

Open Access Article An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos
J. Imaging 2018, 4(2), 36; https://doi.org/10.3390/jimaging4020036
Received: 20 November 2017 / Revised: 29 January 2018 / Accepted: 1 February 2018 / Published: 7 February 2018
Cited by 1 | PDF Full-text (1495 KB) | HTML Full-text | XML Full-text
Abstract
Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

Open Access Article Partition and Inclusion Hierarchies of Images: A Comprehensive Survey
J. Imaging 2018, 4(2), 33; https://doi.org/10.3390/jimaging4020033
Received: 3 December 2017 / Revised: 22 January 2018 / Accepted: 25 January 2018 / Published: 1 February 2018
Cited by 1 | PDF Full-text (810 KB) | HTML Full-text | XML Full-text
Abstract
The theory of hierarchical image representations has been well-established in Mathematical Morphology, and provides a suitable framework to handle images through objects or regions taking into account their scale. Such approaches have increased in popularity and been favourably compared to treating individual image elements in various domains and applications. This survey paper presents the development of hierarchical image representations over the last 20 years using the framework of component trees. We introduce two classes of component trees, partitioning and inclusion trees, and describe their general characteristics and differences. Examples of hierarchies for each of the classes are compared, with the resulting study aiming to serve as a guideline when choosing a hierarchical image representation for any application and image domain.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

Open Access Article Baseline Fusion for Image and Pattern Recognition—What Not to Do (and How to Do Better)
J. Imaging 2017, 3(4), 44; https://doi.org/10.3390/jimaging3040044
Received: 18 July 2017 / Revised: 30 September 2017 / Accepted: 2 October 2017 / Published: 11 October 2017
PDF Full-text (1554 KB) | HTML Full-text | XML Full-text
Abstract
The ever-increasing demand for a reliable inference capable of handling unpredictable challenges of practical application in the real world has made research on information fusion of major importance; indeed, this challenge is pervasive in a whole range of image understanding tasks. In the development of the most common type—score-level fusion algorithms—it is virtually universally desirable to have as a reference starting point a simple and universally sound baseline benchmark to which newly developed approaches can be compared. One of the most pervasively used methods is that of weighted linear fusion. It has cemented itself as the default off-the-shelf baseline owing to its simplicity of implementation, interpretability, and surprisingly competitive performance across the widest range of application domains and information source types. In this paper I argue that, despite this track record, weighted linear fusion is not a good baseline on the grounds that there is an equally simple and interpretable alternative—namely, quadratic mean-based fusion—which is theoretically more principled and more successful in practice. I argue the former from first principles and demonstrate the latter using a series of experiments on a diverse set of fusion problems: classification using synthetically generated data, computer vision-based object recognition, arrhythmia detection, and fatality prediction in motor vehicle accidents. On all of the aforementioned problems and in all instances, the proposed fusion approach exhibits superior performance over linear fusion, often increasing class separation by several orders of magnitude.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
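The two baselines contrasted in this abstract can be sketched in a few lines. The snippet below is illustrative only: the function names are ours, and it assumes equal weights and match scores already normalised to [0, 1].

```python
import numpy as np

def linear_fusion(scores, weights=None):
    """Weighted linear (arithmetic-mean) fusion of per-source scores."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(scores.shape[-1], 1.0 / scores.shape[-1])
    return scores @ weights

def quadratic_mean_fusion(scores, weights=None):
    """Weighted quadratic-mean (RMS) fusion: large responses dominate."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(scores.shape[-1], 1.0 / scores.shape[-1])
    return np.sqrt((scores ** 2) @ weights)

# Two sources score three candidates in [0, 1].
s = np.array([[0.9, 0.1],   # one source is highly confident
              [0.5, 0.5],   # both sources lukewarm
              [0.1, 0.1]])  # both sources reject
print(linear_fusion(s))          # arithmetic means: 0.5, 0.5, 0.1
print(quadratic_mean_fusion(s))  # RMS: ~0.640, 0.5, 0.1
```

Note how the first two candidates are indistinguishable under linear fusion (both average to 0.5), while the quadratic mean preserves the confident source's response, which is the intuition behind the paper's argument.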

Open Access Article Enhancing Face Identification Using Local Binary Patterns and K-Nearest Neighbors
J. Imaging 2017, 3(3), 37; https://doi.org/10.3390/jimaging3030037
Received: 21 March 2017 / Revised: 28 August 2017 / Accepted: 29 August 2017 / Published: 5 September 2017
PDF Full-text (534 KB) | HTML Full-text | XML Full-text
Abstract
The human face plays an important role in our social interaction, conveying people’s identity. Using the human face as a key to security, biometric password technology has received significant attention in the past several years due to its potential for a wide variety of applications. Faces can exhibit many variations in appearance (aging, facial expression, illumination, inaccurate alignment and pose) which continue to cause poor identity recognition performance. The purpose of our research work is to provide an approach that contributes to resolving face identification issues with large variations of parameters such as pose, illumination, and expression. For provable outcomes, we combined two algorithms: (a) the robust local binary pattern (LBP), used for facial feature extraction; (b) the k-nearest neighbor (K-NN) algorithm for image classification. Our experiments have been conducted on the CMU PIE (Carnegie Mellon University Pose, Illumination, and Expression) face database and the LFW (Labeled Faces in the Wild) dataset. The proposed identification system shows high performance and also provides successful face similarity measures based on the extracted features.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
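The LBP-plus-K-NN pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the basic 3×3 LBP operator, a single global 256-bin histogram as the face descriptor, and L1 distance for the K-NN step, whereas practical systems typically use uniform patterns computed over a grid of blocks and a chi-square distance.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern codes for a grayscale image."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # interior pixels (centers)
    # Eight neighbours, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised LBP histogram used as the face descriptor."""
    h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
    return h / h.sum()

def knn_identify(probe, gallery, labels, k=1):
    """K-NN vote over L1 distances between LBP histograms."""
    d = np.abs(np.asarray(gallery) - lbp_histogram(probe)).sum(axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique([labels[i] for i in nearest], return_counts=True)
    return vals[np.argmax(counts)]
```

With `gallery` holding one precomputed `lbp_histogram` per enrolled face and `labels` the matching identities, `knn_identify(face, gallery, labels)` returns the majority identity among the `k` closest descriptors.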

Open Access Article Pattern Reconstructability in Fully Parallel Thinning
J. Imaging 2017, 3(3), 29; https://doi.org/10.3390/jimaging3030029
Received: 16 June 2017 / Revised: 13 July 2017 / Accepted: 15 July 2017 / Published: 19 July 2017
Cited by 1 | PDF Full-text (29502 KB) | HTML Full-text | XML Full-text
Abstract
It is a challenging task to perform pattern reconstruction from a unit-width skeleton obtained by a parallel thinning algorithm. The biased skeleton yielded by a fully-parallel thinning algorithm, which usually results from the so-called hidden deletable points, makes pattern reconstruction difficult. In order to make a fully-parallel thinning algorithm pattern-reconstructable, a newly-defined reconstructable skeletal pixel (RSP) including a thinning flag, iteration count, and reconstructable structure is proposed and applied in the thinning iteration to obtain a skeleton table representing the resultant thin line. Based on the iteration count and reconstructable structure associated with each skeletal pixel in the skeleton table, the pattern can be reconstructed by means of dilating and uniting operations. Embedding a conventional fully-parallel thinning algorithm into the proposed approach, the pattern may be over-reconstructed due to the influence of a biased skeleton. A simple process of removing hidden deletable points (RHDP) in the thinning iteration is thus presented to reduce the effect of the biased skeleton. Three well-known fully-parallel thinning algorithms are used for experiments. The performances investigated by the measurement of reconstructability (MR), the number of iterations (NI), and the measurement of skeleton deviation (MSD) confirm the feasibility of the proposed pattern reconstruction approach with the assistance of the RHDP process.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
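The dilate-and-unite step mentioned in the abstract can be sketched as follows, under the simplifying assumption that each skeletal pixel carries only its thinning iteration count and that dilation uses a square structuring element; the paper's RSP structure is richer than this:

```python
import numpy as np

def reconstruct_from_skeleton(shape, skeletal_pixels):
    """Reconstruct a binary pattern as the union of dilated skeletal pixels.

    skeletal_pixels: iterable of (row, col, r) triples, where r is the
    iteration count at which the pixel survived thinning; each pixel is
    dilated by a (2r+1) x (2r+1) square and the results are united.
    """
    out = np.zeros(shape, dtype=bool)
    for y, x, r in skeletal_pixels:
        y0, y1 = max(0, y - r), min(shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(shape[1], x + r + 1)
        out[y0:y1, x0:x1] = True  # union with this pixel's dilation
    return out
```

A single skeletal pixel recorded at iteration 1 thus reconstructs a 3×3 block around it; over-reconstruction from a biased skeleton corresponds to spurious entries in `skeletal_pixels`, which the paper's RHDP process removes.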

Open Access Article A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens
J. Imaging 2017, 3(2), 19; https://doi.org/10.3390/jimaging3020019
Received: 27 February 2017 / Revised: 6 April 2017 / Accepted: 31 May 2017 / Published: 3 June 2017
Cited by 2 | PDF Full-text (11384 KB) | HTML Full-text | XML Full-text
Abstract
The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method to provide multi-projector calibration on analytically defined screens, where a sample is shown for a virtual reality Formula 1 simulator that has a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system—which is currently installed in a shopping mall in Spain—has been tested by different users.
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
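As background for the homography-based family of methods the abstract mentions, a minimal Direct Linear Transform (DLT) estimate of a planar homography from point correspondences looks like this; it is a generic sketch, not the calibration procedure proposed in the paper:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: fit H (3x3, up to scale) so that dst ~ H @ src.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence gives two linear constraints on the 9 entries of H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A: the right singular vector with smallest
    # singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    p = np.c_[np.asarray(pts, dtype=float), np.ones(len(pts))]
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

A production implementation would first normalise the points (Hartley normalisation) and combine many correspondences with robust estimation, as OpenCV's `cv2.findHomography` does.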
