Special Issue "Deep Image Semantic Segmentation and Recognition"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 November 2020.

Special Issue Editors

Prof. Dr. Aleš Jaklič
Guest Editor
Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
Interests: computer vision
Prof. Dr. Peter Peer
Guest Editor
Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
Interests: biometrics; computer vision
Prof. Dr. Radim Burget
Guest Editor
Department of Telecommunications, Brno University of Technology, Brno, Czech Republic
Interests: big data; deep learning; computer vision
Prof. Dr. Fran Bellas
Guest Editor
CITIC Research Center, University of A Coruña, A Coruña, Spain
Interests: robotics; cognitive robotics; evolutionary robotics; educational robotics; computer vision

Special Issue Information

Dear Colleagues,

Recent advances in hardware and deep neural network architectures, together with the availability of large image databases, have spurred many new research directions in computer vision, including detection, segmentation, semantics extraction, and recognition. The motivation for these research efforts stems from practical applications ranging from autonomous driving to robotics in agriculture, and from medical image analysis and biometrics to geosensing, along with many other application areas that will benefit from significant improvements in the performance of segmentation and recognition algorithms based on deep neural networks. The aim of this Special Issue is to gather state-of-the-art research and to provide practitioners with a broad overview of suitable deep neural network architectures and application areas, together with objective performance metrics. We welcome well-structured manuscripts that clearly illustrate their background and novelty. We also recommend that authors make their source code, databases, models, and architectures publicly available, and that they submit multimedia with each manuscript, as this significantly increases the visibility and citation of publications.

Prof. Dr. Aleš Jaklič,
Prof. Dr. Peter Peer,
Prof. Dr. Radim Burget,
Prof. Dr. Fran Bellas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • deep learning
  • detection
  • segmentation
  • recognition
  • reconstruction
  • grouping
  • semantics
  • verification
  • identification

Published Papers (1 paper)

Research

Open Access Article
LSUN-Stanford Car Dataset: Enhancing Large-Scale Car Image Datasets Using Deep Learning for Usage in GAN Training
Appl. Sci. 2020, 10(14), 4913; https://doi.org/10.3390/app10144913 - 17 Jul 2020
Abstract
Currently, there is no adequate publicly available dataset that could be used for training Generative Adversarial Networks (GANs) on car images. All available car datasets differ in noise, pose, and zoom levels. The objective of this work was therefore to create an improved car image dataset better suited for GAN training. To improve GAN performance, we coupled the LSUN and Stanford car datasets. The merged dataset was then pruned to adjust zoom levels and reduce image noise. This process resulted in fewer images available for training, though of higher quality. The pruned dataset was evaluated by training StyleGAN with its original settings. Pruning the combined LSUN and Stanford datasets resulted in 2,067,710 car images with less noise and better-adjusted zoom levels. Training StyleGAN on the LSUN-Stanford car dataset outperformed training on the LSUN dataset alone by 3.7% in terms of the Fréchet Inception Distance (FID). The results indicate that the proposed LSUN-Stanford car dataset is more consistent and better suited for training GAN neural networks than other currently available large car datasets.
(This article belongs to the Special Issue Deep Image Semantic Segmentation and Recognition)
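
The improvement above is reported in terms of the Fréchet Inception Distance (FID). As an illustrative aside only (this is not the authors' evaluation code), the sketch below shows how FID between a set of real and a set of generated images is commonly computed with the torchmetrics library; the random placeholder tensors and batch size are assumptions made for the example, and the metric requires the torchmetrics image extras (torch-fidelity) to be installed.

    # Minimal FID sketch (illustrative only, not the authors' evaluation code).
    # Assumes PyTorch and torchmetrics[image] are installed; the random uint8
    # tensors stand in for batches of real and generated car images (N, 3, H, W).
    import torch
    from torchmetrics.image.fid import FrechetInceptionDistance

    real_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
    fake_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

    fid = FrechetInceptionDistance(feature=2048)  # Inception-v3 pooling features
    fid.update(real_images, real=True)            # accumulate real-image statistics
    fid.update(fake_images, real=False)           # accumulate generated-image statistics
    print("FID:", fid.compute().item())           # lower is better

With normalize=True in the constructor, float images scaled to [0, 1] can be passed instead of uint8 tensors. A lower FID indicates that the generated image distribution is closer to the real one.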