Special Issue "Object Detection using Deep Learning for Autonomous Intelligent Robots"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2019

Special Issue Editor

Guest Editor
Prof. Dr. António J. R. Neves

Department of Electronics, Telecommunications and Informatics, University of Aveiro, Portugal
Interests: signal processing; image processing; object detection; robotics

Special Issue Information

Dear Colleagues,

Autonomous intelligent robots are dynamic systems consisting of an electronic controller coupled to a mechanical body, and they require an adequate sensory system to perceive the environment in which they operate. Digital cameras are currently among the most commonly used sensors.

As these robots move towards more complex environments and applications, image-understanding algorithms that allow more precise and detailed object recognition become crucial. In this context, in addition to classifying images, it is also necessary to precisely estimate the class and location of the objects contained within them, a problem known as object detection.

The most important advances in object detection have been achieved through improvements in object representation and machine learning models. In recent years, deep neural networks (DNNs) have emerged as a powerful machine learning model: these deep architectures can learn powerful object representations without the need to manually design features.

The use of deep learning is usually associated with high-complexity processing systems, which is a challenge when considering its use in autonomous intelligent robots. However, recent advances in single-board computers and networks allow these technologies to run in real time on such intelligent systems.

The main aim of this Special Issue is to present novel approaches and results focusing on deep-learning approaches for the vision systems of intelligent robots. Contributions that explore either on-board implementations or distributed vision systems with modules running remotely from the robot are welcome.

Prof. Dr. António J. R. Neves
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can use the online submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Neural networks
  • Object detection
  • Image processing
  • Real-time systems
  • Autonomous robots
  • Intelligent robots

Published Papers (1 paper)


Research

Open Access Article: Estimation of the Lateral Distance between Vehicle and Lanes Using Convolutional Neural Network and Vehicle Dynamics
Appl. Sci. 2018, 8(12), 2508; https://doi.org/10.3390/app8122508
Received: 25 October 2018 / Revised: 25 November 2018 / Accepted: 4 December 2018 / Published: 6 December 2018
Abstract
With the aim of obtaining an accurate lateral distance between the vehicle and the lane boundaries during road tests of Lane Departure Warning and Lane Keeping Assist systems, this study proposes a recognition model, called LatDisLanes, that estimates the distance directly by training a deep neural network. The model obtains the distance from two down-facing cameras without data pre-processing or post-processing. Recognition accuracy is degraded by the inclination angle of the vehicle, but this bias is reduced using a proposed dynamic correction model. Furthermore, since training the model requires a large number of labeled images, an image synthesis algorithm based on Image Quilting is proposed. Experiments on the test data set show that the accuracy of LatDisLanes is 94.78% and 99.94% for allowable errors of 0.46 cm and 2.3 cm, respectively, when the vehicle runs smoothly. Larger errors occur when the inclination angle exceeds 3°, but they are reduced by the dynamic correction model.
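The paper's dynamic correction model is not reproduced here. As a rough illustration only of why body inclination biases a down-facing camera's lateral reading, and how a first-order geometric correction can remove that bias, consider the following sketch. The camera height, roll angle, and function names are hypothetical assumptions, not taken from the paper:

```python
import math

def correct_for_inclination(d_raw_mm, camera_height_mm, roll_deg):
    """Illustrative first-order correction: when the vehicle body rolls by
    roll_deg, a down-facing camera mounted camera_height_mm above the road
    shifts its ground footprint sideways by roughly h * tan(roll), which
    biases the raw lateral-distance reading. Subtracting that shift removes
    the first-order bias."""
    shift_mm = camera_height_mm * math.tan(math.radians(roll_deg))
    return d_raw_mm - shift_mm

# Example: true lateral distance 300 mm, camera 500 mm above the road,
# body rolled by 3 degrees (the threshold mentioned in the abstract).
true_d = 300.0
height = 500.0
roll = 3.0

# What an uncorrected down-facing camera would report under this roll:
biased = true_d + height * math.tan(math.radians(roll))  # ~326 mm
corrected = correct_for_inclination(biased, height, roll)  # back to ~300 mm
```

In this simplified geometry the correction is exact; in practice the actual bias depends on camera calibration and suspension dynamics, which is presumably why the paper couples the network with a vehicle-dynamics-based correction model rather than a fixed formula.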
