Special Issue "Intelligent Robotics"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 December 2019.

Special Issue Editors

Guest Editor
Prof. Dr. Nuno Lau
Universidade de Aveiro, 3810-193 Aveiro, Portugal
Interests: intelligent robotics; artificial intelligence; simulation
Guest Editor
Prof. Dr. Luis Paulo Reis
Universidade do Porto, Praça de Gomes Teixeira, 4099-002 Porto, Portugal
Interests: intelligent robotics; artificial intelligence; multiagent systems
Guest Editor
Prof. Dr. Rui P. Rocha
Universidade de Coimbra, 3004-531 Coimbra, Portugal
Interests: multirobot systems; cooperative perception; autonomous robots

Special Issue Information

Dear Colleagues,

Robotics is a very important domain for Artificial Intelligence (AI) research. From the beginning of AI, robotics has played an important role, both in providing real problems that require intelligent behavior and in enabling AI to perform tasks that involve physical interaction with the real world. However, the Robotics and AI communities have often worked separately, with little sharing of developments between the two fields.

Intelligent Robotics, where AI and Robotics merge to provide better solutions, is an essential area when designing robots or teams of robots that perform complex tasks in environments shared with humans. These robots must be endowed with high levels of adaptability to changes in the task or the environment, so that they can improve their performance over their lifetime and interact with humans in a richer and more natural way. In some cases, these robots will be mobile, work in teams, and be connected to a larger ecosystem that, in addition to robots, encompasses other intelligent networked devices, thus scaling to arbitrarily large distributed intelligent and pervasive systems. Simulation and modeling can play an important role in the design and validation of such systems. Intelligent robotics can improve many aspects of human life as important as health, mobility, work, education, recreation, and domestic tasks.

This Special Issue intends to provide a forum for the dissemination of work that exploits this synergy between AI and Robotics to solve complex tasks. Recent developments in both fields, together with hardware advances that give robots and other intelligent physical agents greater computing power and more capable sensors and actuators, provide the grounding for innovative solutions at the intersection of AI and Intelligent Robotics, which are the focus of this Special Issue.

Prof. Dr. Nuno Lau
Prof. Dr. Luis Paulo Reis
Prof. Dr. Rui P. Rocha
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Autonomous robots
  • Cognitive robotics
  • Computer vision
  • Distributed multirobot or multiagent coordination
  • Embodied multiagent systems
  • Evolutionary robotics and swarm robotics
  • Humanoid robots
  • Human–robot interaction
  • Modeling and simulation of complex robots
  • Multirobot systems
  • Robot behavior engineering
  • Robot learning
  • Robot planning
  • SLAM, navigation and exploration
  • Social and service robots

Published Papers (4 papers)


Research

Open Access Article
Target Points Tracking Control for Autonomous Cleaning Vehicle Based on the LSTM Network
Appl. Sci. 2019, 9(18), 3806; https://doi.org/10.3390/app9183806 - 11 Sep 2019
Abstract
To track the desired path points efficiently and accurately, autonomous cleaning vehicles have to adapt their behavior according to the perceived environmental information. This paper proposes a target point tracking control algorithm based on the Long Short-Term Memory (LSTM) network, which generates the speed and yaw rate needed to reach the target point in real time. The target point is selected using a parameter called the foresight distance, which is derived by fuzzy control from the speed and yaw rate of the vehicle at the current point. The effectiveness of the proposed algorithm is illustrated by simulation and field experiments. Compared with other classical algorithms, this algorithm tracks point sequences on straight paths and paths of varying curvature more accurately. The field experiment indicates that the proposed controller follows the pre-defined path points efficiently; furthermore, it keeps the autonomous cleaning vehicle running smoothly on a path subject to bounded disturbances. The distance errors during tracking meet the actual requirements of the cleaning vehicle.
(This article belongs to the Special Issue Intelligent Robotics)
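The paper's implementation is not reproduced here, but the core idea of mapping a short history of perceived states to speed and yaw-rate commands with a recurrent network is easy to sketch. The following minimal PyTorch example is an illustration under assumed inputs (relative target position plus current speed and yaw rate); the class name and dimensions are hypothetical, and the fuzzy-control step that derives the foresight distance is omitted.

```python
# Minimal sketch of an LSTM-based target-point tracking controller.
# Names, dimensions, and inputs are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class TrackingLSTM(nn.Module):
    """Maps a short history of vehicle/target states to (speed, yaw rate)."""
    def __init__(self, input_dim=4, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # outputs [speed, yaw_rate]

    def forward(self, x):
        # x: (batch, time, input_dim), e.g. [dx, dy, speed, yaw_rate] per step
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # command for the current time step

# One control step: feed the last 10 perceived states, read back a command.
# Selecting the target point via the fuzzy foresight distance would happen
# upstream of this call.
model = TrackingLSTM()
history = torch.randn(1, 10, 4)          # placeholder sensor history
speed_cmd, yaw_rate_cmd = model(history)[0]
```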

Open Access Article
3-D Point Cloud Registration Using Convolutional Neural Networks
Appl. Sci. 2019, 9(16), 3273; https://doi.org/10.3390/app9163273 - 9 Aug 2019
Abstract
This paper develops a registration architecture for estimating the relative pose, including the rotation and the translation, of an object with respect to a model in 3-D space, based on 3-D point clouds captured by a 3-D camera. In particular, it addresses the time-consuming nature of 3-D point cloud registration, which is essential for closed-loop industrial automated assembly systems that demand accurate pose estimation within a fixed time. Firstly, two different descriptors are developed to extract coarse and detailed features of the point cloud data sets, creating training data sets that cover diversified orientations. Secondly, to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by the translation estimation based on computing average values. Because the second CNN model covers a smaller range of orientation uncertainty than the full range covered by the first, it can estimate the orientation of the 3-D point cloud precisely. Finally, the performance of the proposed algorithm is validated by experiments in comparison with baseline methods: it significantly reduces the estimation time while maintaining high precision.
(This article belongs to the Special Issue Intelligent Robotics)
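The coarse-to-fine idea described above can be illustrated with two small chained regressors: the first estimates the orientation over the full range, the second refines the residual over a narrower one. The sketch below is an assumption-laden stand-in; the paper's descriptors, network shapes, and angle parameterization are not reproduced.

```python
# Sketch of coarse-to-fine rotation estimation with two consecutive CNNs.
import torch
import torch.nn as nn

def make_regressor():
    # Tiny CNN over a 2-D feature image of the point cloud (assumed encoding).
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 3),  # e.g. Euler angles (roll, pitch, yaw)
    )

coarse_net = make_regressor()   # trained over the full orientation range
fine_net = make_regressor()     # trained over a narrow residual range

feat = torch.randn(1, 1, 64, 64)   # placeholder descriptor image
coarse = coarse_net(feat)          # rough orientation estimate
# In the real pipeline the data cloud would be de-rotated by `coarse` and
# re-encoded before the second stage; here we reuse `feat` for brevity.
refined = coarse + fine_net(feat)  # residual refinement of the orientation
# Translation can then be recovered from the difference of the clouds'
# centroids ("average values" in the abstract).
```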

Open Access Article
A Low Overhead Mapping Scheme for Exploration and Representation in the Unknown Area
Appl. Sci. 2019, 9(15), 3089; https://doi.org/10.3390/app9153089 - 31 Jul 2019
Abstract
The grid map, which represents area information with a large number of cells, is a widely used mapping scheme for mobile robots and simultaneous localization and mapping (SLAM) processes. However, the tremendous number of cells required for a detailed map representation imposes overheads on memory space and path computation in mobile robots. To overcome this overhead, this study proposes a new low-overhead mapping scheme, which the authors call the Rmap, that represents an area with rectangles of variable size instead of the fixed cells of a grid map. The scheme also provides an exploration path for obtaining new information about the unknown area. This study evaluated the performance of the Rmap in real environments as well as in simulation. The experimental results show that the Rmap can reduce the overhead of a grid map; in one of the experimental environments, the Rmap represented an area with 85% less memory than the grid map. The Rmap also showed better coverage performance than previous algorithms.
(This article belongs to the Special Issue Intelligent Robotics)
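The abstract's central idea, covering free space with a few variable-size rectangles instead of many fixed cells, can be illustrated with a simple greedy decomposition. The sketch below is a toy stand-in, not the Rmap construction itself; the paper's decomposition and exploration strategy are not reproduced.

```python
# Sketch: compress a boolean occupancy grid into axis-aligned rectangles.
import numpy as np

def grid_to_rectangles(grid):
    """Cover all free cells (True) with rectangles (row, col, height, width)."""
    free = grid.copy()
    rows, cols = free.shape
    rects = []
    for r in range(rows):
        for c in range(cols):
            if not free[r, c]:
                continue
            w = 1                                  # grow right along the row
            while c + w < cols and free[r, c + w]:
                w += 1
            h = 1                                  # grow down, full strips only
            while r + h < rows and free[r + h, c:c + w].all():
                h += 1
            free[r:r + h, c:c + w] = False         # mark the block as covered
            rects.append((r, c, h, w))
    return rects

grid = np.zeros((100, 100), dtype=bool)
grid[10:60, 20:90] = True                          # one 50x70 free region
print(grid_to_rectangles(grid))                    # [(10, 20, 50, 70)]
# One 4-tuple stands in for 3500 grid cells, which is the kind of memory
# saving the Rmap targets.
```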

Open Access Article
Deep Homography Estimation and Its Application to Wall Maps of Wall-Climbing Robots
Appl. Sci. 2019, 9(14), 2908; https://doi.org/10.3390/app9142908 - 20 Jul 2019
Abstract
When locating wall-climbing robots with vision-based methods, locating and controlling the robot in the pixel coordinates of the wall map is an effective alternative that eliminates the need to calibrate the internal and external parameters of the camera. The estimation accuracy of the homography matrix between the camera image and the wall map directly impacts the pixel positioning accuracy of the wall-climbing robot in the wall map. In this study, we focused on homography estimation between the camera image and the wall map. We proposed HomographyFpnNet, which obtains a smaller homography estimation error for a center-aligned image pair than the state of the art. For non-center-aligned image pairs, the proposed hierarchical HomographyFpnNet significantly outperforms a method based on hand-crafted features combined with Random Sample Consensus (RANSAC). Experiments with a trained three-stage hierarchical HomographyFpnNet model on wall images of climbing robots also achieved a small mean corner pixel error, demonstrating its potential for estimating the homography between the wall map and camera images. The three-stage model has an average processing time of 10.8 ms on a GPU, a real-time speed that satisfies the requirements of wall-climbing robots.
(This article belongs to the Special Issue Intelligent Robotics)
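HomographyFpnNet itself is not available in this listing, but the 4-point parameterization widely used in deep homography estimation, which this line of work builds on, is easy to sketch: a CNN regresses the displacements of the four patch corners, and the full 3x3 matrix is recovered with a standard solver. The network below is a hypothetical placeholder, not the authors' model.

```python
# Sketch of 4-point deep homography estimation (placeholder network).
import cv2
import numpy as np
import torch
import torch.nn as nn

net = nn.Sequential(                 # stand-in for a trained model
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(32 * 64, 8),           # 4 corners x (dx, dy)
)

h = w = 128
pair = torch.randn(1, 2, h, w)       # stacked grayscale (camera, wall map) patch
offsets = net(pair).detach().numpy().reshape(4, 2).astype(np.float32)

corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
H = cv2.getPerspectiveTransform(corners, corners + offsets)
print(H)                             # 3x3 homography, camera patch -> map patch
```

A hierarchical variant along the lines of the three-stage model would repeat this regression on patches warped by the previous stage's estimate, narrowing the residual displacement each time.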
