
AI Based Autonomous Robots

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (20 August 2022) | Viewed by 13850

Special Issue Editor


Dr. Jesus Martínez-Gómez
Guest Editor
Computer Systems Department, University of Castilla-La Mancha, Campus Universitario s/n, 02071 Albacete, Spain
Interests: robotics; computer vision; semantic localization; 3D processing

Special Issue Information

Dear Colleagues,

The development of autonomous robots requires proper management of the input data to make the right decisions. The data must be processed to extract and store relevant information so that the robot can perform autonomous actions based on the acquired knowledge. Several topics must be addressed, such as planning, localization, navigation, mapping, and multimodal interaction, and in recent years, some of the most promising proposals have relied on the latest artificial intelligence technologies. This Special Issue on AI-Based Autonomous Robots encourages researchers to present proposals related to robotics from an artificial intelligence perspective. Specifically, we are looking for solutions that provide mobile robots with the capability to behave autonomously.

Dr. Jesus Martínez-Gómez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotics
  • computer vision
  • localization
  • planning
  • human-robot interaction
  • 3D processing
  • artificial intelligence
  • machine learning
  • deep learning
  • navigation
  • mapping

Published Papers (5 papers)


Research

27 pages, 7126 KiB  
Article
A Generalized Laser Simulator Algorithm for Mobile Robot Path Planning with Obstacle Avoidance
by Aisha Muhammad, Mohammed A. H. Ali, Sherzod Turaev, Rawad Abdulghafor, Ibrahim Haruna Shanono, Zaid Alzaid, Abdulrahman Alruban, Rana Alabdan, Ashit Kumar Dutta and Sultan Almotairi
Sensors 2022, 22(21), 8177; https://doi.org/10.3390/s22218177 - 25 Oct 2022
Cited by 3 | Viewed by 2418
Abstract
This paper develops a new mobile robot path planning algorithm, called the generalized laser simulator (GLS), for autonomously navigating mobile robots in the presence of static and dynamic obstacles. The algorithm enables a mobile robot to identify a feasible path while finding the target and avoiding obstacles in complex regions. An optimal path between the start and target points is found by forming a wave of points in all directions towards the target position, considering the target-minimum and border-maximum distance principles. The algorithm selects the minimum path from the candidate points to the target while avoiding obstacles. Obstacle borders are treated as the environment's borders for static obstacle avoidance; once dynamic obstacles appear in front of the GLS waves, the system detects them as new dynamic obstacle borders. Several experiments were carried out to validate the effectiveness and practicality of the GLS algorithm, including path-planning experiments in the presence of obstacles in a complex dynamic environment. The findings indicate that the robot could successfully find the correct path while avoiding obstacles. The proposed method is compared with other popular methods in terms of speed and path length in both real and simulated environments. According to the results, the GLS algorithm outperformed the original laser simulator (LS) method in path length and success rate. With the all-direction border scan, it outperforms the A-star (A*) and PRM algorithms and provides safer and shorter paths. Furthermore, the path planning approach was validated for local planning in simulation and real-world tests, in which the proposed method produced the best path compared with the original LS algorithm.
(This article belongs to the Special Issue AI Based Autonomous Robots)
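The full GLS algorithm is not reproduced here, but the underlying wavefront idea it generalizes, expanding candidate points outward and tracing the shortest obstacle-free route back, can be sketched on an occupancy grid. This is a simplified illustration only, not the authors' implementation; the grid encoding, 4-connected neighbourhood, and function name are assumptions:

```python
from collections import deque

def wavefront_path(grid, start, goal):
    """Breadth-first wavefront expansion on an occupancy grid.

    grid: 2D list, 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}      # remembers how each cell was reached
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk parent pointers back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None  # no obstacle-free path exists
```

Because the wave expands one cell per step in all directions, the first time it reaches the goal it has found a shortest grid path, which is the property GLS-style planners build on.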

17 pages, 1689 KiB  
Article
Leveraging Expert Demonstration Features for Deep Reinforcement Learning in Floor Cleaning Robot Navigation
by Reinis Cimurs and Emmanuel Alejandro Merchán-Cruz
Sensors 2022, 22(20), 7750; https://doi.org/10.3390/s22207750 - 12 Oct 2022
Cited by 1 | Viewed by 1514
Abstract
In this paper, a Deep Reinforcement Learning (DRL)-based approach is presented for learning mobile cleaning robot navigation commands that leverages experience from expert demonstrations. First, expert demonstrations of robot motion trajectories are collected in simulation in the cleaning robot domain. The relevant motion features, with regard to the distance to obstacles and the heading difference towards the navigation goal, are extracted. Each feature weight is optimized with respect to the collected data, and the obtained values are assumed to represent the optimal motion of the expert navigation. A reward function is created based on the feature values to train a policy with semi-supervised DRL, where an immediate reward is calculated based on the closeness to the expert navigation. The presented results show the viability of this approach with regard to robot navigation, as well as the reduced training time.
(This article belongs to the Special Issue AI Based Autonomous Robots)
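The shaped reward described above, an immediate reward that grows as the agent's motion features approach the values observed in expert demonstrations, might look roughly like the following. This is a minimal sketch; the specific features, the weighting, and the negative-distance form are assumptions, not the authors' exact formulation:

```python
import math

def demo_reward(agent_features, expert_features, weights):
    """Immediate reward: negative weighted distance between the agent's
    motion features (e.g. obstacle clearance, heading error) and the
    corresponding values extracted from expert demonstrations.

    The closer the agent's behaviour is to the expert's, the higher
    (less negative) the reward, with 0 as the maximum.
    """
    dist = math.sqrt(sum(w * (a - e) ** 2
                         for w, a, e in zip(weights,
                                            agent_features,
                                            expert_features)))
    return -dist
```

A policy trained against such a reward is pushed, step by step, towards the feature statistics of the demonstrations, which is what shortens training compared with learning from a sparse goal-reaching reward alone.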

25 pages, 15281 KiB  
Article
GR-ConvNet v2: A Real-Time Multi-Grasp Detection Network for Robotic Grasping
by Sulabh Kumra, Shirin Joshi and Ferat Sahin
Sensors 2022, 22(16), 6208; https://doi.org/10.3390/s22166208 - 18 Aug 2022
Cited by 10 | Viewed by 3910
Abstract
We propose a dual-module robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from an n-channel image of the scene. We present an improved version of the Generative Residual Convolutional Neural Network (GR-ConvNet v2) model that can generate robust antipodal grasps from n-channel image input at real-time speeds (20 ms). We evaluated the proposed model architecture on three standard datasets and achieved new state-of-the-art accuracies of 98.8%, 95.1%, and 97.4% on the Cornell, Jacquard, and Graspnet grasping datasets, respectively. Empirical results show that our model significantly outperformed prior work under a stricter IoU-based grasp detection metric. We conducted a suite of tests in simulation and the real world on a diverse set of previously unseen objects with adversarial geometry and household items. We demonstrate the adaptability of our approach by directly transferring the trained model to a 7-DoF robotic manipulator, with grasp success rates of 95.4% and 93.0% on novel household and adversarial objects, respectively. Furthermore, we validate the generalization capability of our pixel-wise grasp prediction model by evaluating it on the complex Ravens-10 benchmark tasks, some of which require closed-loop visual feedback for multi-step sequencing.
(This article belongs to the Special Issue AI Based Autonomous Robots)
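Pixel-wise grasp prediction networks of this kind typically emit per-pixel quality, angle, and width maps, from which the best grasp is decoded at the highest-quality pixel. A minimal decoding sketch follows; the map names and output format are assumptions for illustration, not GR-ConvNet v2's actual interface:

```python
import numpy as np

def decode_grasp(quality, angle, width):
    """Pick the best grasp from pixel-wise prediction maps.

    quality, angle, width: HxW arrays holding, per pixel, the grasp
    confidence, the gripper rotation, and the gripper opening.
    Returns the grasp centre, rotation, and opening at the pixel
    with the highest predicted quality.
    """
    y, x = np.unravel_index(np.argmax(quality), quality.shape)
    return {"x": int(x), "y": int(y),
            "theta": float(angle[y, x]),
            "width": float(width[y, x]),
            "score": float(quality[y, x])}
```

In a full pipeline the (x, y) pixel would then be back-projected through the camera intrinsics to a 3D grasp pose for the manipulator.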

25 pages, 9636 KiB  
Article
Real-Time People Re-Identification and Tracking for Autonomous Platforms Using a Trajectory Prediction-Based Approach
by Alexandra Ștefania Ghiță and Adina Magda Florea
Sensors 2022, 22(15), 5856; https://doi.org/10.3390/s22155856 - 5 Aug 2022
Cited by 3 | Viewed by 2097
Abstract
Currently, the importance of autonomously operating devices is rising with the increasing number of applications that run on robotic platforms or self-driving cars. The context of social robotics assumes that robotic platforms operate autonomously in environments where people perform their daily activities. The ability to re-identify the same people across a sequence of images is a critical component of meaningful human-robot interaction. Considering the quick reactions required of a self-driving car for safety reasons, accurate real-time tracking and people-trajectory prediction are mandatory. In this paper, we introduce a real-time people re-identification system based on a trajectory prediction method. We tackle the problem of trajectory prediction by introducing a system that combines semantic information from the environment with social influence from the other participants in the scene in order to predict the motion of each individual. We evaluated the system in two case studies: social robotics and autonomous driving. In the context of social robotics, we integrated the proposed re-identification system as a module into the AMIRO framework, which is designed for social robotic applications and assistive care scenarios. We performed multiple experiments to evaluate the performance of our proposed method, considering both the trajectory prediction component and the person re-identification system. We assessed the behaviour of our method on existing datasets and on real-time acquired data to obtain a quantitative evaluation and a qualitative analysis of the system. We report an improvement of over 5% in the MOTA metric when comparing our re-identification system with the existing module, in both evaluation scenarios, social robotics and autonomous driving.
(This article belongs to the Special Issue AI Based Autonomous Robots)
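The MOTA metric reported above aggregates the three main tracking error types, missed detections, false positives, and identity switches, into a single accuracy score. A minimal computation using the standard CLEAR-MOT definition (not code from the paper):

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy.

    MOTA = 1 - (FN + FP + IDSW) / GT, where the error counts and the
    number of ground-truth objects GT are summed over all frames.
    It can be negative when errors outnumber ground-truth objects.
    """
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt
```

A 5% MOTA improvement therefore means the combined rate of misses, false alarms, and identity switches dropped by 5 percentage points relative to the total number of ground-truth objects.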

20 pages, 2500 KiB  
Article
RL-DOVS: Reinforcement Learning for Autonomous Robot Navigation in Dynamic Environments
by Andrew K. Mackay, Luis Riazuelo and Luis Montano
Sensors 2022, 22(10), 3847; https://doi.org/10.3390/s22103847 - 19 May 2022
Cited by 4 | Viewed by 2782
Abstract
Autonomous navigation in dynamic environments where people move unpredictably is an essential task for service robots in real-world populated scenarios. Recent work in reinforcement learning (RL) has been applied to autonomous vehicle driving and to navigation around pedestrians. In this paper, we present a novel planner (reinforcement learning dynamic object velocity space, RL-DOVS) based on an RL technique for dynamic environments. The method explicitly considers the robot's kinodynamic constraints when selecting actions in every control period. The main contribution of our work is the use of an environment model in which the dynamism is represented in the robocentric velocity space as input to the learning system. The use of this dynamic information speeds up the training process with respect to other techniques that learn directly either from raw sensors (vision, lidar) or from basic information about obstacle location and kinematics. We propose two approaches using RL and the dynamic object velocity space (DOVS): RL-DOVS-A, which automatically learns the actions with the maximum utility, and RL-DOVS-D, in which the actions are selected by a human driver. Simulation results and evaluation are presented using different numbers of active agents, and static and moving passive agents with random motion directions and velocities, in many different scenarios. The performance of the technique is compared with other state-of-the-art techniques for solving navigation problems in similar environments.
(This article belongs to the Special Issue AI Based Autonomous Robots)
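Action selection over a discretized velocity space, as in planners of this family, can be sketched with a simple epsilon-greedy rule over (linear, angular) velocity commands. This is an illustrative toy, not the authors' learning system; the tabular Q representation and state encoding are assumptions:

```python
import random
from collections import defaultdict

def choose_velocity(q_table, state, actions, epsilon=0.1, rng=random):
    """Epsilon-greedy choice of a (linear, angular) velocity command.

    q_table maps (state, action) pairs to learned utilities; unseen
    pairs default to 0. With probability epsilon a random command is
    explored; otherwise the highest-utility command is exploited.
    """
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

# Discretized velocity space: a handful of (v, w) commands the
# robot's kinodynamic constraints allow in the next control period.
velocity_commands = [(0.0, 0.0), (0.2, 0.1), (0.5, 0.0)]
q = defaultdict(float)  # (state, action) -> estimated utility
```

In the RL-DOVS-A variant the utilities would be learned automatically from interaction, whereas in RL-DOVS-D a human driver supplies the chosen actions during training.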
