Artificial Intelligence (AI) in Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 5176

Special Issue Editors


Guest Editor
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Interests: computer vision; artificial intelligence; adaptive estimation; control and intelligent games; intelligent autonomous systems; robots

Special Issue Information

Dear Colleagues,

Robotic technology plays a key role in many areas, including daily life, industrial applications and medical applications. Artificial intelligence, in particular machine learning, is crucial for intelligent robots and unmanned/autonomous systems such as UAVs, UGVs, UUVs and cooperative robots. For human beings, capabilities such as perceiving visual information, adapting to uncertain environments and making decisions to act in a complex system distinguish people from animals. Significant progress has been made towards human-like intelligence in robotics; however, many problems remain unresolved. In recent decades, techniques such as deep learning, reinforcement learning, real-time learning and swarm intelligence, as well as emerging techniques such as TinyML, have been developed and applied in robotics. We believe that the increasing demands and challenges posed by real-world robotic applications promote academic research on artificial intelligence as well as robotics.

The purpose of this Special Issue is to bring together scientists, experts and engineers throughout the world to present and share their recent research results and innovative ideas related to artificial intelligence in robotics. Various applications of artificial intelligence in robotics, in particular AI based on visual and motion information, will also be covered by this Special Issue.

Prof. Dr. Hongbin Ma
Prof. Dr. Charlie Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • robotics
  • machine learning
  • swarm intelligence
  • intelligent perception

Published Papers (4 papers)


Research

21 pages, 7204 KiB  
Article
Research on Rapid Recognition of Moving Small Targets by Robotic Arms Based on Attention Mechanisms
by Boyu Cao, Aishan Jiang, Jiacheng Shen and Jun Liu
Appl. Sci. 2024, 14(10), 3975; https://doi.org/10.3390/app14103975 - 7 May 2024
Viewed by 479
Abstract
For small target objects on fast-moving conveyor belts, traditional vision detection algorithms equipped with conventional robotic arms struggle to capture the long and short-range pixel dependencies crucial for accurate detection. This leads to high miss rates and low precision. In this study, we integrate the traditional EMA (efficient multi-scale attention) algorithm with the c2f (channel-to-pixel) module from the original YOLOv8, alongside a Faster-Net module designed based on partial convolution concepts. This fusion results in the Faster-EMA-Net module, which greatly enhances the ability of the algorithm and robotic technologies to extract pixel dependencies for small targets, and improves perception of dynamic small target objects. Furthermore, by incorporating a small target semantic information enhancement layer into the multiscale feature fusion network, we aim to extract more expressive features for small targets, thereby boosting detection accuracy. We also address issues with training time and subpar performance on small targets in the original YOLOv8 algorithm by improving the loss function. Through experiments, we demonstrate that our attention-based visual detection algorithm effectively enhances accuracy and recall rates for fast-moving small targets, meeting the demands of real industrial scenarios. Our approach to target detection using industrial robotic arms is both practical and cutting-edge.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Robotics)
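As an illustration of the long- and short-range pixel dependencies this abstract refers to, the following minimal pure-Python sketch shows single-head self-attention over a flattened feature map, where every position attends to every other position. The identity Q/K/V projections and toy feature vectors are assumptions for brevity; the paper's Faster-EMA-Net module is far more elaborate.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(feats):
    """Single-head self-attention over a flattened feature map.

    Each position attends to all others, so even a small target's
    pixels can gather context from distant regions. Identity
    projections stand in for the learned Q, K, V matrices.
    """
    d = len(feats[0])
    out = []
    for q in feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in feats]
        weights = softmax(scores)  # attention weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, feats))
                    for j in range(d)])
    return out

# Three toy 2-D feature vectors standing in for spatial positions.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(feats)
```

Each output row is a convex combination of all input rows, which is exactly how long-range context flows into each position.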

20 pages, 6230 KiB  
Article
Reward Function and Configuration Parameters in Machine Learning of a Four-Legged Walking Robot
by Arkadiusz Kubacki, Marcin Adamek and Piotr Baran
Appl. Sci. 2023, 13(18), 10298; https://doi.org/10.3390/app131810298 - 14 Sep 2023
Viewed by 906
Abstract
In contemporary times, the use of walking robots is gaining increasing popularity and is prevalent in various industries. The ability to navigate challenging terrains is one of the advantages that they have over other types of robots, but they also require more intricate control mechanisms. One way to simplify this issue is to take advantage of artificial intelligence through reinforcement learning. The reward function is one of the conditions that governs how learning takes place, determining what actions the agent is willing to take based on the collected data. Another aspect to consider is the predetermined values contained in the configuration file, which describe the course of the training. Tuning them correctly is crucial for achieving satisfactory results in the training process. The initial phase of the investigation involved assessing the currently prevalent forms of kinematics for walking robots. Based on this evaluation, the most suitable design was selected. Subsequently, the Unity3D development environment was configured using the ML-Agents toolkit, which supports machine learning. During the experiment, the impacts of the values defined in the configuration file and the form of the reward function on the course of training were examined. Movement algorithms using artificial neural networks were developed for the various learning modifications.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Robotics)
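To illustrate the role the reward function plays in shaping the agent's behaviour, here is a hypothetical shaped reward for a walking agent. The terms, weights and parameter names are illustrative assumptions, not the function from the paper: it rewards forward velocity while penalizing deviation from a target body height and actuation energy.

```python
def walking_reward(forward_velocity, body_height, joint_torques,
                   target_height=0.3, w_vel=1.0, w_height=0.5,
                   w_energy=0.001):
    """Illustrative shaped reward for a quadruped walking agent.

    forward_velocity : m/s along the desired direction (rewarded)
    body_height      : m, penalized for deviating from target_height
    joint_torques    : per-joint torques, penalized quadratically
    """
    r_vel = w_vel * forward_velocity
    r_height = -w_height * abs(body_height - target_height)
    r_energy = -w_energy * sum(t * t for t in joint_torques)
    return r_vel + r_height + r_energy

# Faster walking at the target height with zero torque scores highest.
r = walking_reward(1.0, 0.3, [0.0] * 12)
```

Changing the relative weights shifts what behaviour the agent converges to, which is the tuning question the abstract investigates.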

17 pages, 3549 KiB  
Article
A Lightweight High Definition Mapping Method Based on Multi-Source Data Fusion Perception
by Haina Song, Binjie Hu, Qinyan Huang, Yi Zhang and Jiwei Song
Appl. Sci. 2023, 13(5), 3264; https://doi.org/10.3390/app13053264 - 3 Mar 2023
Cited by 1 | Viewed by 1437
Abstract
In this paper, a lightweight, high-definition mapping method is proposed for autonomous driving to address the drawbacks of traditional mapping methods, such as high cost, low efficiency, and slow update frequency. The proposed method is based on multi-source data fusion perception and involves generating local semantic maps (LSMs) using multi-sensor fusion on a vehicle and uploading multiple LSMs of the same road section, obtained through crowdsourcing, to a cloud server. An improved, two-stage semantic alignment algorithm, based on the semantic generalized iterative closest point (GICP), was then used to optimize the multi-trajectory poses on the cloud. Finally, an improved density clustering algorithm was proposed to instantiate the aligned semantic elements and generate vector semantic maps to improve mapping efficiency. Experimental results demonstrated the accuracy of the proposed method, with a horizontal error within 20 cm, a vertical error within 50 cm, and an average map size of 40 KB/km. The proposed method meets the requirements of being high definition, low cost, lightweight, robust, and up-to-date for autonomous driving.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Robotics)
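The abstract does not give the improved density clustering algorithm itself; as a hedged sketch of the general idea, a toy DBSCAN-style routine below instantiates "semantic elements" by grouping dense point sets and marking sparse points as noise. The eps and min_pts values, the sample points, and the O(n^2) neighbor search are all illustrative assumptions.

```python
import math

def density_cluster(points, eps=0.5, min_pts=3):
    """Toy DBSCAN-style density clustering (Euclidean, O(n^2)).

    Returns a label per point: 0, 1, ... for clusters, -1 for noise.
    """
    n = len(points)
    labels = [None] * n  # None = unvisited

    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # sparse: noise (may be claimed later)
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                queue.extend(j_nbrs)  # core point expands cluster
        cluster += 1
    return labels

# Two dense blobs (e.g. two lane markers seen in many LSMs) + one outlier.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.05, 0.05),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.05, 5.05),
          (10.0, 10.0)]
labels = density_cluster(points)
```

Each resulting cluster would then be reduced to one vectorized map element, which is how clustering compresses crowdsourced observations into a lightweight map.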

17 pages, 3028 KiB  
Article
Time Efficiency Improvement in Quadruped Walking with Supervised Training Joint Model
by Chin Ean Yeoh, Min Sung Ahn, Soomin Choi and Hak Yi
Appl. Sci. 2023, 13(4), 2658; https://doi.org/10.3390/app13042658 - 18 Feb 2023
Viewed by 1327
Abstract
To generate stable walking of a quadruped, the complexity of the configuration of the robot requires a significant amount of optimization, which decreases its time efficiency. To address this issue, a machine learning method was used to build a simplified control policy using joint models for the supervised training of quadruped robots. This study considered 12 joints for a four-legged robot, and each joint value was determined based on the conventional method of walking simulation and preprocessed, yielding 2508 sets of data. For data training, the multilayer perceptron model was used, and the optimized number of epochs used to train the model was 5000. The trained models were implemented in robot walking simulations, and they improved performance with an average distance error of 0.0719 m and a computational time as low as 91.98 s.
(This article belongs to the Special Issue Artificial Intelligence (AI) in Robotics)
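As a hedged sketch of the supervised joint-model idea (not the authors' 12-joint multilayer perceptron), the code below trains a tiny one-input, one-output tanh network by per-sample gradient descent to map a gait-phase value to a joint angle. The sinusoidal trajectory, layer size, learning rate and epoch count are illustrative assumptions.

```python
import math
import random

def train_joint_model(samples, hidden=6, lr=0.05, epochs=3000):
    """Train a 1-in/1-out tanh MLP on (phase, angle) pairs via SGD."""
    random.seed(0)
    w1 = [random.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in samples:
            h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
            pred = sum(w2[i] * h[i] for i in range(hidden)) + b2
            err = pred - y  # gradient of 0.5 * (pred - y)^2
            for i in range(hidden):
                grad_h = err * w2[i] * (1.0 - h[i] ** 2)  # through tanh
                w2[i] -= lr * err * h[i]
                w1[i] -= lr * grad_h * x
                b1[i] -= lr * grad_h
            b2 -= lr * err

    def predict(x):
        h = [math.tanh(w1[i] * x + b1[i]) for i in range(hidden)]
        return sum(w2[i] * h[i] for i in range(hidden)) + b2

    return predict

# Hypothetical data: gait phase in [0, 1] mapped to a sinusoidal joint
# angle, standing in for the 2508 preprocessed simulation samples.
samples = [(p / 10.0, math.sin(math.pi * p / 10.0)) for p in range(11)]
predict = train_joint_model(samples)
```

Once trained, evaluating the network replaces the per-step optimization of the conventional pipeline, which is where the time-efficiency gain reported above comes from.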
