Nonlinear Control and Neural Networks in Robotics

A special issue of Robotics (ISSN 2218-6581). This special issue belongs to the section "Sensors and Control in Robotics".

Deadline for manuscript submissions: closed (15 November 2022) | Viewed by 13707

Special Issue Editors


Prof. Dr. Aman Behal
Guest Editor
Department of Electrical and Computer Engineering and the NanoScience Technology Center, University of Central Florida, Orlando, FL 32816, USA
Interests: assistive robotics; human–robot interaction; nonlinear control theory and applications

Dr. Rushikesh Kamalapurkar
Guest Editor
School of Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, OK, USA
Interests: optimal control; dynamic programming; nonlinear systems; control theory; data-driven control; system identification; applied mathematics; rehabilitation engineering; biomechanics; robotics; unmanned air vehicles; autonomous underwater vehicles

Special Issue Information

Dear Colleagues, 

Robotics has been in the midst of a revolution over the last two decades because of the confluence of machine (deep) learning and AI, parallel real-time desktop/mobile processing enabled by advances in GPU and CPU architectures, access to cloud computing and data storage, the creation of public-domain datasets, physics-based simulators, and, last but not least, major advances in sensing (e.g., 2D/3D vision, haptics, force/torque sensing) and actuation (e.g., series elastic actuators). Nonlinear control theory in robotics had already made theoretical advances in the last decade of the 20th century, such as global output stability results, passivity-based control, impedance and admittance control, the theoretical treatment of kinematically redundant robots, 2D/3D and 2.5D visual servoing, and several results on rigid-link flexible-joint robots that were the precursors of today's series elastic actuators. A major line of theoretical research on time-delay systems in the controls community was largely motivated by the desire to teleoperate robots across large distances over bandwidth-limited communication links.

A happy marriage of all these systems-theoretic advances with ubiquitously accessible real-time computation, advanced sensing, and accurate feedforward modeling based on the latest iterations of neural networks, such as multi-layer perceptrons, convolutional neural networks, and recurrent neural networks, has led to situationally aware robots and cobots (collaborative robots) that have shed their caged existence and now merge and mingle with human workers in a collaborative fashion.

This Special Issue is designed to capture some of these advances at the crossroads of nonlinear control theory and advanced feedforward modeling. Novel theoretical results are encouraged, as are advances in technology based on the underlying emerging techniques in controls and deep learning. We also welcome review papers that succinctly cover the timeline of advances in controls and robotics from the late 1990s to the present.

Prof. Dr. Aman Behal
Dr. Rushikesh Kamalapurkar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Robotics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-Layer Perceptrons (MLP)
  • Convolutional Neural Networks (CNN)
  • Recurrent Neural Networks (RNN) applications in robots and cobots
  • adaptive and robust control design
  • position, force, impedance, and/or hybrid control
  • switching control
  • optimal control
  • physical human–robot interaction
  • wearable robots
  • intelligent orthotics and prosthetics
  • bipeds and quadrupeds
  • snake-like robots
  • hopping robots
  • teleoperation
  • mobility and manipulation

Published Papers (3 papers)


Research

20 pages, 9423 KiB  
Article
Deep Reinforcement Learning for Autonomous Dynamic Skid Steer Vehicle Trajectory Tracking
by Sandeep Srikonda, William Robert Norris, Dustin Nottage and Ahmet Soylemezoglu
Robotics 2022, 11(5), 95; https://doi.org/10.3390/robotics11050095 - 09 Sep 2022
Cited by 3 | Viewed by 2120
Abstract
Designing controllers for skid-steered wheeled robots is complex because of the interaction of the tires with the ground and the wheel slip inherent to the skid-steer driving mechanism, which lead to nonlinear dynamics. Motivated by the recent success of reinforcement learning algorithms for mobile robot control, Deep Deterministic Policy Gradient (DDPG), an algorithm designed for continuous control problems, was implemented. The complex dynamics of the vehicle model were handled by leveraging the generalizability of deep neural networks. Reinforcement learning was used to gather information and train the agent in an unsupervised manner. The performance of the trained policy was demonstrated in a six-degree-of-freedom dynamic model simulation with ground-force interactions. The system met the requirement to stay within half the vehicle width of the reference paths.
(This article belongs to the Special Issue Nonlinear Control and Neural Networks in Robotics)
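The half-vehicle-width requirement quoted in the abstract reduces to a point-to-path distance test. The sketch below is purely illustrative and not the authors' code: the function names and the piecewise-linear waypoint representation of the reference path are assumptions.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from 2-D point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:  # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def within_tracking_bound(position, path, vehicle_width):
    """True if the vehicle center is within half its width of the path."""
    d = min(point_segment_distance(position, path[i], path[i + 1])
            for i in range(len(path) - 1))
    return d <= vehicle_width / 2.0

# Piecewise-linear reference path given as waypoints.
path = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]
print(within_tracking_bound((2.0, 0.3), path, 1.0))  # → True (0.3 m off, bound 0.5 m)
```

A real evaluation would apply this check at every simulation step along the tracked trajectory.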

23 pages, 6989 KiB  
Article
Robust Trajectory-Tracking for a Bi-Copter Drone Using INDI: A Gain Tuning Multi-Objective Approach
by Maryam Taherinezhad, Alejandro Ramirez-Serrano and Arian Abedini
Robotics 2022, 11(5), 86; https://doi.org/10.3390/robotics11050086 - 30 Aug 2022
Cited by 8 | Viewed by 4907
Abstract
This paper presents an optimized robust trajectory control system for an autonomous tiltrotor bi-copter based on an incremental nonlinear dynamic inversion (INDI) strategy combined with a set of PID/PD controllers. The methodology comprises a lower-level, fast attitude control action using INDI, driven by a higher-level, slow trajectory control action that uses nonlinear dynamic inversion (NDI). The nonlinear dynamic model of the drone is derived, and the equations of motion and the design of the attitude- and position-stabilizing controllers are discussed. To develop and test the suggested controller, a circle-shaped flight profile is simulated. The linear control providing inputs to the NDI and INDI controllers is tuned via a novel multi-objective auto-tuning method using the non-dominated sorting genetic algorithm II (NSGA-II). Tracking and disturbance-rejection optimization is achieved by concurrently optimizing the integral of time multiplied by the absolute error (ITAE) and the integral of the square of the error (ISE) objective functions. The simulation results reveal that the proposed control design outperforms the traditional dynamic inversion controller design and demonstrate that the developed INDI + PID/PD controller possesses exceptional accuracy and performance, enabling the tiltrotor bi-copter to track the given trajectory. Furthermore, the paper shows that the proposed controller produces 40% lower overshoot and settling time than previous backstepping controllers reported in the literature. The robustness of the controller is validated through diverse tests in which the aircraft is subjected to external (wind gust) disturbances.
(This article belongs to the Special Issue Nonlinear Control and Neural Networks in Robotics)
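The two objective functions named in the abstract have standard definitions: ITAE = ∫ t·|e(t)| dt and ISE = ∫ e(t)² dt, integrated over the simulation horizon. A minimal discrete-time sketch follows; it is illustrative only (not the authors' implementation), assuming the tracking error is available as uniformly or non-uniformly sampled values and using trapezoidal integration.

```python
import math

def itae(t, e):
    """Integral of Time multiplied by Absolute Error, trapezoidal rule."""
    total = 0.0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        f0 = t[i - 1] * abs(e[i - 1])
        f1 = t[i] * abs(e[i])
        total += 0.5 * (f0 + f1) * dt
    return total

def ise(t, e):
    """Integral of the Square of the Error, trapezoidal rule."""
    total = 0.0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        total += 0.5 * (e[i - 1] ** 2 + e[i] ** 2) * dt
    return total

# Example: exponentially decaying tracking error e(t) = exp(-t) over 10 s.
ts = [k * 0.01 for k in range(1001)]
es = [math.exp(-tk) for tk in ts]
print(itae(ts, es), ise(ts, es))
```

ITAE penalizes errors that persist late in the run, while ISE weights large transient errors, which is why optimizing the two concurrently is naturally a multi-objective problem suited to NSGA-II.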

17 pages, 3478 KiB  
Article
A Deep Reinforcement-Learning Approach for Inverse Kinematics Solution of a High Degree of Freedom Robotic Manipulator
by Aryslan Malik, Yevgeniy Lischuk, Troy Henderson and Richard Prazenica
Robotics 2022, 11(2), 44; https://doi.org/10.3390/robotics11020044 - 02 Apr 2022
Cited by 13 | Viewed by 5660
Abstract
Inverse Kinematics (IK) is the foundation and emphasis of robotic manipulator control. Due to the complexity of derivation, difficulty of computation, and redundancy, traditional IK solutions pose numerous challenges to the operation of a variety of robotic manipulators. This paper develops a Deep Reinforcement Learning (RL) approach for solving the IK problem of a 7-Degree-of-Freedom (DOF) robotic manipulator, using the Product of Exponentials (PoE) as a Forward Kinematics (FK) computation tool and the Deep Q-Network (DQN) as an IK solver. The selected approach is architecturally simpler, making it faster and easier to implement, as well as more stable, because it is less sensitive to hyperparameters than continuous-action-space algorithms. The algorithm is designed to produce joint-space trajectories from a given end-effector trajectory. Different network architectures were explored, and the output of the DQN was implemented experimentally on a Sawyer robotic arm. The DQN was able to find different trajectories corresponding to a specified Cartesian path of the end-effector. The network agent learned random Bézier and straight-line end-effector trajectories in a reasonable time frame with good accuracy, demonstrating that even though the DQN is mainly used in discrete solution spaces, it can be applied to generate joint-space trajectories.
(This article belongs to the Special Issue Nonlinear Control and Neural Networks in Robotics)
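The Product of Exponentials FK mentioned in the abstract composes one matrix exponential per joint, T = exp([S1]θ1)···exp([Sn]θn)·M, where M is the home pose of the end-effector. A minimal planar sketch is given below; it is illustrative only (a 2-link planar arm, not the paper's 7-DOF Sawyer model), and the function names are assumptions. For a planar revolute joint, exp([S]θ) reduces to a rotation by θ about the joint's zero-configuration location.

```python
import math

def rot_about(q, theta):
    """3x3 homogeneous transform: rotation by theta about planar point q.
    This is exp([S]theta) for a revolute screw axis through q."""
    c, s = math.cos(theta), math.sin(theta)
    qx, qy = q
    return [[c, -s, qx - c * qx + s * qy],   # translation part is q - R q
            [s,  c, qy - s * qx - c * qy],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def poe_fk(joint_origins, M, thetas):
    """Planar PoE forward kinematics: T = exp([S1]t1)...exp([Sn]tn) M."""
    T = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for q, th in zip(joint_origins, thetas):
        T = matmul(T, rot_about(q, th))
    return matmul(T, M)

# 2-link arm with unit link lengths: joints at x=0 and x=1,
# end-effector home pose at x=2.
joints = [(0.0, 0.0), (1.0, 0.0)]
M = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = poe_fk(joints, M, [math.pi / 2, -math.pi / 2])
print(round(T[0][2], 6), round(T[1][2], 6))  # → 1.0 1.0 (end-effector at (1, 1))
```

In an RL-based IK solver, an FK routine like this scores each candidate joint configuration by how close it places the end-effector to the target Cartesian pose.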
