Correction to Robotics 2019, 8(1), 4.
Correction

Correction: Bhagat, S.; et al. Deep Reinforcement Learning for Soft, Flexible Robots: Brief Review with Impending Challenges. Robotics 2019, 8, 4

by Sarthak Bhagat 1,2,†, Hritwick Banerjee 1,3,†, Zion Tsz Ho Tse 4 and Hongliang Ren 1,3,5,*
1 Department of Biomedical Engineering, Faculty of Engineering, 4 Engineering Drive 3, National University of Singapore, Singapore 117583, Singapore
2 Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology, New Delhi 110020, India
3 Singapore Institute for Neurotechnology (SINAPSE), Centre for Life Sciences, National University of Singapore, Singapore 117456, Singapore
4 School of Electrical & Computer Engineering, College of Engineering, The University of Georgia, Athens, GA 30602, USA
5 National University of Singapore (Suzhou) Research Institute (NUSRI), Suzhou Industrial Park, Suzhou 215123, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this manuscript.
Robotics 2019, 8(4), 93; https://doi.org/10.3390/robotics8040093
Submission received: 9 October 2019 / Accepted: 12 October 2019 / Published: 28 October 2019
The authors wish to make the following corrections to this paper [1]:
  • In Figure 1 of this paper [1], the caption was revised, with permission from the publishers, to read: “Various applications of SoRo. Reprinted (adapted) with permission from [20–22]. Copyright 2017, Elsevier B.V. Copyright 2016, American Association for the Advancement of Science. Copyright 2017, National Academy of Sciences.”
  • In Table 2 of this paper [1], the caption was revised, with permission from the publishers, to read: “SoRo applied to achieve state-of-the-art results alongside sub-domains where its utilization with deep reinforcement learning (DRL) and imitation learning techniques presently occurs. Pictures adapted with permission from [40,41]. Copyright 2014, Mary Ann Liebert, Inc., publishers. Copyright 2017, American Association for the Advancement of Science.”
  • In Figure 2 of this paper [1], the caption was revised, with permission from the publishers, to read: “Training architecture of a Deep Q-Network (DQN) agent. Picture adapted with permission from [47]. Copyright 2018, American Association for the Advancement of Science.”
  • In Figure 3 of this paper [1], the caption was revised, with permission from the publishers, to read: “Training architecture of a Deep Deterministic Policy Gradients (DDPG) agent. The blue lines portray the update equations. Picture adapted with permission from [47]. Copyright 2018, American Association for the Advancement of Science.”
  • In Figure 5 of this paper [1], the caption was revised, with permission from the publishers, to read: “Expected application of DRL techniques in the task of navigation. Inset adapted with permission from [72]. Copyright 2018, American Association for the Advancement of Science.”
  • In Figure 7 of this paper [1], the caption was revised, with permission from the publishers, to read: “Soft Robot Simulation on SOFA using Soft-robotics toolkit. Figure adapted with permission from [137]. Copyright 2017, IEEE.”
  • In Figure 8 of this paper [1], the caption was revised, with permission from the publishers, to read: “Training Architecture of CycleGAN and CyCADA [92,138]. Figure adapted with permission from [139]. Copyright 2018, Mary Ann Liebert, Inc., publishers.”
  • In Table 4 of this paper [1], the caption was revised, with permission from the publishers, to read: “Instances of bio-inspired soft robotics applications that make use of DRL or imitation learning technologies. Picture adapted with permission from [47,162–164]. Copyright 2018, American Association for the Advancement of Science. Copyright 2016, Springer Nature Limited. Copyright 2011, IEEE. Copyright 2016, John Wiley & Sons, Inc.”
The changes do not affect the scientific results. The manuscript will be updated, and the original will remain online on the article webpage with a reference to this Correction. The authors apologize for any inconvenience these changes may cause readers.

References

  1. Bhagat, S.; Banerjee, H.; Ho Tse, Z.T.; Ren, H. Deep Reinforcement Learning for Soft, Flexible Robots: Brief Review with Impending Challenges. Robotics 2019, 8, 4. [Google Scholar] [CrossRef]
