
Table of Contents

Robotics, Volume 7, Issue 1 (March 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Teleoperated robots enable safe task completion in hazardous environments. But controlling [...]
Displaying articles 1-14

Editorial

Jump to: Research, Review, Other

Open Access Editorial: Acknowledgement to Reviewers of Robotics in 2017
Robotics 2018, 7(1), 6; doi:10.3390/robotics7010006
Received: 11 January 2018 / Revised: 11 January 2018 / Accepted: 11 January 2018 / Published: 11 January 2018
PDF Full-text (659 KB) | HTML Full-text | XML Full-text
Abstract
Peer review is an essential part of the publication process, ensuring that Robotics maintains high quality standards for its published papers [...] Full article

Research

Jump to: Editorial, Review, Other

Open Access Feature Paper Article: Automation of Electrical Cable Harnesses Testing
Robotics 2018, 7(1), 1; doi:10.3390/robotics7010001
Received: 26 October 2017 / Revised: 13 December 2017 / Accepted: 19 December 2017 / Published: 21 December 2017
PDF Full-text (6805 KB) | HTML Full-text | XML Full-text
Abstract
Traditional automated systems, such as industrial robots, are applied in well-structured environments, and many have limited adaptability to complexity and uncertainty; as a result, the applications of industrial robots in small- and medium-sized enterprises (SMEs) are very limited. The majority of manual operations in SMEs are too complicated for automation. The rapid development of information technology (IT) has brought new opportunities for the automation of manufacturing and assembly processes in ill-structured environments. Note that an automation solution should be designed to meet the given requirements of the specified application, and it differs from one application to another. In this paper, we examine the feasibility of automated testing for electrical cable harnesses, focusing on generic strategies for improving the adaptability of automation solutions. In particular, the concept of modularization is adopted in developing hardware and software to maximize system adaptability in testing a wide scope of products. The proposed system has been implemented, and its performance has been evaluated by executing tests on actual products. The testing experiments have shown that the automated system greatly outperformed manual operations in terms of cost savings, productivity and reliability. Given its potential for increasing system adaptability and reducing cost, the presented work has theoretical and practical significance for extension to other automation solutions in SMEs. Full article
(This article belongs to the Special Issue Robust and Resilient Robots)

Open Access Article: An Improved Indoor Robot Human-Following Navigation Model Using Depth Camera, Active IR Marker and Proximity Sensors Fusion
Robotics 2018, 7(1), 4; doi:10.3390/robotics7010004
Received: 5 October 2017 / Revised: 2 January 2018 / Accepted: 2 January 2018 / Published: 6 January 2018
PDF Full-text (14902 KB) | HTML Full-text | XML Full-text
Abstract
Creating a navigation system for autonomous companion robots has always been a difficult task: the system must contend with a dynamically changing environment populated by a myriad of obstructions and an unspecified number of people besides the intended person to follow. This study documents the implementation of an indoor autonomous robot navigation model, based on multi-sensor fusion, using Microsoft Robotics Developer Studio 4 (MRDS). The model relies on a depth camera, a limited array of proximity sensors and an active IR marker tracking system. This allows the robot to lock onto the correct target for human-following while approximating the best starting direction to begin maneuvering around obstacles with minimum required motion. The system implements a navigation algorithm that transforms the data from all three types of sensors into tendency arrays and fuses them to determine whether to take a leftward or rightward route around an encountered obstacle. The decision process considers visible short-, medium- and long-range obstructions and the current position of the target person. The system is implemented using MRDS, and its functional test performance is presented over a series of Virtual Simulation Environment scenarios, greenlighting further extensive benchmark simulations. Full article

Open Access Article: Close Range Tracking of an Uncooperative Target in a Sequence of Photonic Mixer Device (PMD) Images
Robotics 2018, 7(1), 5; doi:10.3390/robotics7010005
Received: 22 November 2017 / Revised: 18 December 2017 / Accepted: 25 December 2017 / Published: 10 January 2018
PDF Full-text (7054 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a pose estimation routine for tracking the attitude and position of an uncooperative tumbling spacecraft during close range rendezvous. The key innovation is the use, for the first time in space proximity operations, of a Photonic Mixer Device (PMD) sensor for tracking the pose of the uncooperative target. This sensor offers lower power consumption and higher resolution than existing flash Light Detection and Ranging (LiDAR) sensors. In addition, the PMD sensor provides two different measurements at the same time: depth information (a point cloud) and the amplitude of the reflected signal, which generates a grayscale image. In this paper, a hybrid model-based navigation technique that employs both measurements is proposed. The principal pose estimation technique is the iterative closest point (ICP) algorithm with reverse calibration, which relies on the depth image. The second technique is an image processing pipeline that generates a set of 2D-to-3D feature correspondences between the amplitude image and the spacecraft model, followed by the Efficient Perspective-n-Point (EPnP) algorithm for pose estimation. In this way, we gain a redundant estimate of the target's current state in real time without hardware redundancy. The proposed navigation methodology is tested in the German Aerospace Center (DLR)'s European Proximity Operations Simulator. The hybrid navigation technique shows the capability to ensure robust pose estimation of an uncooperative tumbling target under severe illumination conditions. In fact, the EPnP-based technique overcomes the limitations of the primary technique when harsh illumination conditions arise. Full article
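The core step of each ICP iteration, the least-squares rigid transform between matched 3D point sets, can be sketched with the standard Kabsch/SVD method. This is a minimal illustration, not the authors' reverse-calibration variant; with known correspondences the recovery is exact:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t matching Q[i] (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

A full ICP loop alternates nearest-neighbor correspondence search with this alignment step until the residual error stops decreasing.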

Open Access Article: A Closed Loop Inverse Kinematics Solver Intended for Offline Calculation Optimized with GA
Robotics 2018, 7(1), 7; doi:10.3390/robotics7010007
Received: 10 November 2017 / Revised: 8 January 2018 / Accepted: 12 January 2018 / Published: 22 January 2018
PDF Full-text (510 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a simple approach to building a robotic control system. Instead of a conventional control system, which solves the inverse kinematics in real time as the robot moves, an alternative approach is presented in which the inverse kinematics is calculated ahead of time. This approach reduces the complexity and code necessary for the control system. Robot control systems are usually implemented in low-level programming languages. This new approach enables the use of high-level programming for the complex inverse kinematics problem. For our approach, we implement a program to solve the inverse kinematics, called the Inverse Kinematics Solver (IKS), in Java, with a simple graphical user interface (GUI) to load a file with desired end effector poses and edit the configuration of the robot using the Denavit-Hartenberg (DH) convention. The program uses the closed-loop inverse kinematics (CLIK) algorithm to solve the inverse kinematics problem. As an example, the IKS was set up to solve the kinematics for a custom-built serial link robot. The kinematics for the custom robot is presented, together with example input and output files. Additionally, the gain of the loop in the IKS is optimized using a genetic algorithm (GA), resulting in almost a 50% decrease in computational time. Full article
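The CLIK idea, feeding the task-space error back through the Jacobian pseudoinverse until the pose converges, can be sketched for a planar two-link arm. This is a minimal illustration with assumed unit link lengths, not the paper's Java IKS:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # assumed link lengths

def fk(q):
    """Forward kinematics of a planar 2R arm: joint angles -> end effector (x, y)."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def clik(target, q0, gain=0.5, max_iters=500, tol=1e-8):
    """Closed-loop IK: iterate q += gain * pinv(J) @ (target - fk(q))."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iters):
        e = target - fk(q)
        if np.linalg.norm(e) < tol:
            break
        q = q + gain * np.linalg.pinv(jacobian(q)) @ e
    return q
```

The loop gain trades convergence speed against stability, which is why tuning it (here with a GA, per the paper) pays off in computation time.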

Open Access Article: High Performance Motion-Planner Architecture for Hardware-In-the-Loop System Based on Position-Based-Admittance-Control
Robotics 2018, 7(1), 8; doi:10.3390/robotics7010008
Received: 8 November 2017 / Revised: 16 January 2018 / Accepted: 23 January 2018 / Published: 1 February 2018
Cited by 1 | PDF Full-text (9355 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on a Hardware-In-the-Loop application developed within LIFES50+, an advanced energy project. The aim is to replicate, inside a wind tunnel test facility, the combined effect of aerodynamic and hydrodynamic loads on a floating wind turbine model for offshore energy production, using a force-controlled robotic device that emulates the floating substructure's behaviour. In addition to well-known real-time Hardware-In-the-Loop (HIL) issues, the application presented imposes stringent safety requirements on the HIL equipment and involves difficult-to-predict operating conditions, so extra computational effort has to be spent running specific safety algorithms while still achieving the desired performance. To meet the project requirements, a high performance software architecture based on Position-Based-Admittance-Control (PBAC) is presented, combining low-level motion interpolation techniques, efficient motion planning based on buffer management and time-base control, and advanced high-level safety algorithms, implemented in a rapid real-time control architecture. Full article

Open Access Article: TeMoto: Intuitive Multi-Range Telerobotic System with Natural Gestural and Verbal Instruction Interface
Robotics 2018, 7(1), 9; doi:10.3390/robotics7010009
Received: 30 November 2017 / Revised: 8 January 2018 / Accepted: 18 January 2018 / Published: 1 February 2018
PDF Full-text (5270 KB) | HTML Full-text | XML Full-text
Abstract
Teleoperated mobile robots, equipped with object manipulation capabilities, provide safe means for executing dangerous tasks in hazardous environments without putting humans at risk. However, mainly due to communication delay, complex operator interfaces and insufficient Situational Awareness (SA), the task productivity of telerobots remains inferior to that of human workers. This paper addresses the shortcomings of telerobots by proposing a combined approach of (i) a scalable and intuitive operator interface with gestural and verbal input, (ii) improved Situational Awareness (SA) through sensor fusion according to documented best practices, (iii) integrated virtual fixtures for task simplification and minimizing the operator's cognitive burden and (iv) integrated semiautonomous behaviors that further reduce cognitive burden and negate the impact of communication delays, execution latency and/or failures. The proposed teleoperation system, TeMoto, is implemented using ROS (Robot Operating System) to ensure hardware agnosticism, extensibility and community access. The operator's command interface consists of a Leap Motion Controller for hand tracking, a Griffin PowerMate USB knob for scaling and a microphone for speech input. TeMoto is evaluated on multiple robots, including two mobile manipulator platforms. In addition to standard, task-specific evaluation techniques (completion time, user studies, number of steps, etc.)—which are platform and task dependent and thus difficult to scale—this paper presents additional metrics for evaluating the user interface, including task-independent criteria for measuring generalized (i) task completion efficiency and (ii) operator context switching. Full article
Open Access Article: Workspace Limiting Strategy for 6 DOF Force Controlled PKMs Manipulating High Inertia Objects
Robotics 2018, 7(1), 10; doi:10.3390/robotics7010010
Received: 8 December 2017 / Revised: 31 January 2018 / Accepted: 31 January 2018 / Published: 5 February 2018
Cited by 1 | PDF Full-text (4812 KB) | HTML Full-text | XML Full-text
Abstract
This article describes an efficient and effective strategy for limiting the workspace of a six degrees of freedom parallel manipulator with challenging motion smoothness requirements, due both to the high inertia objects carried by the end effector and to the pose references coming from a force feedback loop. Firstly, a suitable formulation of the workspace is studied, distinguishing between different conventions and procedures. Thereafter, a discrete and analytical formulation of the workspace is obtained and developed to suit this application. Having obtained the limits, a methodology to evaluate the robot pose is discussed, taking into account the reference pose buffering technique and real-time pose estimation through the numeric solution of the nonlinear forward kinematics equations. The safety algorithm checks the actual robot pose and the future poses to be commanded, and takes control of the reference pose generation process if an exit from the safety workspace is detected. The result is a soft compliant surface within which the robot is free to move, but outside of which a "force field" pushes the robot end effector smoothly back. To reach this objective, the control deflects the end effector trajectory safely and smoothly and moves it back within the workspace limits, while preserving the continuity of the velocity and controlling the acceleration to avoid dangerous vibrations and shocks. Simulations and experimental tests are conducted to verify the effectiveness of the algorithm and the efficiency of its implementation. Full article

Open Access Article: Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot
Robotics 2018, 7(1), 11; doi:10.3390/robotics7010011
Received: 31 August 2017 / Revised: 22 January 2018 / Accepted: 30 January 2018 / Published: 5 February 2018
PDF Full-text (904 KB) | HTML Full-text | XML Full-text
Abstract
The presented work is part of the H2020 project SWEEPER, whose overall goal is to develop a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used, with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires fewer computational resources than exhaustive search, which is used as a benchmark, and results in higher performance than a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images than other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark. Full article
(This article belongs to the Special Issue Agriculture Robotics)
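Decaying epsilon-greedy selection of this kind can be sketched as a bandit over candidate thresholds. This is a minimal illustration with an assumed stand-in reward function; in the paper, the reward is the similarity between the segmented image and the labeled image:

```python
import random

def decaying_epsilon_greedy(reward_fn, n_arms, n_iters=500, eps0=0.7, decay=0.99):
    """Return the arm (candidate threshold) with the best estimated reward."""
    counts = [0] * n_arms
    values = [0.0] * n_arms        # running mean reward per arm
    eps = eps0
    for _ in range(n_iters):
        if random.random() < eps:
            arm = random.randrange(n_arms)                    # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        r = reward_fn(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]        # incremental mean
        eps *= decay               # shift from exploration to exploitation
    return max(range(n_arms), key=values.__getitem__)
```

With eps0 = 0.7 decaying toward zero, early iterations sample thresholds broadly while later ones refine the best candidate, mirroring the paper's comparison of fixed ϵ = 0.7 against its decaying variant.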

Open Access Article: Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics
Robotics 2018, 7(1), 12; doi:10.3390/robotics7010012
Received: 2 January 2018 / Revised: 19 February 2018 / Accepted: 22 February 2018 / Published: 26 February 2018
PDF Full-text (3493 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this work is to explore design principles for a real-time robotic multi-camera vision system, in a case study involving a real-world autonomous driving competition. Design practices from the vision and real-time research areas are applied to a real-time robotic vision application, exemplifying good algorithm design practices and the advantages of the "zero copy one pass" methodology, along with the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a "flat" signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and then selects the controller hardware. Optimization of the presented algorithms yielded improvements of 1.5 to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3 cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned robotic vision tasks are also shown, demonstrating trade-offs between accuracy and computing power and leading to the proper choice of control platform. The presented design principles are portable to other applications where real-time constraints exist. Full article
Open Access Article: Robust Composite High-Order Super-Twisting Sliding Mode Control of Robot Manipulators
Robotics 2018, 7(1), 13; doi:10.3390/robotics7010013
Received: 22 January 2018 / Revised: 18 February 2018 / Accepted: 23 February 2018 / Published: 1 March 2018
PDF Full-text (3336 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes the design of a robust composite high-order super-twisting sliding mode controller (HOSTSMC) for robot manipulators. Robot manipulators are extensively used in industrial manufacturing for many complex and specialized applications. These applications require robots with nonlinear mechanical architectures, resulting in multiple control challenges. To address this issue, this paper designs a robust composite controller by combining a higher-order super-twisting sliding mode controller, as the main controller, with a super-twisting higher-order sliding mode observer that estimates unmeasured states and uncertainties. The proposed method adaptively improves on the traditional sliding mode controller (TSMC) and the estimated-state sliding mode controller (ESMC) to attenuate chattering. The effectiveness of the HOSTSMC is tested on a six degrees of freedom (DOF) Programmable Universal Manipulation Arm (PUMA) robot manipulator. The proposed method outperforms the TSMC and ESMC, yielding 4.9% and 2% average improvements in the root-mean-square (RMS) error and average error of the output position, respectively. Full article
(This article belongs to the Special Issue Intelligent Systems in Robotics)
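The super-twisting law at the heart of such controllers drives a sliding variable s to zero in finite time with a continuous control signal. Below is a minimal single-integrator sketch with assumed gains and an assumed bounded disturbance, not the paper's manipulator controller:

```python
import math

def simulate_super_twisting(x0=1.0, k1=2.0, k2=1.1, dt=1e-3, steps=8000):
    """Super-twisting control of x' = u + d(t), with sliding variable s = x:

        u = -k1 * sqrt(|s|) * sign(s) + v,    v' = -k2 * sign(s)
    """
    x, v = x0, 0.0
    for i in range(steps):
        s = x
        sgn = math.copysign(1.0, s)
        u = -k1 * math.sqrt(abs(s)) * sgn + v
        d = 0.2 * math.sin(i * dt)      # assumed bounded disturbance
        x += (u + d) * dt               # Euler integration of the plant
        v += -k2 * sgn * dt             # integral (second-order) term
    return x
```

Because the sqrt(|s|) term makes the control signal continuous, the chattering of a plain sign(s) law is attenuated, which matches the paper's motivation for the higher-order design.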

Open Access Article: An Underwater Image Enhancement Algorithm for Environment Recognition and Robot Navigation
Robotics 2018, 7(1), 14; doi:10.3390/robotics7010014
Received: 18 November 2017 / Revised: 1 March 2018 / Accepted: 6 March 2018 / Published: 13 March 2018
PDF Full-text (3636 KB) | HTML Full-text | XML Full-text
Abstract
Many tasks in underwater robotics and marine science, such as underwater target detection and identification, robot navigation and obstacle avoidance, require clear and easily recognizable images. However, water turbidity makes the quality of underwater images too low for reliable recognition. This paper proposes the use of the dark channel prior model for underwater environment recognition, in which underwater reflection models are used to obtain enhanced images. The proposed approach achieves very good performance and multi-scene robustness by combining the dark channel prior model with the underwater diffuse model. Experimental results are given to show the effectiveness of the dark channel prior model in underwater scenarios. Full article
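The dark channel prior takes, for each pixel, the minimum intensity over the color channels and a local patch, then uses it to estimate a transmission map for dehazing. Below is a minimal sketch with assumed parameter values (patch size, omega weight, t0 floor); the paper's underwater variant additionally incorporates a diffuse reflection model:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over the RGB channels and a patch x patch neighborhood."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    return np.array([[padded[i:i + patch, j:j + patch].min()
                      for j in range(w)] for i in range(h)])

def estimate_transmission(img, background_light, omega=0.95, patch=3):
    """t(x) = 1 - omega * dark_channel(I / A): clear regions keep t near 1."""
    return 1.0 - omega * dark_channel(img / background_light, patch)

def recover(img, background_light, t, t0=0.1):
    """Invert I = J*t + A*(1 - t), flooring t to avoid amplifying noise."""
    t = np.maximum(t, t0)[..., None]
    return (img - background_light) / t + background_light
```

In the underwater setting, the background light A plays the role of the veiling light scattered by turbid water rather than atmospheric haze.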

Review

Jump to: Editorial, Research, Other

Open Access Review: Bioenergy Based Power Sources for Mobile Autonomous Robots
Robotics 2018, 7(1), 2; doi:10.3390/robotics7010002
Received: 16 November 2017 / Revised: 21 December 2017 / Accepted: 27 December 2017 / Published: 1 January 2018
PDF Full-text (560 KB) | HTML Full-text | XML Full-text
Abstract
This paper surveys the application of modern bioenergy developments to power sources for autonomous mobile robots. We analyze biofuel cells and the gasification and pyrolysis of biomass. Nowadays, few bioenergy technologies are developed with the demands of robotics in mind. At the same time, a number of technologies, such as biofuel cells, have already come into use as power supplies for experimental autonomous mobile robots. We also outline general research directions that may increase the efficiency of the described power sources when used in robotics. Full article

Other

Jump to: Editorial, Research, Review

Open Access Commentary: Technology Acceptance and User-Centred Design of Assistive Exoskeletons for Older Adults: A Commentary
Robotics 2018, 7(1), 3; doi:10.3390/robotics7010003
Received: 5 September 2017 / Revised: 15 December 2017 / Accepted: 27 December 2017 / Published: 3 January 2018
PDF Full-text (3421 KB) | HTML Full-text | XML Full-text
Abstract
Assistive robots are emerging as technologies that enable older adults to perform activities of daily living with autonomy. Exoskeletons are a subset of assistive robots that can support mobility. Perceptions and acceptance of these technologies require understanding in a user-centred design context to ensure optimum experience and adoption by as broad a spectrum of older adults as possible. The adoption and use of assistive robots for activities of daily living (ADL) by older adults is poorly understood. Older adult acceptance of technology is affected by numerous factors, such as perceptions and stigma associated with dependency and ageing. Assistive technology (AT) models provide theoretical frameworks that inform decision-making in relation to assistive devices for people with disabilities. By contrast, technology acceptance models (TAMs) are theoretical explanations of factors that influence why users adopt some technologies and not others. Recent models have emerged specifically describing technology acceptance by older adults. In the context of exoskeleton design, these models could influence design approaches. This article discusses a selection of TAMs, presenting a chronology that highlights their evolution, and two prioritised TAMs—Almere and the senior technology acceptance model (STAM)—that merit consideration when attempting to understand acceptance and use of assistive robots by older adults. Full article
