Table of Contents

Robotics, Volume 6, Issue 3 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Although papers are published in both HTML and PDF form, PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.

Research

Jump to: Review

Open Access Article: Automated Assembly Using 3D and 2D Cameras
Robotics 2017, 6(3), 14; doi:10.3390/robotics6030014
Received: 31 March 2017 / Revised: 20 May 2017 / Accepted: 19 June 2017 / Published: 27 June 2017
PDF Full-text (21136 KB) | HTML Full-text | XML Full-text
Abstract
2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting, or viewpoint angle can reduce the accuracy of a method, possibly to the point where it becomes erroneous, while for 3D vision systems, accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision of high-grade 3D laser scanners and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera; a robotic arm with an eye-in-hand 2D camera then determines the pose accurately. We show that this increases the accuracy to < 1 and < 1 . This was demonstrated in a real industrial assembly task where high accuracy is required. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
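The coarse-to-fine idea in the abstract (a low-cost 3D camera for a rough pose, an eye-in-hand 2D camera for refinement) can be sketched numerically. The noise levels and the averaging-based refinement below are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([100.0, 50.0])   # ground-truth object position, mm

# Stage 1: a rough estimate from a low-cost 3D camera (illustrative ~5 mm noise).
coarse = true_pos + rng.normal(0.0, 5.0, size=2)

# Stage 2: an eye-in-hand 2D camera moved close to the coarse estimate takes
# several measurements (illustrative ~0.2 mm noise each), which are averaged.
fine = (true_pos + rng.normal(0.0, 0.2, size=(20, 2))).mean(axis=0)

err_coarse = np.linalg.norm(coarse - true_pos)
err_fine = np.linalg.norm(fine - true_pos)
```

The point of the two-stage design is that the coarse sensor only needs to bring the accurate close-range sensor within its working volume.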

Open Access Article: Compressed Voxel-Based Mapping Using Unsupervised Learning
Robotics 2017, 6(3), 15; doi:10.3390/robotics6030015
Received: 11 May 2017 / Revised: 20 June 2017 / Accepted: 26 June 2017 / Published: 29 June 2017
PDF Full-text (1059 KB) | HTML Full-text | XML Full-text
Abstract
In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. We demonstrate that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, due to the rejection of high-frequency noise content. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
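The PCA branch of such a compression scheme can be sketched on synthetic stand-ins for flattened TSDF voxel blocks (the block size, rank, and data below are hypothetical, not taken from the paper):

```python
import numpy as np

def pca_basis(blocks, k):
    # blocks: (N, D) flattened voxel blocks; learn a k-dimensional PCA basis.
    mean = blocks.mean(axis=0)
    # Top-k right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
    return mean, vt[:k]

def compress(block, mean, basis):
    return basis @ (block - mean)          # k coefficients per block

def decompress(code, mean, basis):
    return mean + basis.T @ code           # lossy reconstruction

rng = np.random.default_rng(1)
# Synthetic stand-in for 8x8x8 TSDF blocks (D = 512) lying in a 4-dim subspace.
blocks = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 512))
mean, basis = pca_basis(blocks, k=4)
recon = np.array([decompress(compress(b, mean, basis), mean, basis)
                  for b in blocks])
max_err = np.abs(recon - blocks).max()
```

Storing 4 coefficients instead of 512 voxels gives a 128:1 ratio per block (the shared basis amortizes over the whole map); real TSDF blocks are only approximately low-rank, so the reconstruction is lossy in practice.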

Open Access Article: Perception-Link Behavior Model: Supporting a Novel Operator Interface for a Customizable Anthropomorphic Telepresence Robot
Robotics 2017, 6(3), 16; doi:10.3390/robotics6030016
Received: 15 May 2017 / Revised: 14 July 2017 / Accepted: 15 July 2017 / Published: 20 July 2017
PDF Full-text (4359 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
A customizable anthropomorphic telepresence robot (CATR) is an emerging medium that may offer the highest degree of social presence among existing mediated communication media. Unfortunately, teleoperating a CATR poses problems that can deteriorate its gesture motion: disruption during decoupling, discontinuity due to unstable transmission, and jerkiness due to reactive collision avoidance. Our review shows that none of the existing interfaces can fix all of these problems simultaneously. Hence, a novel framework with a perception-link behavior model (PLBM) is proposed. The PLBM adopts a distributed spatiotemporal representation for all of its input signals. Together with its other components, the PLBM can solve the above problems, with some limitations. For instance, the PLBM can retrieve missing modalities from its experience during decoupling. It can also tolerate a high drop rate in the network connection because it deals with gesture style rather than pose. For collision prevention, the PLBM can tune the incoming gesture style so that the CATR deliberately and smoothly avoids a collision. In summary, the PLBM-based framework can increase the user's presence on a CATR by synthesizing expressive user gestures. Full article

Open Access Article: Trajectory Planning and Tracking Control of a Differential-Drive Mobile Robot in a Picture Drawing Application
Robotics 2017, 6(3), 17; doi:10.3390/robotics6030017
Received: 21 June 2017 / Revised: 7 August 2017 / Accepted: 7 August 2017 / Published: 10 August 2017
PDF Full-text (4068 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a method for trajectory planning and control of a mobile robot for picture drawing from images. The robot is an accurate differential-drive mobile platform controlled by a field-programmable gate array (FPGA) controller. By locating the pen tip away from the midpoint between the two wheels, we obtain an omnidirectional mobile platform, enabling a simple and effective trajectory control method. The reference trajectories are generated by line simplification and B-spline approximation of digitized input curves obtained from Canny's edge-detection algorithm on a gray image. Experimental results for picture drawing show the advantage of the proposed method. Full article
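The line-simplification stage mentioned in the abstract is commonly done with the Ramer-Douglas-Peucker algorithm; the standalone sketch below illustrates that general technique only (the tolerance and test curve are made up, and the paper's actual implementation may differ):

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker line simplification (recursive sketch)."""
    if len(points) < 3:
        return np.asarray(points)
    start, end = points[0], points[-1]
    d = end - start
    norm = np.hypot(d[0], d[1])
    if norm == 0.0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # Perpendicular distance of each point to the chord start-end.
        dists = np.abs(d[0] * (points[:, 1] - start[1])
                       - d[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        # Split at the farthest point and simplify both halves.
        left = rdp(points[: idx + 1], eps)
        right = rdp(points[idx:], eps)
        return np.vstack([left[:-1], right])
    # All points lie within tolerance of the chord: keep only the endpoints.
    return np.vstack([start, end])

# A nearly straight digitized segment collapses to its two endpoints.
t = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([t, 0.001 * np.sin(20.0 * t)])
simplified = rdp(pts, eps=0.01)
```

The simplified polyline is then a natural input for B-spline approximation, since it removes redundant collinear samples before fitting.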

Open Access Article: Automated Remote Insect Surveillance at a Global Scale and the Internet of Things
Robotics 2017, 6(3), 19; doi:10.3390/robotics6030019
Received: 26 May 2017 / Revised: 11 August 2017 / Accepted: 21 August 2017 / Published: 22 August 2017
PDF Full-text (2696 KB) | HTML Full-text | XML Full-text
Abstract
The concept of remote insect surveillance at large spatial scales for many serious insect pests of agricultural and medical importance has been introduced in a series of our papers. We augment typical, low-cost plastic traps for many insect pests with the optoelectronic sensors necessary to guard the entrance of the trap and to detect, time-stamp, GPS-tag, and, in relevant cases, identify the species of the incoming insect from its wingbeat. For every important crop pest, there are monitoring protocols to be followed to decide when to initiate a treatment procedure before a serious infestation occurs. These protocols are mainly based on specifically designed insect traps. Traditional insect monitoring suffers in that its scope is curtailed by cost: it requires intensive labor, is time consuming, often needs an expert to achieve sufficient accuracy, and can sometimes raise safety issues for humans. These disadvantages reduce the extent to which manual insect monitoring is applied, and therefore its accuracy, which finally results in significant crop loss due to damage caused by pests. With the term 'surveillance' we intend to push the monitoring idea to unprecedented levels of information extraction regarding the presence, time-stamped detection events, species identification, and population density of targeted insect pests. Insect counts, as well as environmental parameters that correlate with insect population development, are wirelessly transmitted to the central monitoring agency in real time, where they are visualized and fed to statistical methods that support timely pest-control decisions. In this work, we emphasize how the traps can be self-organized in networks that collectively report data at local, regional, country, continental, and global scales using the emerging technology of the Internet of Things (IoT).
This research is necessarily interdisciplinary, falling at the intersection of entomology, optoelectronic engineering, data science, and crop science, and encompasses the design and implementation of low-cost, low-power technology to help reduce the extent of quantitative and qualitative crop losses caused by many of the most significant agricultural pests. We argue that smart traps communicating through the IoT to report the pest population level in real time, from the field straight to a human-controlled agency, can in the very near future have a profound impact on the decision-making process in crop protection and will be disruptive of existing manual practices. In the present study, three cases are investigated: monitoring Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) using (a) a Picusan trap and (b) a Lindgren trap; and (c) monitoring various stored-grain beetle pests using the stored-grain pitfall trap. Our approach is very accurate, reaching 98–99% accuracy for automatic counts compared with the real numbers of insects detected in each type of trap. Full article
(This article belongs to the Special Issue Agriculture Robotics)

Open Access Article: Proposal of Novel Model for a 2 DOF Exoskeleton for Lower-Limb Rehabilitation
Robotics 2017, 6(3), 20; doi:10.3390/robotics6030020
Received: 21 June 2017 / Revised: 13 August 2017 / Accepted: 31 August 2017 / Published: 7 September 2017
PDF Full-text (12905 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, engineering works side by side with the medical sciences to design and create devices that help improve medical processes. Physiotherapy is one such area, where several devices aimed at enhancing and assisting therapies are being studied and developed. Together with physiotherapists, mechanical and electronics engineers are developing exoskeletons: electromechanical devices attached to limbs that help the user move, or correct the movement of, those limbs, providing automatic therapies with flexible and configurable programs to improve autonomy and fit the needs of each patient. Exoskeletons can enhance the effectiveness of physiotherapy and reduce patient rehabilitation time. As a contribution, this paper proposes a dynamic model of a two-degrees-of-freedom (2 DOF) leg exoskeleton acting on the knee and ankle to treat people with partial disability in the lower limbs. The model can be adapted to any person using the variables of mass and height, making it a flexible alternative for calculating the exoskeleton dynamics very quickly and adapting them easily to a child's or young adult's body. In addition, this paper includes the linearization of the model and an analysis of its observability and controllability, as a preliminary study for control strategy applications. Full article
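Mass-and-height parameterizations of limb models typically rely on anthropometric scaling tables. The sketch below illustrates that general approach with segment fractions in the style of Winter's biomechanics tables; the exact coefficients and the lumped-mass torque model are illustrative assumptions, not the authors' values:

```python
import math

def segment_params(mass_kg, height_m):
    # Segment masses/lengths as fractions of body mass and stature,
    # in the style of Winter's anthropometric tables (coefficients here
    # are illustrative assumptions).
    shank = {"mass": 0.0465 * mass_kg, "length": 0.246 * height_m}
    foot = {"mass": 0.0145 * mass_kg, "length": 0.152 * height_m}
    return shank, foot

def knee_gravity_torque(shank, foot, theta_rad, g=9.81):
    # Static gravity torque about the knee, with the shank COM at mid-length
    # and the foot mass lumped at the ankle (a deliberate simplification).
    arm_shank = 0.5 * shank["length"] * math.sin(theta_rad)
    arm_foot = shank["length"] * math.sin(theta_rad)
    return g * (shank["mass"] * arm_shank + foot["mass"] * arm_foot)

shank, foot = segment_params(70.0, 1.75)      # a 70 kg, 1.75 m adult
tau = knee_gravity_torque(shank, foot, math.radians(30.0))
```

Because every segment parameter is a fraction of mass and height, the same two inputs rescale the whole dynamic model, which is what makes adapting it to a child's or an adult's body straightforward.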

Open Access Feature Paper Article: Building a ROS-Based Testbed for Realistic Multi-Robot Simulation: Taking the Exploration as an Example
Robotics 2017, 6(3), 21; doi:10.3390/robotics6030021
Received: 9 August 2017 / Revised: 8 September 2017 / Accepted: 11 September 2017 / Published: 12 September 2017
PDF Full-text (733 KB) | HTML Full-text | XML Full-text
Abstract
While the robotics community agrees that benchmarking is highly important for objectively comparing different solutions, only a few, limited tools exist to support it. To address this issue in the context of multi-robot systems, we have defined a benchmarking process based on experimental designs, which aims at improving the reproducibility of experiments by making explicit all elements of a benchmark, such as parameters, measurements, and metrics. We have also developed a ROS (Robot Operating System)-based testbed with the goal of making it easy for users to validate, benchmark, and compare different algorithms, including coordination strategies. Our testbed uses the MORSE (Modular OpenRobots Simulation Engine) simulator for realistic simulation and a computer cluster for decentralized computation. In this paper, we present our testbed in detail, covering the architecture and infrastructure, the issues encountered in implementing the infrastructure, and the automation of the deployment. We also report a series of experiments on multi-robot exploration to demonstrate the capabilities of our testbed. Full article

Review

Jump to: Research

Open Access Review: Application of Augmented Reality and Robotic Technology in Broadcasting: A Survey
Robotics 2017, 6(3), 18; doi:10.3390/robotics6030018
Received: 26 May 2017 / Revised: 21 July 2017 / Accepted: 7 August 2017 / Published: 17 August 2017
PDF Full-text (2843 KB) | HTML Full-text | XML Full-text
Abstract
As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and overlaid on the surface of the environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen, so that the performance of AR broadcasting can be further improved, which is highlighted in this paper. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)

Journal Contact

MDPI AG
Robotics Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18