MDPI Contact

MDPI AG
St. Alban-Anlage 66,
4052 Basel, Switzerland
Support contact
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18


Search Results

99 articles matched your search query. Search Parameters:
Journal = robotics
Displaying articles 1–50 on page 1 of 2.
Open Access Article Compressed Voxel-Based Mapping Using Unsupervised Learning
Robotics 2017, 6(3), 15; doi:10.3390/robotics6030015
Received: 11 May 2017 / Revised: 20 June 2017 / Accepted: 26 June 2017 / Published: 29 June 2017
Viewed by 553 | PDF Full-text (1059 KB) | HTML Full-text | XML Full-text
Abstract
In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. It is demonstrated that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets due to the rejection of high-frequency noise content. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
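The PCA variant mentioned in the abstract can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions (fixed-size voxel blocks flattened to row vectors; all function names are ours, not the authors'):

```python
import numpy as np

def fit_pca_basis(blocks, k):
    """Fit a k-component PCA basis to flattened voxel blocks (n_blocks x d)."""
    mean = blocks.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
    return mean, vt[:k]                  # shapes: (d,), (k, d)

def compress(block, mean, basis):
    """Project one flattened block onto the basis: d values -> k coefficients."""
    return basis @ (block - mean)

def reconstruct(code, mean, basis):
    """Lossy reconstruction of the block from its k coefficients."""
    return mean + basis.T @ code
```

For 8×8×8 blocks (d = 512), keeping k = 32 coefficients per block would give a 16:1 compression ratio before any further entropy coding.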

Open Access Article Automated Assembly Using 3D and 2D Cameras
Robotics 2017, 6(3), 14; doi:10.3390/robotics6030014
Received: 31 March 2017 / Revised: 20 May 2017 / Accepted: 19 June 2017 / Published: 27 June 2017
Viewed by 198 | PDF Full-text (21136 KB) | HTML Full-text | XML Full-text
Abstract
2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting and viewpoint angles can reduce the accuracy of a method, possibly even to the point of producing erroneous results, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners, and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to below 1 mm and 1°. This was demonstrated in a real industrial assembly task where high accuracy is required. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)

Open Access Article Augmented Reality Guidance with Multimodality Imaging Data and Depth-Perceived Interaction for Robot-Assisted Surgery
Robotics 2017, 6(2), 13; doi:10.3390/robotics6020013
Received: 30 March 2017 / Revised: 18 May 2017 / Accepted: 22 May 2017 / Published: 24 May 2017
Viewed by 612 | PDF Full-text (6090 KB) | HTML Full-text | XML Full-text
Abstract
Image-guided surgical procedures are challenged by mono image modality, two-dimensional anatomical guidance and non-intuitive human-machine interaction. The introduction of tablet-based augmented reality (AR) into surgical robots may assist surgeons with overcoming these problems. In this paper, we propose and develop a robot-assisted surgical system with interactive surgical guidance using tablet-based AR with a Kinect sensor for three-dimensional (3D) localization of patient anatomical structures and intraoperative 3D surgical tool navigation. Depth data acquired from the Kinect sensor was visualized in cone-shaped layers for 3D AR-assisted navigation. Virtual visual cues generated by the tablet were overlaid on the images of the surgical field for spatial reference. We evaluated the proposed system and the experimental results showed that the tablet-based visual guidance system could assist surgeons in locating internal organs, with errors between 1.74 and 2.96 mm. We also demonstrated that the system was able to provide mobile augmented guidance and interaction for surgical tool navigation. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)

Open Access Article Bin-Dog: A Robotic Platform for Bin Management in Orchards
Robotics 2017, 6(2), 12; doi:10.3390/robotics6020012
Received: 1 April 2017 / Revised: 10 May 2017 / Accepted: 18 May 2017 / Published: 22 May 2017
Viewed by 635 | PDF Full-text (7364 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Bin management during apple harvest season is an important activity for orchards. Typically, empty and full bins are handled by tractor-mounted forklifts or bin trailers in two separate trips. In order to simplify this work process and improve work efficiency of bin management, the concept of a robotic bin-dog system is proposed in this study. This system is designed with a “go-over-the-bin” feature, which allows it to drive over bins between tree rows and complete the above process in one trip. To validate this system concept, a prototype and its control and navigation system were designed and built. Field tests were conducted in a commercial orchard to validate its key functionalities in three tasks including headland turning, straight-line tracking between tree rows, and “go-over-the-bin.” Tests of the headland turning showed that the bin-dog followed a predefined path to align with an alleyway with lateral and orientation errors of 0.02 m and 1.5°. Tests of straight-line tracking showed that the bin-dog could successfully track the alleyway centerline at speeds up to 1.00 m·s−1 with an RMSE offset of 0.07 m. The navigation system also successfully guided the bin-dog to complete the task of go-over-the-bin at a speed of 0.60 m·s−1. The successful validation tests proved that the prototype can achieve all desired functionality. Full article
(This article belongs to the Special Issue Agriculture Robotics)

Open Access Article Feasibility of Using the Optical Sensing Techniques for Early Detection of Huanglongbing in Citrus Seedlings
Robotics 2017, 6(2), 11; doi:10.3390/robotics6020011
Received: 6 January 2017 / Revised: 10 April 2017 / Accepted: 19 April 2017 / Published: 23 April 2017
Viewed by 782 | PDF Full-text (1593 KB) | HTML Full-text | XML Full-text
Abstract
A vision sensor was introduced and tested for early detection of citrus Huanglongbing (HLB). This disease is caused by the bacterium Candidatus Liberibacter asiaticus (CLas) and is transmitted by the Asian citrus psyllid. HLB is a devastating disease that has exerted a significant impact on citrus yield and quality in Florida. Unfortunately, no cure has been reported for HLB. Starch accumulates in HLB infected leaf chloroplasts, which causes the mottled blotchy green pattern. Starch rotates the polarization plane of light. A polarized imaging technique was used to detect the polarization-rotation caused by the hyper-accumulation of starch as a pre-symptomatic indication of HLB in young seedlings. Citrus seedlings were grown in a room with controlled conditions and exposed to intensive feeding by CLas-positive psyllids for eight weeks. A quantitative polymerase chain reaction was employed to confirm the HLB status of samples. Two datasets were acquired; the first created one month after the exposure to psyllids and the second two months later. The results showed that, with relatively unsophisticated imaging equipment, four levels of HLB infections could be detected with accuracies of 72%–81%. As expected, increasing the time interval between psyllid exposure and imaging increased the development of symptoms and, accordingly, improved the detection accuracy. Full article
(This article belongs to the Special Issue Agriculture Robotics)

Open Access Article Binaural Range Finding from Synthetic Aperture Computation as the Head is Turned
Robotics 2017, 6(2), 10; doi:10.3390/robotics6020010
Received: 9 January 2017 / Revised: 22 March 2017 / Accepted: 14 April 2017 / Published: 19 April 2017
Viewed by 685 | PDF Full-text (1514 KB) | HTML Full-text | XML Full-text
Abstract
A solution to binaural direction finding described in Tamsett (Robotics 2017, 6(1), 3) is a synthetic aperture computation (SAC) performed as the head is turned while listening to a sound. A far-range approximation in that paper is relaxed in this one and the method extended for SAC as a function of range for estimating range to an acoustic source. An instantaneous angle λ (lambda) between the auditory axis and direction to an acoustic source locates the source on a small circle of colatitude (lambda circle) of a sphere symmetric about the auditory axis. As the head is turned, data over successive instantaneous lambda circles are integrated in a virtual field of audition from which the direction to an acoustic source can be inferred. Multiple sets of lambda circles generated as a function of range yield an optimal range at which the circles intersect to best focus at a point in a virtual three-dimensional field of audition, providing an estimate of range. A proof of concept is demonstrated using simulated experimental data. The method enables a binaural robot to estimate not only direction but also range to an acoustic source from sufficiently accurate measurements of arrival time/level differences at the antennae. Full article

Open Access Article Visual Place Recognition for Autonomous Mobile Robots
Robotics 2017, 6(2), 9; doi:10.3390/robotics6020009
Received: 14 March 2017 / Revised: 10 April 2017 / Accepted: 12 April 2017 / Published: 17 April 2017
Viewed by 790 | PDF Full-text (12706 KB) | HTML Full-text | XML Full-text
Abstract
Place recognition is an essential component of autonomous mobile robot navigation. It is used for loop-closure detection to maintain consistent maps, to localize the robot along a route, or in kidnapped-robot situations. Camera sensors provide rich visual information for this task. We compare different approaches for visual place recognition: holistic methods (visual compass and warping), signature-based methods (using Fourier coefficients or feature descriptors such as ABLE, able for binary-appearance loop-closure evaluation), and feature-based methods (fast appearance-based mapping, FabMap). As new contributions, we investigate whether warping, a successful visual homing method, is suitable for place recognition. In addition, we extend the well-known visual compass to use multiple scale planes, a concept also employed by warping. To achieve tolerance against changing illumination conditions, we examine the NSAD distance measure (normalized sum of absolute differences) on edge-filtered images. To reduce the impact of illumination changes on the distance values, we suggest computing ratios of image distances to normalize these values to a common range. We test all methods on multiple indoor databases, as well as a small outdoor database, using images with constant or changing illumination conditions. ROC analysis (receiver operating characteristic) and the metric distance between best-matching image pairs are used as evaluation measures. Most methods perform well under constant illumination conditions, but fail under changing illumination. The visual compass using the NSAD measure on edge-filtered images with multiple scale planes, while being slower than signature methods, performs best in the latter case. Full article
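The NSAD measure and the distance-ratio normalization mentioned above can be read as follows. This is a plausible sketch, not the paper's exact definitions: the gradient-magnitude edge filter and the per-image normalization are our assumptions:

```python
import numpy as np

def edge_filter(img):
    """Gradient-magnitude edge image (a stand-in for the paper's edge filtering)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def nsad(a, b):
    """Normalized sum of absolute differences between two equal-size images.
    'Normalized' here means each image is scaled to zero mean and unit energy
    first; the exact normalization used in the paper may differ."""
    def norm(x):
        x = x - x.mean()
        n = np.linalg.norm(x)
        return x / n if n > 0 else x
    return np.abs(norm(a) - norm(b)).sum()

def distance_ratio(d_query, d_reference):
    """Ratio of two image distances, bringing distance values to a common range."""
    return d_query / d_reference
```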

Open Access Article Robot-Assisted Crowd Evacuation under Emergency Situations: A Survey
Robotics 2017, 6(2), 8; doi:10.3390/robotics6020008
Received: 27 December 2016 / Revised: 27 March 2017 / Accepted: 31 March 2017 / Published: 7 April 2017
Cited by 1 | Viewed by 672 | PDF Full-text (1943 KB) | HTML Full-text | XML Full-text
Abstract
In the case of emergency situations, robotic systems can play a key role and save human lives in recovery and evacuation operations. To realize such a potential, we have to address many scientific and technical challenges encountered during robotic search and rescue missions. This paper reviews current state-of-the-art robotic technologies that have been deployed in the simulation of crowd evacuation, including both macroscopic and microscopic models used in simulating a crowd. Existing work on crowd simulation is analyzed and the robots used in crowd evacuation are introduced. Finally, the paper demonstrates how autonomous robots could be effectively deployed in disaster evacuation, as well as search and rescue missions. Full article

Open Access Article An Optimal and Energy Efficient Multi-Sensor Collision-Free Path Planning Algorithm for a Mobile Robot in Dynamic Environments
Robotics 2017, 6(2), 7; doi:10.3390/robotics6020007
Received: 4 December 2016 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 31 March 2017
Viewed by 807 | PDF Full-text (6381 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
There has been remarkable growth in many different real-time systems in the area of autonomous mobile robots. This paper focuses on the collaboration of efficient multi-sensor systems to create new optimal motion planning for mobile robots. The proposed algorithm is based on a new model that produces the shortest and most energy-efficient path from a given initial point to a goal point. The distance and time traveled, in addition to the consumed energy, have an asymptotic complexity of O(n log n), where n is the number of obstacles. Real-time experiments are performed to demonstrate the accuracy and energy efficiency of the proposed motion planning algorithm. Full article

Open Access Article A New Combined Vision Technique for Micro Aerial Vehicle Pose Estimation
Robotics 2017, 6(2), 6; doi:10.3390/robotics6020006
Received: 5 December 2016 / Revised: 20 February 2017 / Accepted: 23 March 2017 / Published: 28 March 2017
Viewed by 682 | PDF Full-text (4718 KB) | HTML Full-text | XML Full-text
Abstract
In this work, a new combined vision technique (CVT) is proposed, comprehensively developed, and experimentally tested for stable, precise unmanned micro aerial vehicle (MAV) pose estimation. The CVT combines two measurement methods (multi- and mono-view) based on different constraint conditions. These constraints are considered simultaneously by the particle filter framework to improve the accuracy of visual positioning. The framework, which is driven by an onboard inertial module, takes the positioning results from the visual system as measurements and updates the vehicle state. Moreover, experimental testing and data analysis have been carried out to verify the proposed algorithm, including multi-camera configuration, design and assembly of MAV systems, and the marker detection and matching between different views. Our results indicated that the combined vision technique is very attractive for high-performance MAV pose estimation. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
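The particle filter framework described in the abstract (inertial prediction, visual measurement update, resampling) can be sketched for a simplified one-dimensional position state. Every parameter and name here is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def pf_step(particles, imu_delta, vision_meas, meas_std, motion_std, rng):
    """One predict/update/resample cycle of a (much simplified, 1-D position)
    particle filter: IMU odometry drives the prediction, a visual position
    fix supplies the measurement weight."""
    # Predict: propagate each particle with the inertial increment plus noise.
    particles = particles + imu_delta + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the visual position measurement.
    w = np.exp(-0.5 * ((vision_meas - particles) / meas_std) ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

In the paper's setting the state would be a full 6-DoF pose and the measurement would come from the multi-/mono-view visual system; the cycle structure is the same.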

Open Access Article Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing
Robotics 2017, 6(1), 5; doi:10.3390/robotics6010005
Received: 30 November 2016 / Revised: 7 March 2017 / Accepted: 14 March 2017 / Published: 17 March 2017
Viewed by 899 | PDF Full-text (7162 KB) | HTML Full-text | XML Full-text
Abstract
Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently from the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and safe manipulation that will preserve the physical integrity of the object. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
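Dynamic time warping, used above to compare tracked contours of varying length, has a standard textbook form. The sketch below operates on one-dimensional feature sequences (for instance, per-point curvature along the contour), which is our assumption rather than the paper's exact signal:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences of possibly
    different lengths, via the classic O(len(a)*len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because warping absorbs local stretching, two contours sampled at different densities can still compare as identical, which is exactly the length-invariance the abstract relies on.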

Open Access Article Robot-Assisted Therapy for Learning and Social Interaction of Children with Autism Spectrum Disorder
Robotics 2017, 6(1), 4; doi:10.3390/robotics6010004
Received: 1 December 2016 / Revised: 6 March 2017 / Accepted: 7 March 2017 / Published: 14 March 2017
Viewed by 817 | PDF Full-text (1668 KB) | HTML Full-text | XML Full-text
Abstract
This paper puts forward the potential for designing a parrot-inspired robot and an indirect teaching technique, the adapted model-rival method (AMRM), to help improve learning and social interaction abilities of children with autism spectrum disorder. The AMRM was formulated by adapting two popular conventional approaches, namely, model-rival method and label-training procedure. In our validation trials, we used a semi-autonomous parrot-inspired robot, called KiliRo, to simulate a set of autonomous behaviors. A proposed robot-assisted therapy using AMRM was pilot tested with nine children with autism spectrum disorder for five consecutive days in a clinical setting. We analyzed the facial expressions of children when they interacted with KiliRo using an automated emotion recognition and classification system, Oxford emotion API (Application Programming Interface). Results provided some indication that the children with autism spectrum disorder appeared attracted and happy to interact with the parrot-inspired robot. Short qualitative interviews with the children’s parents, the pediatrician, and the child psychologist who participated in this pilot study, also acknowledged that the proposed parrot-inspired robot and the AMRM may have some merit in aiding in improving learning and social interaction abilities of children with autism spectrum disorder. Full article

Open Access Article Synthetic Aperture Computation as the Head is Turned in Binaural Direction Finding
Robotics 2017, 6(1), 3; doi:10.3390/robotics6010003
Received: 28 December 2016 / Revised: 28 February 2017 / Accepted: 9 March 2017 / Published: 12 March 2017
Cited by 1 | Viewed by 665 | PDF Full-text (2759 KB) | HTML Full-text | XML Full-text
Abstract
Binaural systems measure instantaneous time/level differences between acoustic signals received at the ears to determine angles λ between the auditory axis and directions to acoustic sources. An angle λ locates a source on a small circle of colatitude (a lambda circle) on a sphere symmetric about the auditory axis. As the head is turned while listening to a sound, acoustic energy over successive instantaneous lambda circles is integrated in a virtual/subconscious field of audition. The directions in azimuth and elevation to maxima in integrated acoustic energy, or to points of intersection of lambda circles, are the directions to acoustic sources. This process in a robotic system, or in nature in a neural implementation equivalent to it, delivers its solutions to the aurally informed worldview. The process is analogous to migration applied to seismic profiler data, and to that in synthetic aperture radar/sonar systems. A slanting auditory axis, e.g., possessed by species of owl, leads to the auditory axis sweeping the surface of a cone as the head is turned about a single axis. Thus, the plane in which the auditory axis turns continuously changes, enabling robustly unambiguous directions to acoustic sources to be determined. Full article
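Under a far-field straight-path model, the relation behind a lambda circle is cos λ = cΔt/d, where Δt is the interaural time difference, d the ear separation, and c the speed of sound. The helper below is our reconstruction of this standard relation, not code from the paper:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 °C

def lambda_from_itd(itd_s, ear_separation_m):
    """Angle (radians) between the auditory axis and the source direction,
    from the interaural time difference, assuming a far-field source:
    cos(lambda) = c * itd / d.  Convention: positive itd means the sound
    reaches the axis-positive ear first."""
    x = SPEED_OF_SOUND * itd_s / ear_separation_m
    x = max(-1.0, min(1.0, x))  # clamp numerical overshoot of the domain
    return math.acos(x)
```

A zero time difference places the source broadside (λ = π/2); the maximum difference d/c places it on the auditory axis (λ = 0), and each intermediate value picks out one lambda circle.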

Open Access Article Experimental and Simulation-Based Investigation of Polycentric Motion of an Inherent Compliant Pneumatic Bending Actuator with Skewed Rotary Elastic Chambers
Robotics 2017, 6(1), 2; doi:10.3390/robotics6010002
Received: 13 September 2016 / Revised: 6 January 2017 / Accepted: 17 January 2017 / Published: 25 January 2017
Viewed by 776 | PDF Full-text (13198 KB) | HTML Full-text | XML Full-text
Abstract
To offer a functionality that could not be found in traditional rigid robots, compliant actuators are in development worldwide for a variety of applications and especially for human–robot interaction. Pneumatic bending actuators are a special kind of such actuators. Due to the absence of fixed mechanical axes and their soft behavior, these actuators generally possess a polycentric motion ability. This can be very useful to provide an implicit self-alignment to human joint axes in exoskeleton-like rehabilitation devices. As a possible realization, a novel bending actuator (BA) was developed using patented pneumatic skewed rotary elastic chambers (sREC). To analyze the actuator self-alignment properties, knowledge about the motion of this bending actuator type, the so-called skewed rotary elastic chambers bending actuator (sRECBA), is of high interest and this paper presents experimental and simulation-based kinematic investigations. First, to describe actuator motion, the finite helical axes (FHA) of basic actuator elements are determined using a three-dimensional (3D) camera system. Afterwards, a simplified two-dimensional (2D) kinematic simulation model based on a four-bar linkage was developed and the motion was compared to the experimental data by calculating the instantaneous center of rotation (ICR). The equivalent kinematic model of the sRECBA was realized using a series of four-bar linkages and the resulting ICR was analyzed in simulation. Finally, the FHA of the sRECBA were determined and analyzed for three different specific motions. The results show that the actuator’s FHA adapt to different motions performed and it can be assumed that implicit self-alignment to the polycentric motion of the human joint axis will be provided. Full article

Open Access Editorial Acknowledgement to Reviewers of Robotics in 2016
Robotics 2017, 6(1), 1; doi:10.3390/robotics6010001
Received: 11 January 2017 / Revised: 11 January 2017 / Accepted: 11 January 2017 / Published: 11 January 2017
Viewed by 644 | PDF Full-text (269 KB) | HTML Full-text | XML Full-text
Abstract The editors of Robotics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. Full article
Open Access Article Complete Coverage Path Planning for a Multi-UAV Response System in Post-Earthquake Assessment
Robotics 2016, 5(4), 26; doi:10.3390/robotics5040026
Received: 10 October 2016 / Revised: 11 November 2016 / Accepted: 17 November 2016 / Published: 2 December 2016
Viewed by 829 | PDF Full-text (2108 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a post-earthquake response system for rapid damage assessment. In this system, multiple Unmanned Aerial Vehicles (UAVs) are deployed to collect images from the earthquake site and create a response map for extracting useful information. It is an extension of the well-known coverage path planning (CPP) problem, based on a grid-pattern map decomposition. In addition to some linear strengthening techniques, two mathematical formulations, 4-index and 5-index models, are proposed in the approach and coded in GAMS (Cplex solver). They are tested on a number of problems and the results show that the 5-index model outperforms the 4-index model. Moreover, the proposed system can be significantly improved by solver-generated cuts, additional constraints, and variable branching priority extensions. Full article

Open Access Article Improving Robot Mobility by Combining Downward-Looking and Frontal Cameras
Robotics 2016, 5(4), 25; doi:10.3390/robotics5040025
Received: 4 October 2016 / Revised: 9 November 2016 / Accepted: 15 November 2016 / Published: 28 November 2016
Viewed by 943 | PDF Full-text (7032 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel attempt to combine a downward-looking camera and a forward-looking camera for terrain classification in the field of off-road mobile robots. The first camera is employed to identify the terrain beneath the robot. This information is then used to improve the classification of the forthcoming terrain acquired from the frontal camera. This research also shows the usefulness of the Gist descriptor for terrain classification purposes. Physical experiments conducted in different (quasi-planar) terrains and different lighting conditions confirm the satisfactory performance of this approach in comparison with a simple color-based classifier based only on frontal images. Our proposal substantially reduces the misclassification rate of the color-based classifier (∼10% versus ∼20%). Full article

Open Access Article A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms
Robotics 2016, 5(4), 24; doi:10.3390/robotics5040024
Received: 5 July 2016 / Revised: 31 October 2016 / Accepted: 4 November 2016 / Published: 9 November 2016
Viewed by 814 | PDF Full-text (3072 KB) | HTML Full-text | XML Full-text
Abstract
Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and their effect on the results in real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions that include new algorithms and features. It is currently available online for the research community. Full article
(This article belongs to the Special Issue Robotics and 3D Vision)
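The chaining scheme described above, where each process consumes the output of the previous one, amounts to a function pipeline. A minimal sketch (the testbed itself is Matlab-based; this Python stand-in and its toy stages are assumptions for illustration only):

```python
def run_pipeline(data, stages):
    """Chain processing stages so each stage's output feeds the next,
    mirroring how the testbed sequences matching and filtering algorithms."""
    result = data
    for stage in stages:
        result = stage(result)
    return result

# Toy stages standing in for, e.g., a stereo matcher and a post-filter.
stages = [lambda x: x * 2, lambda x: x + 1]
out = run_pipeline(10, stages)
```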
Open AccessArticle Auto-Calibration Methods of Kinematic Parameters and Magnetometer Offset for the Localization of a Tracked Mobile Robot
Robotics 2016, 5(4), 23; doi:10.3390/robotics5040023
Received: 5 August 2016 / Revised: 10 October 2016 / Accepted: 27 October 2016 / Published: 1 November 2016
Viewed by 757 | PDF Full-text (9436 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes an automatic calibration procedure adopted to improve the localization of an outdoor mobile robot. The proposed algorithm uses an extended Kalman filter to estimate the main kinematic parameters of the vehicle, such as the wheel radii and the wheelbase, as well as the magnetometer offset. Several trials have been performed to validate the proposed strategy on a tracked electrical mobile robot. The mobile robot is intended to serve as a tool to support humanitarian demining operations. Full article
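To see where those parameters enter, consider a basic differential/skid-steer odometry step: the wheel radii and wheelbase appear directly in the motion model, which is why estimating them (e.g. as extra EKF states) improves dead reckoning. A minimal sketch under assumed symbols, not the paper's filter:

```python
import math

def track_odometry(x, y, theta, wl, wr, r_l, r_r, wheelbase, dt):
    """One odometry step: wheel angular rates (wl, wr) with the calibration
    parameters (r_l, r_r, wheelbase) that an EKF can estimate online."""
    v = (r_r * wr + r_l * wl) / 2.0            # forward speed
    omega = (r_r * wr - r_l * wl) / wheelbase  # yaw rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# With equal radii and equal wheel rates the robot drives straight;
# a mis-calibrated radius would instead produce a spurious turn.
x2, y2, th2 = track_odometry(0.0, 0.0, 0.0,
                             wl=1.0, wr=1.0, r_l=0.1, r_r=0.1,
                             wheelbase=0.5, dt=1.0)
```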
Open AccessArticle Deployment Environment for a Swarm of Heterogeneous Robots
Robotics 2016, 5(4), 22; doi:10.3390/robotics5040022
Received: 8 June 2016 / Revised: 10 September 2016 / Accepted: 26 September 2016 / Published: 26 October 2016
Viewed by 734 | PDF Full-text (3724 KB) | HTML Full-text | XML Full-text
Abstract
The objective of this work is to develop a framework that can deploy and provide coordination between multiple heterogeneous agents when a swarm robotic system adopts a decentralized approach; each robot evaluates its relative rank among the other robots in terms of travel distance and cost to the goal. Accordingly, robots are allocated to the sub-tasks for which they have the highest rank (utility). This paper provides an analysis of existing swarm control environments and proposes a software environment that facilitates rapid deployment of multiple robotic agents. The framework (UBSwarm) exploits our utility-based task allocation algorithm. UBSwarm configures these robots and assigns the group of robots a particular task from a set of available tasks. Two major tasks have been introduced to show the performance of a robotic group composed of heterogeneous agents. In the results, a preliminary experiment with prior knowledge about the setup shows whether or not the robots are able to accomplish the task. Full article
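A utility-based allocation of the kind described, where each robot is ranked per sub-task by a cost such as travel distance and the best-ranked free robot takes the task, can be sketched greedily. This is an illustrative sketch with made-up costs, not UBSwarm's actual algorithm:

```python
def allocate(robots, tasks):
    """Greedy utility-based allocation: for each task, pick the free robot
    with the lowest cost (highest utility) and mark it as assigned."""
    assignment = {}
    free = set(robots)
    for task, costs in tasks.items():
        best = min(free, key=lambda r: costs[r])
        assignment[task] = best
        free.discard(best)
    return assignment

robots = ["r1", "r2"]
# costs[task][robot]: e.g. travel distance to the task site (illustrative)
tasks = {"scout": {"r1": 2.0, "r2": 5.0},
         "carry": {"r1": 4.0, "r2": 1.0}}
plan = allocate(robots, tasks)
```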
Open AccessArticle Towards an Explanation Generation System for Robots: Analysis and Recommendations
Robotics 2016, 5(4), 21; doi:10.3390/robotics5040021
Received: 10 May 2016 / Revised: 29 August 2016 / Accepted: 26 September 2016 / Published: 13 October 2016
Viewed by 824 | PDF Full-text (585 KB) | HTML Full-text | XML Full-text
Abstract
A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpected observations and partial descriptions extracted from sensor observations. Existing explanation generation systems draw on ideas that can be mapped to a multidimensional space of system characteristics, defined by distinctions, such as how they represent knowledge and if and how they reason with heuristic guidance. Instances in this multidimensional space corresponding to existing systems do not support all of the desired explanation generation capabilities for robots. We seek to address this limitation by thoroughly understanding the range of explanation generation capabilities and the interplay between the distinctions that characterize them. Towards this objective, this paper first specifies three fundamental distinctions that can be used to characterize many existing explanation generation systems. We explore and understand the effects of these distinctions by comparing the capabilities of two systems that differ substantially along these axes, using execution scenarios involving a robot waiter assisting in seating people and delivering orders in a restaurant. The second part of the paper uses this study to argue that the desired explanation generation capabilities corresponding to these three distinctions can mostly be achieved by exploiting the complementary strengths of the two systems that were explored. This is followed by a discussion of the capabilities related to other major distinctions to provide detailed recommendations for developing an explanation generation system for robots. Full article
Open AccessArticle Towards Bio-Inspired Chromatic Behaviours in Surveillance Robots
Robotics 2016, 5(4), 20; doi:10.3390/robotics5040020
Received: 30 June 2016 / Revised: 21 September 2016 / Accepted: 26 September 2016 / Published: 29 September 2016
Viewed by 855 | PDF Full-text (3960 KB) | HTML Full-text | XML Full-text
Abstract
The field of Robotics is ever growing while at the same time posing enormous challenges. Numerous works have been done in biologically inspired robotics, emulating models, systems and elements of nature for the purpose of solving traditional robotics problems. Chromatic behaviours are abundant in nature across a variety of living species, which use them to achieve camouflage, signaling, and temperature regulation. The ability of these creatures to successfully blend in with their environment and communicate by changing their colour is the fundamental inspiration for our research work. In this paper, we present dwarf-chameleon-inspired chromatic behaviour in the context of an autonomous surveillance robot, "PACHONDHI". In our experiments, we successfully validated the ability of the robot to autonomously change its colour in relation to the terrain that it is traversing, maximizing detectability to friendly security agents and minimizing exposure to hostile agents, as well as to communicate with fellow cooperating robots. Full article
Open AccessArticle Terrain Perception in a Shape Shifting Rolling-Crawling Robot
Robotics 2016, 5(4), 19; doi:10.3390/robotics5040019
Received: 30 July 2016 / Revised: 26 August 2016 / Accepted: 2 September 2016 / Published: 27 September 2016
Viewed by 939 | PDF Full-text (5167 KB) | HTML Full-text | XML Full-text
Abstract
Terrain perception greatly enhances the performance of robots, providing them with essential information on the nature of the terrain being traversed. Several living beings in nature offer interesting inspiration, adopting different gait patterns according to the nature of the terrain. In this paper, we present a novel terrain perception system for our bioinspired robot, Scorpio, to classify the terrain based on visual features and autonomously choose the appropriate locomotion mode. Our Scorpio robot is capable of crawling and rolling locomotion modes, mimicking Cebrennus rechenbergi, a member of the huntsman spider family. Our terrain perception system uses the Speeded-Up Robust Features (SURF) description method along with color information. Feature extraction is followed by the Bag of Words (BoW) method and a Support Vector Machine (SVM) for terrain classification. Experiments were conducted with our Scorpio robot to establish the efficacy and validity of the proposed approach. In our experiments, we achieved a recognition accuracy of over 90% across four terrain types, namely grass, gravel, wooden deck, and concrete. Full article
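The BoW step in such a pipeline quantises local descriptors (SURF vectors) against a learned visual vocabulary and feeds the resulting histogram to the SVM. A minimal sketch with a toy two-word vocabulary; the vectors and vocabulary are assumptions, not the paper's data:

```python
def bow_histogram(descriptors, vocabulary):
    """Quantise local descriptors (e.g. SURF) against a visual vocabulary
    and return a normalised bag-of-words histogram for a classifier."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        # nearest visual word by squared Euclidean distance
        j = min(range(len(vocabulary)),
                key=lambda k: sum((a - b) ** 2 for a, b in zip(d, vocabulary[k])))
        hist[j] += 1
    n = float(len(descriptors))
    return [h / n for h in hist]

vocab = [[0.0, 0.0], [1.0, 1.0]]             # toy 2-word vocabulary
desc = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]]  # toy "SURF" descriptors
hist = bow_histogram(desc, vocab)
```

In practice the vocabulary would come from clustering (e.g. k-means) over training descriptors, and the histogram, optionally concatenated with colour features, would be the SVM input.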
Open AccessArticle Bio-Inspired Vision-Based Leader-Follower Formation Flying in the Presence of Delays
Robotics 2016, 5(3), 18; doi:10.3390/robotics5030018
Received: 17 June 2016 / Revised: 1 August 2016 / Accepted: 11 August 2016 / Published: 18 August 2016
Viewed by 977 | PDF Full-text (6743 KB) | HTML Full-text | XML Full-text
Abstract
Flocking starlings at dusk are known for the mesmerizing and intricate shapes they generate, as well as for how fluidly these shapes change. They seem to do this effortlessly. Real-life vision-based flocking has not been achieved in micro-UAVs (micro Unmanned Aerial Vehicles) to date. Towards this goal, we make three contributions in this paper: (i) we use a computational approach to develop a bio-inspired architecture for vision-based Leader-Follower formation flying on two micro-UAVs. We believe that the minimal computational cost of the resulting algorithm makes it suitable for object detection and tracking during high-speed flocking; (ii) we show that, provided delays in the control loop of a micro-UAV are below a critical value, Kalman filter-based estimation algorithms are not required to achieve Leader-Follower formation flying; (iii) unlike previous approaches, we do not use external observers, such as GPS signals or synchronized communication with flock members. These three contributions could be useful in achieving vision-based flocking in GPS-denied environments on computationally-limited agents. Full article
Open AccessArticle Estimation of Physical Human-Robot Interaction Using Cost-Effective Pneumatic Padding
Robotics 2016, 5(3), 17; doi:10.3390/robotics5030017
Received: 23 May 2016 / Revised: 5 August 2016 / Accepted: 8 August 2016 / Published: 16 August 2016
Viewed by 1295 | PDF Full-text (8734 KB) | HTML Full-text | XML Full-text
Abstract
The idea of using cost-effective pneumatic padding for sensing physical interaction between a user and wearable rehabilitation robots is not new, but until now there has not been any practically relevant realization. In this paper, we present a novel method to estimate physical human-robot interaction using a pneumatic padding based on artificial neural networks (ANNs). This estimation can serve as a rough indicator of the forces/torques applied by the user and can be used for visual feedback about the user's participation or as additional information for interaction controllers. Unlike common, mostly very expensive 6-axis force/torque sensors (FTS), the proposed sensor system can be easily integrated into the design of physical human-robot interfaces of rehabilitation robots and adapts itself to the shape of the individual patient's extremity by changing the pressure in pneumatic chambers, in order to provide safe physical interaction with high user comfort. This paper describes a concept of using ANNs to estimate interaction forces/torques based on pressure variations of eight customized air-pad chambers. The ANNs were trained once offline using signals of a high-precision FTS, which is also used as the reference sensor for experimental validation. Experiments with three different subjects confirm the functionality of the concept and the estimation algorithm. Full article
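At inference time such an estimator is just a forward pass of a small trained network: eight chamber pressures in, a force/torque estimate out. A minimal sketch; the layer sizes and weights below are placeholders, not the trained network from the paper:

```python
import math

def mlp_forward(pressures, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a small one-hidden-layer MLP with tanh activation:
    chamber pressures in, one interaction torque estimate out."""
    hidden = [math.tanh(sum(w * p for w, p in zip(row, pressures)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Eight chamber pressures; weights are illustrative placeholders, not trained.
pressures = [1.0] + [0.0] * 7
torque = mlp_forward(pressures,
                     w_hidden=[[0.5] + [0.0] * 7],  # one hidden unit
                     b_hidden=[0.0],
                     w_out=[2.0],
                     b_out=0.0)
```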
Open AccessArticle Room Volume Estimation Based on Ambiguity of Short-Term Interaural Phase Differences Using Humanoid Robot Head
Robotics 2016, 5(3), 16; doi:10.3390/robotics5030016
Received: 16 June 2016 / Revised: 14 July 2016 / Accepted: 14 July 2016 / Published: 21 July 2016
Viewed by 763 | PDF Full-text (5941 KB) | HTML Full-text | XML Full-text
Abstract
Humans can recognize approximate room size using only binaural audition. However, sound reverberation is not negligible in most environments. The reverberation causes temporal fluctuations in the short-term interaural phase differences (IPDs) of sound pressure. This study proposes a novel method for a binaural humanoid robot head to estimate room volume. The method is based on the statistical properties of the short-term IPDs of sound pressure. The humanoid robot turns its head toward a sound source, recognizes the sound source, and then estimates the ego-centric distance by its stereovision. By interpolating the relations between room volume, average standard deviation, and ego-centric distance experimentally obtained for various rooms in a prepared database, the room volume was estimated by the binaural audition of the robot from the average standard deviation of the short-term IPDs at the estimated distance. Full article
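The final estimation step described above is an interpolation against a prepared database. A minimal 1-D sketch at a fixed ego-centric distance; the (standard deviation, volume) pairs are illustrative assumptions, not measurements from the paper:

```python
def estimate_room_volume(avg_std, table):
    """Linearly interpolate room volume from the average standard deviation
    of short-term IPDs, using (std, volume) pairs measured at the current
    ego-centric distance."""
    table = sorted(table)
    for (s0, v0), (s1, v1) in zip(table, table[1:]):
        if s0 <= avg_std <= s1:
            t = (avg_std - s0) / (s1 - s0)
            return v0 + t * (v1 - v0)
    raise ValueError("avg_std outside calibrated range")

# Larger rooms -> longer reverberation -> larger IPD fluctuation (illustrative).
db = [(0.2, 30.0), (0.5, 120.0), (0.8, 400.0)]
vol = estimate_room_volume(0.35, db)
```

The full method would interpolate over distance as well, using the stereovision range estimate to select or blend between such tables.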
Open AccessReview Biomimetic Spider Leg Joints: A Review from Biomechanical Research to Compliant Robotic Actuators
Robotics 2016, 5(3), 15; doi:10.3390/robotics5030015
Received: 30 May 2016 / Revised: 6 July 2016 / Accepted: 7 July 2016 / Published: 15 July 2016
Viewed by 1142 | PDF Full-text (3981 KB) | HTML Full-text | XML Full-text
Abstract
Due to their inherent compliance, soft actuated joints are becoming increasingly important for robotic applications, especially when human-robot-interactions are expected. Several of these flexible actuators are inspired by biological models. One perfect showpiece for biomimetic robots is the spider leg, because it combines lightweight design and graceful movements with powerful and dynamic actuation. Building on this motivation, the review article focuses on compliant robotic joints inspired by the function principle of the spider leg. The mechanism is introduced by an overview of existing biological and biomechanical research. Thereupon a classification of robots that are bio-inspired by spider joints is presented. Based on this, the biomimetic robot applications referring to the spider principle are identified and discussed. Full article
Open AccessReview A Survey of Wall Climbing Robots: Recent Advances and Challenges
Robotics 2016, 5(3), 14; doi:10.3390/robotics5030014
Received: 22 March 2016 / Revised: 18 June 2016 / Accepted: 27 June 2016 / Published: 1 July 2016
Viewed by 1072 | PDF Full-text (222 KB) | HTML Full-text | XML Full-text
Abstract
In recent decades, skyscrapers, as represented by the Burj Khalifa in Dubai and the Shanghai Tower in Shanghai, have been built thanks to improvements in construction technologies. Even in such newfangled skyscrapers, the façades are generally cleaned by humans. Wall climbing robots, which are capable of climbing up vertical surfaces, ceilings and roofs, are expected to replace the manual workforce in façade cleaning, which is both hazardous and laborious work. Such tasks require these robotic platforms to possess high levels of adaptability and flexibility. This paper presents a detailed review of wall climbing robots, categorizing them into six distinct classes based on the adhesive mechanism that they use. The paper concludes by expanding beyond adhesive mechanisms and discussing a set of desirable design attributes of an ideal glass façade cleaning robot, towards facilitating targeted future research with clear technical goals and well-defined design trade-off boundaries. Full article
Open AccessArticle Trajectory Generation and Stability Analysis for Reconfigurable Klann Mechanism Based Walking Robot
Robotics 2016, 5(3), 13; doi:10.3390/robotics5030013
Received: 24 March 2016 / Revised: 22 June 2016 / Accepted: 22 June 2016 / Published: 30 June 2016
Viewed by 1079 | PDF Full-text (29419 KB) | HTML Full-text | XML Full-text
Abstract
Reconfigurable legged robots based on one degree of freedom are highly desirable because they are effective on rough and irregular terrains and provide mobility in such terrain with simple control schemes. It is necessary that reconfigurable legged robots maintain stability during rest and motion, with a minimum number of legs, while maintaining their full range of walking patterns resulting from different gait configurations. In this paper we present a method to generate input trajectories for reconfigurable quadruped robots based on the Klann mechanism to properly synchronize movement. Six useful gait cycles based on this reconfigurable Klann mechanism for quadruped robots are clearly shown here. The platform stability for these six useful gait cycles is validated through simulation results, which clearly show the capabilities of the reconfigurable design. Full article
Open AccessArticle IDC Robocon: A Transnational Teaming Competition for Project-Based Design Education in Undergraduate Robotics
Robotics 2016, 5(3), 12; doi:10.3390/robotics5030012
Received: 2 April 2016 / Revised: 5 June 2016 / Accepted: 22 June 2016 / Published: 24 June 2016
Viewed by 967 | PDF Full-text (1369 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a robot design competition called 'IDC Robocon' as an effective tool for engineering education. The International Design Contest (IDC) Robocon competition has several benefits in creating a meaningful design experience for undergraduate engineering students and includes an international flavour, as participants of the competition hail from all around the world. The problem posed to the contestants is to design, build and test mobile robots that are capable of accomplishing a task. A primary goal of the competition is to provide undergraduates with a meaningful design experience with an emphasis on mechanical design, electronic circuits and programming. It is hoped that, by placing the emphasis on design, the course will encourage more undergraduates to go into the field of engineering design. This paper presents the latest 2015 IDC Robocon (the 26th edition) in detail and discusses the course of events and results in terms of the educational experience. In this competition, a simulated space problem of cleaning debris from orbit was posed. Teams comprising students from multiple countries worked together to develop robotic systems that compete with each other in collecting foam balls and delivering them to a rotating holder. Full article
Open AccessReview State of the Art Robotic Grippers and Applications
Robotics 2016, 5(2), 11; doi:10.3390/robotics5020011
Received: 12 March 2016 / Revised: 24 May 2016 / Accepted: 31 May 2016 / Published: 17 June 2016
Cited by 3 | Viewed by 1303 | PDF Full-text (254 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present a recent survey on robotic grippers. In many cases, modern grippers outperform their older counterparts: they are stronger, more repeatable, and faster. Technological advancements have also contributed to the development of gripping for various objects, including soft fabrics, microelectromechanical systems, and synthetic sheets. In addition, newer materials are being used to improve the functionality of grippers, including piezoelectric materials, shape memory alloys, smart fluids, carbon fiber, and many more. This paper covers the very first robotic gripper through to the newest developments in grasping methods. Unlike other survey papers, we focus on the applications of robotic grippers in industry and medicine, as well as grippers for fragile objects and soft fabrics. We report on new advancements in grasping mechanisms and discuss their behavior for different purposes. Finally, we present the future trends of grippers in terms of flexibility and performance and their vital applications in the emerging areas of robotic surgery, industrial assembly, space exploration, and micromanipulation. These advancements provide an outlook on new trends in robotic grippers. Full article
Open AccessArticle A Pinching Strategy for Fabrics Using Wiping Deformation
Robotics 2016, 5(2), 10; doi:10.3390/robotics5020010
Received: 3 February 2016 / Revised: 1 April 2016 / Accepted: 1 April 2016 / Published: 7 April 2016
Viewed by 1007 | PDF Full-text (1617 KB) | HTML Full-text | XML Full-text
Abstract
This paper discusses a strategy by which a robotic hand can use the physical properties of a fabric to pinch the fabric. Pinching may be accomplished by using a wiping motion, during which the movement and deformation of a deformable object occur simultaneously. The wiping motion differs from the displacement of a deformable object. During the wiping motion, there is contact, but no relative movement, between the manipulator and the object, whereas, during displacement, there is both contact and relative movement between the object and the floor. This paper first describes wiping motion and distinguishes wiping slide from wiping deformation by displacement of the internal points of an object. Wiping motion is also shown to be an extended scheme of pushing and sliding of rigid objects. Our strategy for pinching a fabric is accomplished with a combination of wiping deformation and residual deformation of the fabric under unloaded conditions. Using this strategy, a single-armed robotic hand can pinch both surfaces of the fabric without handover motion. Full article
Open AccessArticle Vibration Measurement in High Precision for Flexible Structure Based on Microscopic Vision
Robotics 2016, 5(2), 9; doi:10.3390/robotics5020009
Received: 13 January 2016 / Revised: 25 February 2016 / Accepted: 28 March 2016 / Published: 30 March 2016
Cited by 1 | Viewed by 878 | PDF Full-text (3567 KB) | HTML Full-text | XML Full-text
Abstract
Vibration measurement for flexible structures is widely used in various precision engineering fields. However, it is a challenge to measure vibration in special environments, such as cryogenic, hazardous, or magnetically interfering ones. In this paper, a high-precision vibration measurement system based on machine vision is designed. The circle center on the target is employed as the image feature. The circle feature is extracted using an improved algorithm based on the gradient Hough transform. The image Jacobian matrix is then used to compute the vibrations in Cartesian space from the image feature changes. Experiments verify the effectiveness of the proposed methods. Full article
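For small planar motions the last step reduces to inverting a 2×2 image Jacobian: the tracked circle-centre shift in pixels maps to a Cartesian displacement. A minimal sketch; the Jacobian values below are illustrative assumptions (a calibrated system would measure them):

```python
def pixel_to_cartesian(du, dv, J):
    """Invert a 2x2 image Jacobian to map the tracked circle-centre shift
    (du, dv) in pixels to a Cartesian displacement (dx, dy)."""
    a, b = J[0]
    c, d = J[1]
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("singular image Jacobian")
    return ((d * du - b * dv) / det, (a * dv - c * du) / det)

# Toy Jacobian: 100 px per mm on each axis, no cross-coupling (illustrative).
J = [[100.0, 0.0], [0.0, 100.0]]
dx, dy = pixel_to_cartesian(5.0, -2.0, J)   # displacement in mm
```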
Open AccessFeature PaperReview Extracting Semantic Information from Visual Data: A Survey
Robotics 2016, 5(1), 8; doi:10.3390/robotics5010008
Received: 17 December 2015 / Revised: 12 February 2016 / Accepted: 23 February 2016 / Published: 2 March 2016
Cited by 5 | Viewed by 1261 | PDF Full-text (8010 KB) | HTML Full-text | XML Full-text
Abstract
The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods. Full article
Open AccessFeature PaperArticle Soft Pneumatic Bending Actuator with Integrated Carbon Nanotube Displacement Sensor
Robotics 2016, 5(1), 7; doi:10.3390/robotics5010007
Received: 31 October 2015 / Revised: 6 January 2016 / Accepted: 17 February 2016 / Published: 24 February 2016
Cited by 2 | Viewed by 1683 | PDF Full-text (20199 KB) | HTML Full-text | XML Full-text
Abstract
The excellent compliance and large range of motion of soft actuators controlled by fluid pressure have led to strong interest in applying devices of this type in biomimetic and human-robot interaction applications. However, in contrast to soft actuators fabricated from stretchable silicone materials, conventional technologies for position sensing are typically rigid or bulky and are not ideal for integration into soft robotic devices. Therefore, in order to facilitate the use of soft pneumatic actuators in applications where position sensing or closed-loop control is required, a soft pneumatic bending actuator with an integrated carbon nanotube position sensor has been developed. The integrated carbon nanotube position sensor presented in this work is flexible and well suited to measuring the large displacements frequently encountered in soft robotics. The sensor is produced by a simple soft lithography process during the fabrication of the soft pneumatic actuator, and exhibits a greater than 30% resistance change between the relaxed state and the maximum displacement position. It is anticipated that integrated resistive position sensors using a similar design will be useful in a wide range of soft robotic systems. Full article
Open AccessArticle Application of the Naive Bayes Classifier for Representation and Use of Heterogeneous and Incomplete Knowledge in Social Robotics
Robotics 2016, 5(1), 6; doi:10.3390/robotics5010006
Received: 27 September 2015 / Revised: 18 January 2016 / Accepted: 14 February 2016 / Published: 22 February 2016
Viewed by 1284 | PDF Full-text (3766 KB) | HTML Full-text | XML Full-text
Abstract
As societies move towards the integration of robots, it is important to study how robots can use their cognition to choose their actions effectively in a human environment, and possibly adapt to new contexts. When modelling these contextual data, it is common in social robotics to work with data extracted from human sciences such as sociology, anatomy, or anthropology. These heterogeneous data need to be used efficiently in order to let the robot adapt its actions quickly. In this paper we describe a methodology for the use of heterogeneous and incomplete knowledge, through an algorithm based on a naive Bayes classifier. The model was successfully applied to two different human-robot interaction experiments. Full article
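One reason a naive Bayes classifier suits incomplete knowledge is that missing attributes can simply be skipped in the likelihood product, so a posterior is still available from whatever evidence exists. A minimal sketch; the action classes, attributes, and probabilities below are illustrative assumptions, not the paper's model:

```python
import math

def naive_bayes_posterior(observation, priors, likelihoods):
    """Naive Bayes over heterogeneous attributes where a value of None
    means 'unknown' and contributes nothing to the posterior."""
    scores = {}
    for cls, prior in priors.items():
        log_p = math.log(prior)
        for attr, value in observation.items():
            if value is None:          # missing attribute: skip it
                continue
            log_p += math.log(likelihoods[cls][attr][value])
        scores[cls] = log_p
    total = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / total for c, s in scores.items()}

# Should the robot greet or wait? One attribute observed, one unknown.
priors = {"greet": 0.5, "wait": 0.5}
likelihoods = {"greet": {"user_present": {True: 0.9, False: 0.1}},
               "wait":  {"user_present": {True: 0.3, False: 0.7}}}
obs = {"user_present": True, "user_mood": None}   # mood is unknown
post = naive_bayes_posterior(obs, priors, likelihoods)
```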
Open AccessArticle Design and Implementation of a Control System for a Sailboat Robot
Robotics 2016, 5(1), 5; doi:10.3390/robotics5010005
Received: 14 November 2015 / Revised: 24 December 2015 / Accepted: 21 January 2016 / Published: 15 February 2016
Cited by 2 | Viewed by 1669 | PDF Full-text (6715 KB) | HTML Full-text | XML Full-text
Abstract
This article discusses a control architecture for autonomous sailboat navigation and also presents a sailboat prototype built for experimental validation of the proposed architecture. The main goal is to allow long endurance autonomous missions, such as ocean monitoring. As the system propulsion relies
[...] Read more.
This article discusses a control architecture for autonomous sailboat navigation and presents a sailboat prototype built for experimental validation of the proposed architecture. The main goal is to enable long-endurance autonomous missions, such as ocean monitoring. As the system's propulsion relies on wind forces instead of motors, sailing techniques are introduced and discussed, including the required sensors, actuators and control laws. Mathematical modelling of the sailboat, as well as control strategies developed using PID and fuzzy controllers for the sail and the rudder, are also presented. Furthermore, we present a study of the hardware architecture that enables the system's overall performance to be increased. The sailboat's movement can be planned through predetermined geographical way-points provided by a base station. Simulated and experimental results, including tests performed on a lake, are presented to validate the control architecture. Underwater robotics can also rely on such a platform, using it as a base vessel for the autonomous charging of unmanned vehicles or as a relay surface station for transmitting data. Full article
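A rudder loop of the PID kind described above can be sketched in a few lines. The gains, saturation limit, and toy yaw dynamics below are illustrative assumptions, not values from the paper; the heading error is wrapped so the controller always turns the shorter way.

```python
import math

def heading_error(desired, actual):
    """Smallest signed angle (rad) from the actual to the desired heading."""
    return math.atan2(math.sin(desired - actual), math.cos(desired - actual))

class PID:
    """Discrete PID controller with output saturation (e.g. rudder limits)."""
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, u))  # saturate rudder

# toy closed loop: rudder angle drives yaw rate directly
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.1, out_limit=math.radians(30))
heading, desired = 0.0, math.radians(90)
for _ in range(2000):
    rudder = pid.update(heading_error(desired, heading))
    heading += 0.5 * rudder * 0.1   # simplistic yaw dynamics, not a boat model
```

The same `PID` structure would serve the sail-angle loop with different gains; a fuzzy controller would replace `update` with rule evaluation over the same wrapped error.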
Open AccessArticle Sensor Fusion and Autonomy as a Powerful Combination for Biological Assessment in the Marine Environment
Robotics 2016, 5(1), 4; doi:10.3390/robotics5010004
Received: 30 September 2015 / Revised: 14 January 2016 / Accepted: 22 January 2016 / Published: 1 February 2016
Viewed by 1390 | PDF Full-text (1812 KB) | HTML Full-text | XML Full-text
Abstract
The ocean environment and the physical and biological processes that govern dynamics are complex. Sampling the ocean to better understand these processes is difficult given the temporal and spatial domains and sampling tools available. Biological systems are especially difficult as organisms possess behavior,
[...] Read more.
The ocean environment and the physical and biological processes that govern its dynamics are complex. Sampling the ocean to better understand these processes is difficult given the temporal and spatial domains and the sampling tools available. Biological systems are especially difficult because organisms possess behavior, operate at horizontal scales smaller than traditional shipboard sampling allows, and are often disturbed by the sampling platforms themselves. Sensors that measure biological processes have also generally not kept pace with the development of their physical counterparts, as their requirements are as complex as the target organisms. Here, we attempt to address this challenge by advocating the need for sensor-platform combinations that integrate and process data in real time and produce data products useful for increasing sampling efficiency. Too often, the data of interest only become available after post-processing once a sampling effort has ended, and the opportunity to use that information to guide sampling is lost. We demonstrate a new autonomous platform in which data are collected, analyzed, and output as data products in real time to inform autonomous decision-making. This integrated capability allows for enhanced and informed sampling towards improving our understanding of the marine environment. Full article
(This article belongs to the Special Issue Underwater Robotics)
Open AccessEditorial Acknowledgement to Reviewers of Robotics in 2015
Robotics 2016, 5(1), 3; doi:10.3390/robotics5010003
Received: 22 January 2016 / Accepted: 22 January 2016 / Published: 22 January 2016
Viewed by 1041 | PDF Full-text (140 KB) | HTML Full-text | XML Full-text
Abstract The editors of Robotics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2015. [...] Full article
Open AccessArticle HBS-1: A Modular Child-Size 3D Printed Humanoid
Robotics 2016, 5(1), 1; doi:10.3390/robotics5010001
Received: 6 November 2015 / Revised: 23 December 2015 / Accepted: 4 January 2016 / Published: 13 January 2016
Cited by 4 | Viewed by 1588 | PDF Full-text (6141 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
An affordable, highly articulated, child-size humanoid robot could potentially be used for various purposes, widening the design space of humanoids for further study. Several findings indicated that normal children and children with autism interact well with humanoids. This paper presents a child-sized humanoid
[...] Read more.
An affordable, highly articulated, child-size humanoid robot could potentially be used for various purposes, widening the design space of humanoids for further study. Several findings indicated that normal children and children with autism interact well with humanoids. This paper presents a child-sized humanoid robot (HBS-1) intended primarily for children’s education and rehabilitation. The design approach is based on the design for manufacturing (DFM) and the design for assembly (DFA) philosophies to realize the robot fully using additive manufacturing. Most parts of the robot are fabricated with acrylonitrile butadiene styrene (ABS) using rapid prototyping technology. Servomotors and shape memory alloy actuators are used as actuating mechanisms. The mechanical design, analysis and characterization of the robot are presented in both theoretical and experimental frameworks. Full article
Open AccessArticle Coordination of Multiple Biomimetic Autonomous Underwater Vehicles Using Strategies Based on the Schooling Behaviour of Fish
Robotics 2016, 5(1), 2; doi:10.3390/robotics5010002
Received: 13 November 2015 / Revised: 21 December 2015 / Accepted: 28 December 2015 / Published: 13 January 2016
Cited by 1 | Viewed by 1216 | PDF Full-text (7749 KB) | HTML Full-text | XML Full-text
Abstract
Biomimetic Autonomous Underwater Vehicles (BAUVs) are Autonomous Underwater Vehicles (AUVs) that employ similar propulsion and steering principles as real fish. While the real life applicability of these vehicles has yet to be fully investigated, laboratory investigations have demonstrated that at low speeds, the
[...] Read more.
Biomimetic Autonomous Underwater Vehicles (BAUVs) are Autonomous Underwater Vehicles (AUVs) that employ propulsion and steering principles similar to those of real fish. While the real-life applicability of these vehicles has yet to be fully investigated, laboratory investigations have demonstrated that at low speeds their propulsive mechanism is more efficient than that of propeller-based AUVs. Furthermore, these vehicles have also demonstrated superior manoeuvrability when compared with conventional AUVs and Underwater Glider Systems (UGSs). Further performance benefits can be achieved through the coordination of multiple BAUVs swimming in formation. In this study, the coordination strategy is based on the schooling behaviour of fish: a decentralized approach that allows multiple AUVs to be self-organizing. Such a strategy can be effectively utilized for large spatiotemporal data collection for oceanic monitoring and surveillance purposes. A validated mathematical model of the BAUV developed at the University of Glasgow, RoboSalmon, is used to represent the agents within a school formation. The performance of the coordination algorithm is assessed through simulation, where system identification techniques are employed to improve simulation run time while maintaining accuracy. The simulation results demonstrate the effectiveness of implementing coordination algorithms based on the behavioural mechanisms of fish to allow a group of BAUVs to be considered self-organizing. Full article
(This article belongs to the Special Issue Underwater Robotics)
Open AccessConference Report Planning the Minimum Time and Optimal Survey Trajectory for Autonomous Underwater Vehicles in Uncertain Current
Robotics 2015, 4(4), 516-528; doi:10.3390/robotics4040516
Received: 10 November 2015 / Revised: 4 December 2015 / Accepted: 11 December 2015 / Published: 16 December 2015
Cited by 1 | Viewed by 970 | PDF Full-text (730 KB) | HTML Full-text | XML Full-text
Abstract
The authors develop an approach to a “best” time path for Autonomous Underwater Vehicles conducting oceanographic measurements under uncertain current flows. The numerical optimization tool DIDO is used to compute hybrid minimum time and optimal survey paths for a sample of currents between
[...] Read more.
The authors develop an approach to a “best” time path for Autonomous Underwater Vehicles conducting oceanographic measurements under uncertain current flows. The numerical optimization tool DIDO is used to compute hybrid minimum time and optimal survey paths for a sample of currents between ebb and flow. A simulated meta-experiment is performed where the vehicle traverses the resulting paths under different current strengths per run. The fastest elapsed time emerges from a payoff table. A multi-objective function is then used to weigh the time to complete a mission versus measurement inaccuracy due to deviation from the desired survey path. Full article
(This article belongs to the Special Issue Underwater Robotics)
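The payoff-table step described above can be pictured as a robust choice over candidate paths. The table values, the worst-case (minimax) selection rule, and the weighted time-versus-inaccuracy cost below are illustrative assumptions, not the paper's actual numbers or objective.

```python
def best_path_minimax(payoff):
    """Choose the survey path whose worst-case elapsed time over the sampled
    current strengths is smallest (keys: paths, values: per-current times)."""
    return min(payoff, key=lambda path: max(payoff[path]))

def mission_cost(elapsed_time, path_deviation, w=0.7):
    """Weighted trade-off between mission time and survey-path deviation,
    mirroring a multi-objective function of the kind the abstract mentions."""
    return w * elapsed_time + (1.0 - w) * path_deviation

# hypothetical elapsed times (s) under three current strengths per candidate path
payoff = {
    "min_time":    [100.0, 140.0, 210.0],   # fast in calm water, poor in strong current
    "hybrid":      [120.0, 135.0, 150.0],
    "full_survey": [160.0, 170.0, 185.0],
}
```

With these numbers the hybrid path wins the minimax comparison even though the pure minimum-time path is fastest in calm water.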
Open AccessArticle Robust Design of Docking Hoop for Recovery of Autonomous Underwater Vehicle with Experimental Results
Robotics 2015, 4(4), 492-515; doi:10.3390/robotics4040492
Received: 24 July 2015 / Revised: 7 November 2015 / Accepted: 24 November 2015 / Published: 1 December 2015
Cited by 2 | Viewed by 1127 | PDF Full-text (1573 KB) | HTML Full-text | XML Full-text
Abstract
Control systems prototyping is usually constrained by model complexity, embedded system configurations, and interface testing. The proposed control system prototyping of a remotely-operated vehicle (ROV) with a docking hoop (DH) to recover an autonomous underwater vehicle (AUV) named AUVDH using a combination of
[...] Read more.
Control systems prototyping is usually constrained by model complexity, embedded system configurations, and interface testing. The proposed control system prototyping of a remotely-operated vehicle (ROV) with a docking hoop (DH) for recovering an autonomous underwater vehicle (AUV), named AUVDH, uses a combination of software tools that allows the prototyping process to be unified. This process provides a systematic path from mechanical design, hydrodynamics, dynamics modelling, control system design, and simulation through to testing in water. A three-dimensional MATLAB™/Simulink™ simulation of the AUVDH model during the launch and recovery process shows that a sliding mode controller is able to control the positions and velocities under external wave, current, and tether forces. A water test using the proposed Python-based GUI platform shows that the AUVDH is capable of station-keeping under external disturbances. Full article
(This article belongs to the Special Issue Underwater Robotics)
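A sliding mode law of the kind mentioned above can be sketched for a single degree of freedom. The gains, boundary-layer width, and constant-current disturbance below are illustrative assumptions, not the paper's AUVDH model.

```python
def smc_control(x, v, x_ref, lam=1.0, k=4.0, phi=0.1):
    """1-DOF sliding mode station-keeping control.

    The sliding surface is s = v + lam * (x - x_ref); the control drives the
    state onto s = 0, and a boundary layer of width phi replaces sign(s)
    with a saturation to reduce chattering."""
    s = v + lam * (x - x_ref)
    sat = max(-1.0, min(1.0, s / phi))   # smoothed sign(s)
    return -k * sat

# toy double-integrator "vehicle" holding station against a constant current
x, v, x_ref, dt = 2.0, 0.0, 0.0, 0.01
current = 0.5                            # unmodelled disturbance (force/mass)
for _ in range(3000):
    v += (smc_control(x, v, x_ref) + current) * dt
    x += v * dt
```

The robustness property the abstract relies on is visible here: the controller never models the current, yet the state settles close to the reference because the switching gain dominates the disturbance.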
Open AccessArticle Static Stability Analysis of a Planar Object Grasped by Multifingers with Three Joints
Robotics 2015, 4(4), 464-491; doi:10.3390/robotics4040464
Received: 25 June 2015 / Revised: 22 October 2015 / Accepted: 22 October 2015 / Published: 3 November 2015
Viewed by 1066 | PDF Full-text (949 KB) | HTML Full-text | XML Full-text
Abstract
This paper discusses static stability of a planar object grasped by multifingers with three joints. Each individual joint (prismatic joint or revolute joint) is modeled as a linear spring stiffness. The object mass and the link masses are also included. We consider not
[...] Read more.
This paper discusses static stability of a planar object grasped by multifingers with three joints. Each individual joint (prismatic joint or revolute joint) is modeled as a linear spring stiffness. The object mass and the link masses are also included. We consider not only pure rolling contact but also frictionless sliding contact. The grasp stability is investigated using the potential energy method. This paper makes the following contributions: (i) Grasp wrench vectors and grasp stiffness matrices are analytically derived not only for the rolling contact but also for the sliding contact; (ii) It is shown in detail that the vectors and the matrices are given by functions of grasp parameters such as the contact conditions (rolling contact and sliding contact), the contact position, the contact force, the local curvature, the link shape, the object mass, the link masses, and so on; (iii) By using positive definiteness of the difference matrix of the grasp stiffness matrices, it is analytically proved that the rolling contact grasp is more stable than the sliding contact grasp. The displacement direction affected by the contact condition deviation is derived; (iv) By using positive definiteness of the differential matrix with respect to the local curvatures, it is analytically proved that the grasp stability increases when the local curvatures decrease. The displacement direction affected by the local curvature deviation is also derived; (v) Effects of the object mass and the joint positions are discussed using numerical examples. The numerical results are reinforced by analytical explanations. The effect of the link masses is also investigated. Full article
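The positive-definiteness arguments in contributions (iii) and (iv) are easy to check numerically for concrete stiffness matrices. A minimal sketch follows; the matrices are illustrative placeholders, not values derived in the paper.

```python
def is_positive_definite(M, eps=1e-12):
    """Test symmetric positive definiteness via an attempted Cholesky
    factorization, which fails iff some pivot is non-positive."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                pivot = M[i][i] - s
                if pivot <= eps:
                    return False
                L[i][i] = pivot ** 0.5
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return True

# hypothetical planar grasp stiffness matrices over (x, y, rotation)
K_rolling = [[5.0, 1.0, 0.0], [1.0, 4.0, 0.0], [0.0, 0.0, 3.0]]
K_sliding = [[3.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
K_diff = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(K_rolling, K_sliding)]
```

If `K_rolling - K_sliding` is positive definite, the rolling-contact grasp stores more potential energy for every small displacement, i.e., it is the stiffer and hence more stable grasp, which is the structure of the proof the abstract summarizes.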
Open AccessArticle IMPERA: Integrated Mission Planning for Multi-Robot Systems
Robotics 2015, 4(4), 435-463; doi:10.3390/robotics4040435
Received: 17 July 2015 / Revised: 15 October 2015 / Accepted: 21 October 2015 / Published: 30 October 2015
Viewed by 1560 | PDF Full-text (15987 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents the results of the project IMPERA (Integrated Mission Planning for Distributed Robot Systems). The goal of IMPERA was to realize an extraterrestrial exploration scenario using a heterogeneous multi-robot system. The main challenge was the development of a multi-robot planning and
[...] Read more.
This paper presents the results of the project IMPERA (Integrated Mission Planning for Distributed Robot Systems). The goal of IMPERA was to realize an extraterrestrial exploration scenario using a heterogeneous multi-robot system. The main challenge was the development of a multi-robot planning and plan execution architecture. The robot team consists of three heterogeneous robots, which have to explore an unknown environment and collect lunar drill samples. The team activities are described using the language ALICA (A Language for Interactive Agents). Furthermore, we use the mission planning system pRoPhEt MAS (Reactive Planning Engine for Multi-Agent Systems) to provide an intuitive interface to generate team activities. Therefore, we define the basic skills of our team with ALICA and define the desired goal states by using a logic description. Based on the skills, pRoPhEt MAS creates a valid ALICA plan, which will be executed by the team. The paper describes the basic components for communication, coordinated exploration, perception and object transportation. Finally, we evaluate the planning engine pRoPhEt MAS in the IMPERA scenario. In addition, we present further evaluation of pRoPhEt MAS in more dynamic environments. Full article
Open AccessArticle Performance of Very Small Robotic Fish Equipped with CMOS Camera
Robotics 2015, 4(4), 421-434; doi:10.3390/robotics4040421
Received: 28 July 2015 / Revised: 24 September 2015 / Accepted: 16 October 2015 / Published: 22 October 2015
Viewed by 1336 | PDF Full-text (1270 KB) | HTML Full-text | XML Full-text
Abstract
Underwater robots are often used to investigate marine animals. Ideally, such robots should be in the shape of fish so that they can easily go unnoticed by aquatic animals. In addition, lacking a screw propeller, a robotic fish would be less likely to
[...] Read more.
Underwater robots are often used to investigate marine animals. Ideally, such robots should be shaped like fish so that they can easily go unnoticed by aquatic animals. In addition, lacking a screw propeller, a robotic fish is less likely to become entangled in algae and other plants. However, although such robots have been developed, their swimming speed is significantly lower than that of real fish. Since a robotic fish would need to follow real fish in order to survey them, the performance of the propulsion system must be improved. In the present study, a small robotic fish (SAPPA) was manufactured and its propulsive performance was evaluated. SAPPA was developed to swim in bodies of freshwater such as rivers, and was equipped with a small CMOS camera with a wide-angle lens in order to photograph live fish. The maximum swimming speed of the robot was determined to be 111 mm/s, and its turning radius was 125 mm. Its power consumption was as low as 1.82 W. During trials, SAPPA succeeded in recognizing a goldfish and capturing an image of it with its CMOS camera. Full article
(This article belongs to the Special Issue Underwater Robotics)
Open AccessArticle Robotic Design Choice Overview Using Co-Simulation and Design Space Exploration
Robotics 2015, 4(4), 398-420; doi:10.3390/robotics4040398
Received: 17 June 2015 / Revised: 19 August 2015 / Accepted: 8 September 2015 / Published: 29 September 2015
Cited by 1 | Viewed by 1215 | PDF Full-text (4407 KB) | HTML Full-text | XML Full-text
Abstract
Rapid robotic system development has created a demand for multi-disciplinary methods and tools to explore and compare design alternatives. In this paper, we present a collaborative modeling technique that combines discrete-event models of controller software with continuous-time models of physical robot components. The
[...] Read more.
Rapid robotic system development has created a demand for multi-disciplinary methods and tools to explore and compare design alternatives. In this paper, we present a collaborative modeling technique that combines discrete-event models of controller software with continuous-time models of physical robot components. The proposed co-modeling method utilizes the Vienna development method (VDM) and MATLAB for discrete-event modeling and 20-sim for continuous-time modeling. The model-based development of a mobile robot mink feeding system is used to illustrate the collaborative modeling method. Simulations are used to evaluate the robot model output response in relation to operational demands. An example of a load-carrying challenge in relation to the feeding robot is presented, and a design space is defined with candidate solutions in both the mechanical and software domains. Simulation results are analyzed using design space exploration (DSE), which evaluates candidate solutions in relation to preselected optimization criteria. The result of the analysis provides developers with an overview of the impacts of each candidate solution in the chosen design space. Based on this overview of solution impacts, the developers can select viable candidates for deployment and testing with the actual robot. Full article
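At its core, the design space exploration step ranks candidate solutions against preselected optimization criteria. The weighted-sum sketch below illustrates this; the candidate names, metrics, and weights are illustrative assumptions, not the feeding-robot study's actual design space.

```python
def rank_candidates(candidates, weights):
    """Weighted-sum design space exploration: score every candidate solution
    against preselected optimization criteria (lower score is better) and
    return the candidates ranked best-first."""
    def score(c):
        return sum(weights[k] * c["metrics"][k] for k in weights)
    return sorted(candidates, key=score)

# hypothetical mechanical vs. software candidates for a load-carrying challenge
candidates = [
    {"name": "stiffer_chassis", "metrics": {"settle_time_s": 1.2, "energy_j": 3.0}},
    {"name": "slower_gains",    "metrics": {"settle_time_s": 2.0, "energy_j": 1.0}},
    {"name": "bigger_motor",    "metrics": {"settle_time_s": 0.9, "energy_j": 4.5}},
]
weights = {"settle_time_s": 1.0, "energy_j": 0.5}
```

The ranking gives developers the overview the abstract describes: each candidate's impact under the chosen criteria, from which viable candidates can be selected for deployment.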
Open AccessArticle Multi-Robot Item Delivery and Foraging: Two Sides of a Coin
Robotics 2015, 4(3), 365-397; doi:10.3390/robotics4030365
Received: 30 June 2015 / Revised: 14 September 2015 / Accepted: 15 September 2015 / Published: 23 September 2015
Cited by 1 | Viewed by 955 | PDF Full-text (1214 KB) | HTML Full-text | XML Full-text
Abstract
Multi-robot foraging has been widely studied in the literature, and the general assumption is that the robots are simple, i.e., with limited processing and carrying capacity. We previously studied continuous foraging with slightly more capable robots, and in this article, we are interested
[...] Read more.
Multi-robot foraging has been widely studied in the literature, and the general assumption is that the robots are simple, i.e., with limited processing and carrying capacity. We previously studied continuous foraging with slightly more capable robots, and in this article, we are interested in using similar robots for item delivery. Interestingly, item delivery and foraging are two sides of the same coin: foraging an item from a location is similar to satisfying a demand. We formally define the multi-robot item delivery problem and show that the continuous foraging problem is a special case of it. We contribute distributed multi-robot algorithms that solve the item delivery and foraging problems and describe how our shared world model is synchronized across the multi-robot team. We performed extensive experiments on simulated robots using a Java simulator, and we present our results to demonstrate that we outperform benchmark algorithms from multi-robot foraging. Full article
Open AccessArticle Navigation of an Autonomous Tractor for a Row-Type Tree Plantation Using a Laser Range Finder—Development of a Point-to-Go Algorithm
Robotics 2015, 4(3), 341-364; doi:10.3390/robotics4030341
Received: 30 July 2015 / Revised: 27 August 2015 / Accepted: 31 August 2015 / Published: 7 September 2015
Cited by 1 | Viewed by 1164 | PDF Full-text (2429 KB) | HTML Full-text | XML Full-text
Abstract
It is challenging to develop a control algorithm that uses only one sensor to guide an autonomous vehicle. The objective of this research was to develop a control algorithm with a single sensor for an autonomous agricultural vehicle that could identify landmarks in
[...] Read more.
It is challenging to develop a control algorithm that uses only one sensor to guide an autonomous vehicle. The objective of this research was to develop a single-sensor control algorithm for an autonomous agricultural vehicle that could identify landmarks in a row-type plantation environment and navigate the vehicle to a point-to-go target location through the plantation. To enable such a navigation system, a laser range finder (LRF) was used as the single sensor to detect objects and navigate a full-size autonomous agricultural tractor. The LRF controlled the tractor as it followed a path, with landmarks detected “on-the-go” in real time. Landmarks were selected based on their distances relative to the surrounding objects. Once the landmarks were selected, a target point was calculated from them, and the tractor was navigated toward the target. Navigation experiments were successfully conducted on the selected paths without collisions with the surrounding objects. A real-time kinematic global positioning system (RTK GPS) was used to compare positioning between autonomous and manual control. The results showed that this control system could navigate the autonomous tractor along the paths, with vehicle positions differing from the manually driven paths by 0.264, 0.370 and 0.542 m for the wide, tight, and U-turn paths, respectively, and directional accuracies of 3.139°, 4.394°, and 5.217°, respectively, which are satisfactory for the autonomous operation of tractors on rubber or palm plantations. Therefore, this laser-based landmark detection and navigation system can be adopted in an autonomous navigation system to reduce the vehicle's sensor cost and improve positioning accuracy. Full article
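In its simplest form, the point-to-go idea reduces to aiming at the midpoint between the nearest left and right row landmarks. The geometry below is an illustrative simplification of the paper's algorithm, not its published formulation.

```python
import math

def point_to_go(pose, left_landmark, right_landmark):
    """Return (target point, signed steering error in rad) for a vehicle at
    pose = (x, y, heading) given the nearest left/right row landmarks."""
    x, y, heading = pose
    tx = (left_landmark[0] + right_landmark[0]) / 2.0   # midpoint between rows
    ty = (left_landmark[1] + right_landmark[1]) / 2.0
    bearing = math.atan2(ty - y, tx - x)
    # wrap the difference so the steering error takes the shorter turn
    steer = math.atan2(math.sin(bearing - heading), math.cos(bearing - heading))
    return (tx, ty), steer
```

A tractor centred between two trees would see zero steering error, while an offset toward one row produces a signed correction back toward the row centreline.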
Open AccessArticle A Spatial Queuing-Based Algorithm for Multi-Robot Task Allocation
Robotics 2015, 4(3), 316-340; doi:10.3390/robotics4030316
Received: 13 May 2015 / Revised: 30 July 2015 / Accepted: 21 August 2015 / Published: 28 August 2015
Viewed by 1066 | PDF Full-text (967 KB) | HTML Full-text | XML Full-text
Abstract
Multi-robot task allocation (MRTA) is an important area of research in autonomous multi-robot systems. The main problem in MRTA is to allocate a set of tasks to a set of robots so that the tasks can be completed by the robots while ensuring
[...] Read more.
Multi-robot task allocation (MRTA) is an important area of research in autonomous multi-robot systems. The main problem in MRTA is to allocate a set of tasks to a set of robots so that the tasks can be completed by the robots while ensuring that a certain metric, such as the time required to complete all tasks, or the distance traveled, or the energy expended by the robots is reduced. We consider a scenario where tasks can appear dynamically and a task needs to be performed by multiple robots to be completed. We propose a new algorithm called SQ-MRTA (Spatial Queueing-MRTA) that uses a spatial queue-based model to allocate tasks between robots in a distributed manner. We have implemented the SQ-MRTA algorithm on accurately simulated models of Corobot robots within the Webots simulator for different numbers of robots and tasks and compared its performance with other state-of-the-art MRTA algorithms. Our results show that the SQ-MRTA algorithm is able to scale up with the number of tasks and robots in the environment, and it either outperforms or performs comparably with respect to other distributed MRTA algorithms. Full article
(This article belongs to the Special Issue Coordination of Robotic Systems)
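The spatial queueing idea can be sketched as each robot ordering the open tasks by distance and contesting the front of its queue. The conflict rule below (the nearer robot wins, the loser advances to its next queued task) is an illustrative simplification, not the published SQ-MRTA algorithm.

```python
import math

def spatial_queue_allocate(robots, tasks):
    """Distributed-flavoured greedy allocation: every robot builds a
    distance-ordered "spatial queue" over the tasks and repeatedly tries to
    claim its front task; a contested task goes to the nearer robot and the
    displaced robot re-enters the loop with the rest of its queue."""
    queues = {
        r: sorted(range(len(tasks)), key=lambda t: math.dist(robots[r], tasks[t]))
        for r in range(len(robots))
    }
    assignment = {}                      # robot index -> task index
    free = set(range(len(robots)))       # robots still seeking a task
    while free:
        r = free.pop()
        while queues[r]:
            t = queues[r].pop(0)
            holder = next((h for h, ht in assignment.items() if ht == t), None)
            if holder is None:
                assignment[r] = t
                break
            if math.dist(robots[r], tasks[t]) < math.dist(robots[holder], tasks[t]):
                del assignment[holder]   # displace the farther robot
                assignment[r] = t
                free.add(holder)
                break
    return assignment
```

With two robots flanking two tasks, each claims its nearer task; with more robots than tasks, the surplus robots finish unassigned, which stands in for waiting in queue until new tasks appear dynamically.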
