Article

Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments

Lab of Robotics and Dynamics, Faculty of Engineering, Hokkaido University, Sapporo 060-8628, Japan
* Author to whom correspondence should be addressed.
Sensors 2017, 17(8), 1878; https://doi.org/10.3390/s17081878
Submission received: 16 June 2017 / Revised: 9 August 2017 / Accepted: 12 August 2017 / Published: 15 August 2017

Abstract
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and we discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation, such as path planning, localization, obstacle avoidance, and map update, by relying completely on the driver robot. This allows the hitchhiker, which performs only visual servoing, to save computation while navigating on the path it shares with the driver. The driver robot in the proposed system performs all the heavy computations in navigation and updates the hitchhiker with the current localized positions and new obstacle positions in the map. The proposed system can recover from the ‘driver-lost’ scenario which occurs when visual servoing fails. We demonstrate robot hitchhiking in real environments, considering factors like service time and task priority, with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should or should not be allowed.

1. Introduction

Hitchhiking is a social and symbiotic behavior with many potential advantages for the hitchhiker. The hitchhiker: (a) does not need to do path planning; (b) does not need to localize himself during navigation; (c) does not need to worry about control sequences (e.g., car steering); (d) does not need to do obstacle avoidance; and (e) saves energy and money. Now consider a multi-robot system deployed in a large infrastructure like a warehouse for object delivery, cleaning, patrolling, and other tasks. Each robot of the multi-robot system needs to navigate from one location to another to perform its task. The paths of the robots often overlap completely or partially. Moreover, each robot is equipped with its own path planning unit, localization unit, obstacle avoidance unit, and SLAM (Simultaneous Localization and Mapping) [1,2] unit. In the context of hitchhiking, the following questions naturally arise: Why should two robots with completely or partially overlapping paths separately perform their own path planning, collision avoidance, localization, and new obstacle update? Can’t one robot just hitchhike with another robot going to (nearly) the same location, and skip most of the redundant operations?
Although there exists a plethora of literature on leader-follower multi-robot systems, the presented work is, to the best of our knowledge, the first to address this question and to demonstrate and discuss the feasibility and advantages of hitchhiking in multi-robot systems in various scenarios. Multi-robot systems, and especially leader-follower robot systems, have been presented earlier, albeit with different objectives. A multi-robot leader-follower system using two tractor robots has been proposed in [3,4] to improve efficiency in agricultural tasks, in which the focus is on cooperation and coordination to execute a turn without collision. Many works have been proposed on collective localization [5] and mapping [6,7,8]. However, in all these works, the robots are independent entities and the emphasis is on using sensor data from different robots to collectively build maps and localize in the environment. An interesting technique to arrange robot rendezvous, i.e., the pairing of two robots in an unknown environment, is proposed in [9] for collaborative exploration. Similarly, multi-robot navigation has been proposed in [10,11,12] with a focus on collision avoidance and area coverage. A semi-autonomous teleoperation system is proposed in [13] in which the follower robot satisfies several constraints while tracking the leader robot. The emphasis is on making the follower robot semi-autonomous when teleoperating the follower is difficult in cluttered environments. A similar scheme is presented in [14] for efficient teleoperational control of master-slave robots.
All the aforementioned works make important contributions, but with different goals. To the best of our knowledge, none of this research has discussed how a follower robot can skip path planning, obstacle avoidance, localization, and map update by completely relying on the leader robot for navigation towards a (nearly) common goal location. We experimentally show the feasibility of such ‘hitchhiking’ with results in real environments. We particularly focus on strategies in which the hitchhiker does not lose any information (like new obstacles) about the environment by shutting down some of its modules, and is able to recover in case of a failure in following the driver robot.

2. Hitchhiking in Robots

The following terminology is used throughout the paper: (a) Driver: the leader robot in the leader-follower scene. The driver does all the path planning, obstacle avoidance, and map update, and assists the follower robot in the hitchhiking process; (b) Hitchhiker: the robot which follows the driver through visual servoing and shuts down its other modules.

2.1. Hitchhiking Mechanism

Hitchhiking is achieved through visual servoing. A complete description of visual servoing is beyond the scope of the proposed work; however, a brief overview is provided.
Visual servoing [15], also known as vision-based robot control, is a technique which uses feedback information extracted from a vision sensor (like a camera) to control the motion of a robot. Visual servoing control techniques are broadly classified into the following types [16]: (a) Image-Based Visual Servoing (IBVS) [17], which computes the error between the current and desired features (e.g., lines, points) in the image without estimating the pose of the target, and has problems with large rotational motions; (b) Position/Pose-Based Visual Servoing (PBVS) [18], a model-based technique which also estimates the pose of the target with respect to the camera using the extracted image features, enabling servoing in 3D; (c) hybrid visual servoing [19], which uses a combination of 2D and 3D servoing.
A set of visual features m is extracted from a set of visual measurements x(t), which comprises the coordinates of points of interest, i.e.,
$$ m = m(x(t)), \tag{1} $$
which allows the required degrees of freedom [20]. For a correct realization, a controller is designed such that the features m reach a desired value m*, i.e., such that the error vector m − m* becomes zero. In vision-based control, the objective is thus to minimize the error e(t), where
$$ e(t) = m - m^{*}. \tag{2} $$
The required trajectory m*(t) is generated, and details can be found in [16,21].
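To make Equations (1) and (2) concrete, the classical IBVS control law computes a camera velocity v = −λ L⁺ e from the stacked interaction matrix L of the tracked image points [16]. The following is a minimal NumPy sketch assuming normalized image coordinates and known point depths; it is illustrative only, not the exact controller of our ViSP-based implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one image point (x, y) at
    depth Z, mapping the 6-DOF camera velocity to the feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(m, m_star, depths, gain=0.5):
    """Classical IBVS law: v = -lambda * L^+ * e with e = m - m*.
    m, m_star: (N, 2) arrays of current/desired point features."""
    e = (m - m_star).reshape(-1)                 # error e(t) = m - m*, Eq. (2)
    L = np.vstack([interaction_matrix(x, y, Z)   # stacked interaction matrix
                   for (x, y), Z in zip(m, depths)])
    return -gain * np.linalg.pinv(L) @ e         # 6-DOF camera velocity command

# For example, the four QR-code corners give a (4, 2) feature array m.
```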
Visual servoing is generally implemented using an artificial marker and a camera. In our implementation, each robot has a QR code fixed on its back side. Each robot also has a forward-facing camera. The QR code and camera setup is shown in Figure 1a.

2.2. Four Steps of Hitchhiking

Each robot of the multi-robot system is equipped with path planning, localization and mapping, obstacle avoidance, communication, and other necessary modules. Each robot is given a unique ID ( R_id ). They also have respective start ( S_loc ) and goal locations ( G_loc ), a task priority ( T_p ), and a profile ( P_id ). A profile comprises the specifications of the robot, i.e., the type of sensors attached to the robot, the accuracy of the odometers and sensors, and the robustness of the SLAM module. In the actual implementation, a numerical value is manually assigned to each attribute (the better the attribute, the higher the value). A profile provides logical parameters to compare the specifications of two robots.
Hitchhiking is carried out in four steps, which are graphically shown in Figure 2 and explained below:
  • Handshake: The hitchhiker keeps broadcasting requests to a potential driver until a threshold hitchhike wait time T_hwait. Once a (potential) driver responds, the two robots exchange information. The hitchhiker’s request comprises { R_h, G_h, P_h }, where R_h is the ID, G_h is the goal location, and P_h is the profile of the hitchhiking robot. The potential driver robot checks if the length of the common path ( d_hh ) traversed during hitchhiking is longer than a threshold distance ( T_dhh ). This is graphically shown in Figure 3, in which S_d and S_h are the start locations, and G_d and G_h are the goal locations of the driver and hitchhiker robots, respectively. The common path between the points A and B is the hitchhiking distance ( d_hh ). Hitchhiking is allowed if d_hh ≥ T_dhh; it is denied for shorter distances (less than T_dhh ) due to the overhead involved in coupling and decoupling. Moreover, hitchhiking over shorter distances affects the service time. The threshold hitchhiking length ( T_dhh ) depends on many factors, like the dynamics of the environment and the characteristics of the SLAM algorithm employed. A typical setting is a T_dhh of several meters (e.g., ≈20 m). Notice from Figure 3 that if d_hh ≥ T_dhh, hitchhiking is allowed even if the nodes G_d and G_h are far from each other. The best-case scenario for hitchhiking is when G_d and G_h completely overlap. This entire process is called a handshake (a sketch of the driver-side admission check is given after this list). If no potential driver is found within T_hwait time, the hitchhiker navigates towards the goal on its own. A driver with a high task priority ( T_d ) will simply ignore any requests from a hitchhiker.
  • Coupling: The next step is coupling between the hitchhiker and the driver. Coupling is defined as the process in which the hitchhiker aligns behind the driver robot and the QR code behind the driver is recognized to initiate visual servoing. The alignment and coupling are only allowed within threshold times T_align and T_coupling, respectively. In order to assist coupling, the environment is marked with special pre-defined markers known to all the robots. Certain positions with markers are also reserved to further assist coupling.
  • Navigation: Once the robots are coupled, the driver initiates navigation towards the goal. During this time, the hitchhiker shuts down all processes except visual servoing. In other words, the hitchhiker shuts down the localization, path planning, obstacle avoidance, and map update modules. It simply follows the driver robot using visual servoing. The driver robot executes all the modules.
  • Decoupling: Once the two robots have reached the destination, the decoupling process is initiated and visual servoing stops. During decoupling, the driver transfers its current position (i.e., the estimated x, y, θ pose in the map) and the uncertainty associated with it ( Σ ). The hitchhiker requires this information as an initial estimate to localize itself in the map and navigate to another location. Moreover, if the driver robot has updated its map during navigation with the locations of new static obstacles ( Ω ), this information is also transferred to the hitchhiker to update its local map. This ensures that the hitchhiker suffers no loss of information during navigation.
The hitchhiker can thus skip redundant computation from the hitchhiking point to the decoupling location without any information loss.
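The driver-side admission check referenced in the handshake step above reduces to a few comparisons; the sketch below assumes illustrative field names and the 1–10 task priority scale used later in Section 7.3:

```python
from dataclasses import dataclass

@dataclass
class HitchhikeRequest:
    robot_id: str   # R_h
    goal: tuple     # G_h
    profile: int    # P_h

PRIORITY_HIGH = 10  # highest task priority in our implementation (Section 7.3)

def driver_accepts(req: HitchhikeRequest, driver_profile: int,
                   driver_priority: int, d_hh: float, T_dhh: float) -> bool:
    """Handshake decision of the (potential) driver."""
    if driver_priority >= PRIORITY_HIGH:  # time-critical drivers ignore requests
        return False
    if d_hh < T_dhh:                      # coupling overhead dominates short rides
        return False
    if driver_profile < req.profile:      # "don't trust suspicious drivers" (Section 3.1)
        return False
    return True
```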

3. Hitchhiking Points

Although hitchhiking can be initiated at any place in the map, it is generally not a good idea to do so, for the following two reasons:
  • Loss of Efficiency: Hitchhiking consumes time in alignment and coupling. If hitchhiking is allowed anywhere in the map, the robots may not find fixed markers in the environment to assist coupling. In the absence of such markers, coupling is difficult and the robots consume more time. Moreover, the robots must always be alert to an incoming request from a hitchhiker.
  • Problem of Obstacles: Since it takes time for the two robots to align and couple, hitchhiking at an inappropriate place hinders other robots and people in the pathways.
Therefore, we propose certain fixed ‘hitchhiking points’ in the map which are laid with artificial markers to assist coupling. This eliminates the two aforementioned problems and gives the robots an estimate of the possibility of hitchhiking. A major benefit is that the driver robot knows exactly where to stop, and trajectories can be generated beforehand for alignment. As shown in Figure 4a, the hitchhiker stands pre-aligned, broadcasting requests, while the driver aligns with the marker at a certain distance ( d_1 ). This arrangement automatically helps the alignment process for coupling. Figure 4b shows a real example of such alignment and coupling at a hitchhiking point.

3.1. When to Hitchhike and When Not

The best possible scenario for hitchhiking is when the start and goal locations of the hitchhiker and the driver are the same. Hitchhiking is also beneficial for different goals if the common path distance is large.
In case the goal locations of the two robots are different, the point of decoupling in the map can easily be calculated. Our implementation uses A* [22], a well-known path planning algorithm. Let G = (V, E) be a graph with non-negative edge distances, and let h be an admissible heuristic. Let S_hp be the hitchhiking point, which marks the start location, and let T_h be the goal node of the hitchhiker robot. If d(v) is the shortest distance from S_hp to v seen so far, then d(v) + h(v) estimates the cost of the cheapest path from S_hp to T_h that passes through v. A* expands nodes in increasing order of d(v) + h(v), yielding the node sequence Q_h = (V_1, V_2, …, V_n) of the shortest path from S_hp to T_h. Similarly, if Q_d is the corresponding node sequence of the driver robot to its goal T_d, then the farthest node in Q_d ∩ Q_h is the decoupling node in the map.
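A minimal sketch of this computation, assuming the two A* node sequences start at the same hitchhiking point S_hp and diverge only once, so that the farthest common node is the end of their shared prefix:

```python
def decoupling_node(Q_h, Q_d):
    """Q_h, Q_d: A* node sequences of the hitchhiker and the driver from
    S_hp to T_h and T_d, respectively. Returns the farthest shared node,
    i.e., the decoupling point."""
    last = None
    for u, v in zip(Q_h, Q_d):
        if u != v:
            break
        last = u
    return last

# Q_h and Q_d can come from any A* implementation, e.g. (illustrative):
#   Q_h = networkx.astar_path(G, S_hp, T_h, heuristic=h)
#   Q_d = networkx.astar_path(G, S_hp, T_d, heuristic=h)
```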
However, hitchhiking is not good for short distances due to the extra time required for alignment, coupling, and decoupling, which may adversely affect the service response time. Moreover, a hitchhiker always hitchhikes with a robot whose profile score ( P_id ), checked during the handshake step, is the same or better. This ensures that ‘you don’t trust suspicious drivers’. This is natural, as a robot with a robust and accurate SLAM module should never hitchhike with a robot with less accurate sensors and SLAM module, since doing so risks task failure for both robots. On the other hand, a hitchhiker with a lower profile can always benefit from a driver with a good profile in terms of the accuracy of maps, navigation, obstacle avoidance, and localization.

4. Problem of ‘Driver Lost’ Scenario

One potential problem in hitchhiking is that the driver might be ‘lost’ during navigation. This depends particularly on the robustness of the visual servoing algorithm employed: image-based visual servoing in particular is not very robust to large rotations. A detailed explanation of the problems in visual servoing is summarized in the work of Chaumette [23]; such failures can result in a driver-lost scenario in which the follower robot is left behind while the driver robot navigates to its goal.
Since the hitchhiker has only its visual servoing module in execution, if the driver is lost it is difficult for the hitchhiker to localize itself, as it is completely unaware of its current position in the map. This problem is similar to the famous ‘kidnapped robot problem’, for which solutions are available in the literature [24,25,26]. However, we propose to avoid this problem in the first place by transferring the current estimated pose ( x_δ, y_δ, θ_δ ) and the associated uncertainty ( Σ_δ ) to the hitchhiker intermittently, at intervals of δ s. This is graphically shown in Figure 5, where a driver is shown transferring information intermittently. With this scheme, even if the driver robot is lost due to a failure of visual servoing, the hitchhiker still has a rough initial estimate to localize itself in the map and navigate towards the goal independently.
As shown in Figure 5, the hitchhiker acknowledges the receipt of the intermittent information ( x_δ, y_δ, θ_δ ). Through this acknowledgement message ( ack ), the driver robot knows that the hitchhiker is following well and continues its navigation. In the absence of an ack message, the driver stops for the hitchhiker to catch up. Straightforward pseudocode for hitchhiking is given in Algorithms 1 and 2 for the driver and hitchhiker robots, respectively.
Algorithm 1: Hitchhiking Pseudocode (Driver Robot)
Algorithm 2: Hitchhiking Pseudocode (Hitchhiker)
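The core of the driver-side loop in Algorithm 1 — navigate, transfer the pose intermittently, and stop when the ack is missing — can be sketched as follows. This is a minimal sketch assuming hypothetical helper objects for the navigation stack (driver) and the robot-to-robot link (link); δ and the ack timeout are illustrative values.

```python
import time

DELTA = 2.0        # pose-transfer interval delta, in seconds (illustrative)
ACK_TIMEOUT = 1.0  # how long the driver waits for an ack (illustrative)

def driver_navigation_loop(driver, link):
    """Navigation phase of the driver (cf. Algorithm 1). All methods on
    'driver' and 'link' are hypothetical stand-ins for the real stack."""
    last_sent = time.time()
    while not driver.goal_reached():
        driver.slam_update()           # localization + map update (driver only)
        driver.follow_planned_path()   # path planning + obstacle avoidance
        if time.time() - last_sent >= DELTA:
            link.send(pose=driver.pose, cov=driver.cov)
            # The hitchhiker acknowledges each transfer (cf. Algorithm 2);
            # a missing ack means it fell behind, so stop and let it catch up.
            while not link.wait_ack(timeout=ACK_TIMEOUT):
                driver.stop()
            last_sent = time.time()
    # Decoupling: final pose, uncertainty, and new-obstacle transfer.
    link.send(pose=driver.pose, cov=driver.cov,
              new_obstacles=driver.new_obstacles())
```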

5. Award Mechanism

An award mechanism is designed in which the driver robot is awarded points for assisting other robots in hitchhiking. The points are directly proportional to the distance traveled, i.e., Points = κ · f( hhike_start, hhike_end ), where κ is a constant and the function f(·) calculates the distance between the start and end locations of hitchhiking. Robots try to increase their points whenever possible. Two constraints ensure that task performance is not compromised: (a) hitchhiking is not allowed over short distances; (b) hitchhiking is only allowed when the driver robot has the same or a better profile, and neither robot has a time-critical task at hand.
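As a worked example, with the κ = 0.4 setting reported in Section 7.2.2 (so that a 25 m hitchhike earns 10 points):

```python
KAPPA = 0.4  # 10 points per 25 m of hitchhiking (Section 7.2.2)

def award_points(hitchhike_distance_m: float, kappa: float = KAPPA) -> float:
    """Points = kappa * f(hhike_start, hhike_end), here taking f as the
    hitchhiked path length in meters."""
    return kappa * hitchhike_distance_m

# Experiment 1: award_points(20.4)  -> 8.16  (reported as 8.2 points)
# Experiment 2: award_points(28.82) -> 11.53 (reported as 11.5 points)
```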

6. State of the Art in Robot Localization

In this section, we discuss the state of the art related to robot localization and mapping in indoor environments. It is important for the driver and the hitchhiker robots to localize themselves in the map to ascertain the hitchhiking points, and the map needs to be updated by the driver robot. SLAM is a challenging problem due to the uncertainties of sensors and robot motion [27]. Bayes filters [28] are the most common tools to mathematically model these uncertainties. There are many variants of the Bayes filter, like the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), and the Particle Filter (PF).
An Unscented Kalman Filter (UKF) performs a stochastic linearization through a statistical linear regression process. The main idea behind the UKF is that it is easier to approximate the probability distribution than the nonlinear function. A detailed explanation of the UKF can be found in [29,30,31,32,33,34,35].
Particle Filter (PF) is a non-linear state estimator based on Bayesian filtering. It is a sequential Monte Carlo based technique which models the probability density using a set of discrete points. PF can represent a much broader space of distributions than, for example, Gaussians. A detailed explanation of Particle Filter can be found in [31,33,35,36,37].
Apart from the above mentioned filters, there are many other filters like histogram filter, information filter, and others which have been explained in detail in [31,33,34]. All of these filters can estimate the state of the robot and the uncertainty associated with the estimate. Readers may find a comparison of the merits and limitations of various filters in [38].
Recently, several more robust fusion filters have also been proposed. The work in [39] proposes a fusion Kalman/UFIR filter: a fusion of the Kalman filter, which is optimal but not robust, with the unbiased finite-impulse response (UFIR) filter, which is more robust than the KF but not optimal. The fusion is achieved by employing the KF and the UFIR as subfilters and applying probabilistic weights to each, which yields small errors at the point where the UFIR estimate meets the Kalman estimate. A novel deadbeat dissipative FIR filter with a finite-impulse response (FIR) structure for linear discrete-time systems with external disturbance is proposed in [40], which ensures ( Q, S, R )-α-dissipativity based on three slack matrix variables. A hybrid particle/FIR filter to improve the reliability of particle-filter-based localization has been proposed in [41,42], which detects PF failure and recovers localization by resetting the PF using the output of an auxiliary FIR filter. This makes the PF, which traditionally suffers from sample impoverishment in noisy environments, more reliable. In particular, graph-based SLAM algorithms [43], in which a graph’s nodes correspond to the poses of the robot at different points in time and its edges represent constraints between the poses, have been shown to be very successful. A detailed survey of SLAM techniques of the past 30 years, including future challenges, can be found in [44].
Notice that the proposed hitchhiking in multi-robot systems is not limited to any one particular localization approach. In fact, any of the mapping and localization methods can be used with the robots. Obviously, the more robust and accurate the localization algorithm employed, the more accurate the navigation. The choice of the SLAM algorithm may depend on factors like the sensors and computation devices available. Although any of the filters can be used, for the sake of completeness we briefly discuss robot localization with the Extended Kalman Filter (EKF).

6.1. Driver Robot Localization with Extended Kalman Filter

EKF-based SLAM and localization have been extensively employed with mobile robots. Other filters, like the particle filter and the unscented Kalman filter (UKF), are also very popular. The EKF is a powerful mathematical tool to model the uncertainties of the sensors attached to the robot, and has been demonstrated successfully with laser range sensors, vision sensors, inertial sensors, etc. An elaborate description of the EKF can be found in [31].
The state of the robot ( x_t ) at time t is indicated by a vector comprising its position [x y]^T and orientation ( θ ), as x_t = [x y θ]^T. The EKF assumes a Gaussian distribution in which the belief bel( x_t ) at time t is given by the mean μ_t and the covariance Σ_t. At the start of navigation, the robot assumes no uncertainty, and Σ_t is set to the zero matrix. The robot is made to move by issuing a command comprising the translational velocity ( v_t ) and rotational velocity ( ω_t ), as u_t = [v_t ω_t]^T.
In the equations below, θ denotes the orientation component of the previous mean,
$$ \theta = \mu_{t-1,\theta}. \tag{3} $$
To handle the non-linearity of the system, the EKF uses Jacobians of the motion function. The Jacobian of the motion function with respect to the state is given by
$$ G_t = \begin{pmatrix} 1 & 0 & -\frac{v_t}{\omega_t}\cos\theta + \frac{v_t}{\omega_t}\cos(\theta+\omega_t\Delta t) \\ 0 & 1 & -\frac{v_t}{\omega_t}\sin\theta + \frac{v_t}{\omega_t}\sin(\theta+\omega_t\Delta t) \\ 0 & 0 & 1 \end{pmatrix}, \tag{4} $$
and the Jacobian of motion with respect to control is given by,
$$ V_t = \begin{pmatrix} \dfrac{-\sin\theta+\sin(\theta+\omega_t\Delta t)}{\omega_t} & \dfrac{v_t\left(\sin\theta-\sin(\theta+\omega_t\Delta t)\right)}{\omega_t^2} + \dfrac{v_t\cos(\theta+\omega_t\Delta t)\,\Delta t}{\omega_t} \\ \dfrac{\cos\theta-\cos(\theta+\omega_t\Delta t)}{\omega_t} & -\dfrac{v_t\left(\cos\theta-\cos(\theta+\omega_t\Delta t)\right)}{\omega_t^2} + \dfrac{v_t\sin(\theta+\omega_t\Delta t)\,\Delta t}{\omega_t} \\ 0 & \Delta t \end{pmatrix}. \tag{5} $$
With robot-specific error parameters α_1, …, α_4, which are determined empirically and vary from robot to robot [31], the covariance of the noise in control space is given by
$$ M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}. \tag{6} $$
The prediction updates of the state ( μ̄_t ) and covariance ( Σ̄_t ) are given by
$$ \bar{\mu}_t = \mu_{t-1} + \begin{pmatrix} -\frac{v_t}{\omega_t}\sin\theta + \frac{v_t}{\omega_t}\sin(\theta+\omega_t\Delta t) \\ \frac{v_t}{\omega_t}\cos\theta - \frac{v_t}{\omega_t}\cos(\theta+\omega_t\Delta t) \\ \omega_t\Delta t \end{pmatrix}, \tag{7} $$
and,
$$ \bar{\Sigma}_t = G_t\,\Sigma_{t-1}\,G_t^{T} + V_t\,M_t\,V_t^{T}, \tag{8} $$
respectively. The term $V_t M_t V_t^{T}$ in Equation (8) maps the motion noise from control space to state space.
To model the correction step, we assume that the sensors provide the range ( r_t ), bearing ( ϕ_t ), and signature ( s_t, e.g., color) of a landmark relative to the robot’s current pose ( x_t ). The covariance ( Q_t ) of the sensor noise is given by the matrix
$$ Q_t = \begin{pmatrix} \sigma_r^2 & 0 & 0 \\ 0 & \sigma_\phi^2 & 0 \\ 0 & 0 & \sigma_s^2 \end{pmatrix}. \tag{9} $$
Let $[m_{k,x} \; m_{k,y}]^T$ be the coordinates of the $k$-th landmark in the map, let $z_t^i = [r_t^i \; \phi_t^i \; s_t^i]^T$ be a measurement obtained from the current pose $\bar{\mu}_t$, and let $q$ represent the squared distance,
$$ q = (m_{k,x} - \bar{\mu}_{t,x})^2 + (m_{k,y} - \bar{\mu}_{t,y})^2. \tag{10} $$
Then the predicted measurement is
$$ \hat{z}_t^{\,k} = \begin{pmatrix} \sqrt{q} \\ \operatorname{atan2}\!\left(m_{k,y}-\bar{\mu}_{t,y},\; m_{k,x}-\bar{\mu}_{t,x}\right) - \bar{\mu}_{t,\theta} \\ m_{k,s} \end{pmatrix}. \tag{11} $$
The Jacobian of measurement with respect to state is given by,
$$ H_t^{k} = \begin{pmatrix} -\dfrac{m_{k,x}-\bar{\mu}_{t,x}}{\sqrt{q}} & -\dfrac{m_{k,y}-\bar{\mu}_{t,y}}{\sqrt{q}} & 0 \\ \dfrac{m_{k,y}-\bar{\mu}_{t,y}}{q} & -\dfrac{m_{k,x}-\bar{\mu}_{t,x}}{q} & -1 \\ 0 & 0 & 0 \end{pmatrix}. \tag{12} $$
This gives the measurement covariance matrix as,
$$ S_t^{k} = H_t^{k}\,\bar{\Sigma}_t\,[H_t^{k}]^{T} + Q_t. \tag{13} $$
A maximum likelihood estimate over all $k$ landmarks in the map (Equations (10)–(13)) gives the most likely correspondence $j(i)$ as
$$ j(i) = \operatorname*{argmax}_{k}\; \frac{1}{\sqrt{\det\!\left(2\pi S_t^{k}\right)}} \exp\!\left(-\tfrac{1}{2}\,(z_t^{i}-\hat{z}_t^{k})^{T}\,[S_t^{k}]^{-1}\,(z_t^{i}-\hat{z}_t^{k})\right). \tag{14} $$
The calculation of the Kalman gain ( K_t ) and the EKF updates of the state ( μ_t ) and covariance ( Σ_t ) correspond only to this most likely estimate:
$$ \begin{aligned} K_t^{i} &= \bar{\Sigma}_t\,[H_t^{j(i)}]^{T}\,[S_t^{j(i)}]^{-1}, \\ \mu_t &= \bar{\mu}_t + K_t^{i}\,(z_t^{i} - \hat{z}_t^{\,j(i)}), \\ \Sigma_t &= \left(I - K_t^{i} H_t^{j(i)}\right)\bar{\Sigma}_t. \end{aligned} \tag{15} $$
Thus, at each time step t, a Kalman gain ( K_t ) is calculated, from which the state ( μ_t ) and covariance ( Σ_t ) are updated by the robot. In traditional navigation schemes, each robot of the multi-robot system must perform localization using the above computationally expensive steps. However, in the presented scheme only the driver robot performs localization, while the hitchhiker follows the driver using visual servoing.
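For concreteness, a minimal NumPy sketch of the prediction step (Equations (3)–(8)); the α parameters are assumed to be known, and ω_t is guarded against zero for near-straight motion:

```python
import numpy as np

def ekf_predict(mu, Sigma, v, w, dt, alphas):
    """EKF prediction for the velocity motion model of Section 6.1.
    mu = [x, y, theta], Sigma: 3x3 covariance, (v, w): control,
    alphas = (a1, a2, a3, a4): robot-specific noise parameters."""
    theta = mu[2]
    if abs(w) < 1e-9:                   # guard for straight-line motion
        w = 1e-9
    s, c = np.sin(theta), np.cos(theta)
    s_dt, c_dt = np.sin(theta + w * dt), np.cos(theta + w * dt)

    # Jacobian of the motion w.r.t. the state, G_t (Equation (4)).
    G = np.array([[1.0, 0.0, -(v / w) * c + (v / w) * c_dt],
                  [0.0, 1.0, -(v / w) * s + (v / w) * s_dt],
                  [0.0, 0.0, 1.0]])

    # Jacobian of the motion w.r.t. the control, V_t (Equation (5)).
    V = np.array([[(-s + s_dt) / w,  v * (s - s_dt) / w**2 + v * c_dt * dt / w],
                  [(c - c_dt) / w,  -v * (c - c_dt) / w**2 + v * s_dt * dt / w],
                  [0.0, dt]])

    # Covariance of the noise in control space, M_t (Equation (6)).
    a1, a2, a3, a4 = alphas
    M = np.diag([a1 * v**2 + a2 * w**2, a3 * v**2 + a4 * w**2])

    # Predicted mean and covariance (Equations (7) and (8)).
    mu_bar = mu + np.array([-(v / w) * s + (v / w) * s_dt,
                            (v / w) * c - (v / w) * c_dt,
                            w * dt])
    Sigma_bar = G @ Sigma @ G.T + V @ M @ V.T
    return mu_bar, Sigma_bar
```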

6.2. Pose Transfer during Decoupling

During decoupling, the driver transfers its pose ( P_d = [x_d y_d θ_d]^T ) and the associated uncertainty ( Σ_d ) to the hitchhiker robot. The hitchhiker requires this as an initial estimate to localize itself in the map and navigate to another location.
Since the hitchhiker follows the driver using the QR-code and camera setup, the final orientation of the hitchhiker ( θ_h ) is the same as that of the driver robot, i.e.,
$$ \theta_h = \theta_d. \tag{16} $$
If d is the distance between the hitchhiker and the driver during decoupling, then the pose of hitchhiker is calculated as (Figure 6),
$$ P_h = \left[\; x_d - d\cos\theta_h \quad y_d - d\sin\theta_h \quad \theta_d \;\right]^{T}. \tag{17} $$
Moreover, the hitchhiker assumes the same uncertainty in its pose as the driver robot, i.e.,
$$ \Sigma_h = \Sigma_d. \tag{18} $$
The hitchhiker robot uses this pose ( P_h ) to localize itself in the map. It can use the uncertainty ( Σ_h ) to determine the distribution of particles (e.g., in the case of a particle filter [31,33,36,37]) by taking the eigenvalue-eigenvector decomposition of Σ_h: the eigenvalues ( λ_1, …, λ_n ) and eigenvectors ( v_1, …, v_n ) of Σ_h give the magnitude and direction of the variance, respectively, for a suitable distribution of particle poses.
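A small NumPy sketch of this pose and uncertainty transfer, together with the eigen-decomposition used to spread the initial particles (function names are ours):

```python
import numpy as np

def hitchhiker_pose_after_decoupling(P_d, Sigma_d, d):
    """Equations (16)-(18): the hitchhiker inherits the driver's heading
    and covariance, and sits a distance d behind it along that heading."""
    x_d, y_d, theta_d = P_d
    P_h = np.array([x_d - d * np.cos(theta_d),
                    y_d - d * np.sin(theta_d),
                    theta_d])
    return P_h, Sigma_d.copy()

def particle_spread(Sigma_h):
    """Eigenvalues give the magnitude and eigenvectors the directions of
    the variance, which can seed the particle distribution."""
    eigvals, eigvecs = np.linalg.eigh(Sigma_h)  # Sigma_h is symmetric
    return eigvals, eigvecs
```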

7. Experimental Results

This section presents the results of the experiments. We first show the motion model of the robots used in the experiment, and then discuss the results in different cases of hitchhiking.

7.1. Motion Model

We used a Pioneer P3-DX (Figure 7a) [45] and a Kobuki Turtlebot (Figure 7b) [46], which are two-wheeled differential drive robots [47]. Both robots were equipped with distance sensors (Microsoft Kinect [48] and a UHG-08LX laser range sensor [49]) and cameras. The distance sensor is accurate within ±30 mm up to 1 m, and within 3% of the detected distance between 1 and 8 m. The angular resolution is approximately 0.36 degrees; other specifications can be found in [49]. We first describe the motion model of the robot. The distance between the left and right wheels is W_r, and the robot state at position P is given as [x, y, θ]. From Figure 7c, with r and l the distances traveled by the right and left wheels, respectively, the turning angle β is calculated as
$$ r = \beta\,(R + W_r), \qquad l = \beta R \;\;\Rightarrow\;\; \beta = \frac{r-l}{W_r}, \tag{19} $$
and the radius of turn R as,
$$ R = \frac{l}{\beta}, \qquad \beta \neq 0. \tag{20} $$
The coordinates of the center of rotation (C, in Figure 7c), are calculated as,
$$ \begin{pmatrix} C_x \\ C_y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} - \left(R + \frac{W_r}{2}\right)\begin{pmatrix} \sin\theta \\ -\cos\theta \end{pmatrix}. \tag{21} $$
The new heading θ′ is
$$ \theta' = (\theta + \beta) \bmod 2\pi, \tag{22} $$
from which the coordinates of the new position P′ are calculated as
$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} C_x \\ C_y \end{pmatrix} + \left(R + \frac{W_r}{2}\right)\begin{pmatrix} \sin\theta' \\ -\cos\theta' \end{pmatrix}, \qquad \beta \neq 0 \;(r \neq l). \tag{23} $$
If r = l, i.e., if the robot motion is straight, the state parameters are given as θ′ = θ and
$$ \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} + l \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}, \qquad (l = r). \tag{24} $$
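A direct transcription of Equations (19)–(24) into a small pose-update function (a sketch; r and l are the distances traveled by the right and left wheels):

```python
import math

def motion_update(x, y, theta, r, l, W_r):
    """New pose of a differential drive robot after the right/left wheels
    travel r and l, with wheel separation W_r (Equations (19)-(24))."""
    if math.isclose(r, l):                        # straight motion, Eq. (24)
        return x + l * math.cos(theta), y + l * math.sin(theta), theta
    beta = (r - l) / W_r                          # turning angle, Eq. (19)
    R = l / beta                                  # turn radius, Eq. (20)
    rho = R + W_r / 2.0                           # radius of the robot center
    C_x = x - rho * math.sin(theta)               # center of rotation, Eq. (21)
    C_y = y + rho * math.cos(theta)
    theta_new = (theta + beta) % (2 * math.pi)    # new heading, Eq. (22)
    x_new = C_x + rho * math.sin(theta_new)       # new position, Eq. (23)
    y_new = C_y - rho * math.cos(theta_new)
    return x_new, y_new, theta_new
```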
Different experiments in real indoor environments were performed to test the proposed hitchhiking in various scenarios. The Pioneer P3-DX was the driver robot, and the Turtlebot initiated hitchhiking in all cases. Both robots ran ROS [50] on Ubuntu computers and were on the same network to communicate with each other. Figure 8 shows a simplified view of the ROS topics and nodes. The hitchhike communication module publishes an appropriate message on a topic, which is subscribed to by the other modules to turn them on or off.
For visual servoing, we used a modified open-source ViSP (Visual Servoing Platform) library [20,51,52] for online tracking. The initial pose was set by extracting the locations of the four QR-code corners using a Perspective-n-Point (PnP) algorithm [53], from which a model-based tracker was initialized to extract the black area around the QR code. A hybrid approach tracking edges and keypoint features was employed to estimate the pose. The robots exchanged messages in JSON format, for which the JSMN parser was used [54]. A sample JSON message is given in Appendix A, Listing 1.

7.2. Experiments in Which Hitchhiking Was Allowed

Three sets of the following two experiments were conducted with permissible initial conditions for hitchhiking. For comparison, the two experiments were performed with and without new obstacles in the environment.

7.2.1. Experiment 1

Experiment 1 was carried out in the corridor (≈23 m long, 2.23 m wide) shown in Figure 9a. In order to test the map update and the transfer of new obstacle coordinates, the boxes marked with red circles in Figure 9a were placed as new obstacles and were not reflected in the old maps of the two robots. The Turtlebot waited and initiated the hitchhiking process. The hitchhiking point, with a ‘+’ sign marked on the pillar, is shown in Figure 9b. The driver P3-DX robot aligned itself with the marker on the pillar, whereas the Turtlebot stood pre-aligned for visual servoing. Navigation started and the Turtlebot followed the driver P3-DX, as shown in Figure 9c. The hitchhiker Turtlebot had temporarily shut down all its modules except visual servoing.
Figure 10 shows the results of the map update process of the experiment. Figure 10a shows part of the map without the obstacles; both the Turtlebot and the P3-DX had this map before starting the experiment. Figure 10b shows the updated map with the positions of the new obstacles marked with red circles. The driver successfully transferred the coordinates of the new obstacles to the hitchhiker during the decoupling process. The actual coordinate values are summarized in Table 1. For the sake of simplicity, both robots in our implementation had prior grid maps built from the same location, giving them the same anchor, which simplified the process of transferring the new obstacle information. In the case of maps built from different locations, the relevant transformation can be calculated for the 2D [55,56] and 3D [57] cases. The dimensions of the obstacles in Table 1 are in grid pixels.

7.2.2. Experiment 2

Experiment 2 was carried out in a long corridor (≈30 m long, 2.92 m wide) shown in Figure 11a. Similar to the previous experiment, the boxes marked in Figure 11a were placed as new obstacles. The hitchhiking Turtlebot was not pre-aligned and there were no artificial markers to assist coupling; instead, the left wall was used as a landmark. The coupling process is shown in Figure 11b. Navigation started and the Turtlebot followed the driver P3-DX, as shown in Figure 11c.
Figure 12 shows the results of the map update process of the experiment. Figure 12a shows the section of the map without the obstacles. Figure 12b shows the updated map with the positions of the new obstacles marked. The driver successfully transferred the coordinates of the new obstacles to the hitchhiker during the decoupling process, as summarized in Table 1. The accuracy of the obstacle locations was verified manually in both experiments.
The time taken for coupling, decoupling, and waiting for the driver is summarized in Figure 13 for the three runs of the two experiments, with and without new obstacles. Table 2 shows the average times over the three runs. The total path distances navigated in Experiments 1 and 2 were 20.4 m and 28.82 m, respectively. To award 10 points to the driver for every 25 m of hitchhiking, κ (Section 5) was set to 0.4, and the driver robot was awarded 8.2 and 11.5 points, respectively.

7.3. Experiments with Denied Hitchhiking

We performed three more experiments to test scenarios in which hitchhiking should be denied. The experiments were performed in the same setup (Figure 11a) as Experiment 2. (a) In the first case, we set the priority of the driver robot to a high value of 10 (the lowest value being 1 in our implementation). Due to the high task priority, the driver P3-DX ignored the requests from the hitchhiker; (b) In the second case, we reversed the profile scores of the two robots, i.e., we set the driver P3-DX with a profile score of 58 and the hitchhiker with a profile score of 90. In this case, the driver stopped upon receiving the request from the hitchhiker; however, hitchhiking was denied as the driver had the lower profile score. Notice that in this case the two robots could reverse roles, i.e., the requesting robot with the better profile score could become the driver and the other the hitchhiker. However, we did not consider this case in the current work and will do so in future work; (c) The third case was tested at a wrong hitchhiking location, and the driver ignored the requests from the hitchhiker.

7.4. Transferring Quality Maps to Hitchhiker

In both of the previous experiments, apart from the occupancy-grid-based map update, a 3D map update was also performed by the driver robot. The initial point cloud map of the environment before placing the new obstacles in Experiment 2 is shown in Figure 14a. The final updated 3D point cloud map with the new obstacles is shown in Figure 14b.
Notice that not only the locations of the new obstacles but also high-quality 3D maps can be transferred to the hitchhiker during decoupling. This is particularly useful if the hitchhiker’s profile is poor and it is not equipped with accurate 3D sensors or powerful computational units. In that case, a driver with good sensors and a powerful computer can build high-quality maps and give them to the hitchhiker robot. Algorithms like OctoMap [58] can be used to obtain reduced-size maps from 3D point cloud maps, as shown in Figure 15.

8. Discussion

The results in Section 7 show that hitchhiking can be implemented in multi-robot systems with a simple setup. The only hardware required is a QR-code marker, a camera, and a wireless communication setup. The software can be implemented with a module on/off switch and a visual servoing module. The amount of computation saved by the hitchhiker depends on many factors, like the algorithms employed for path planning, SLAM, and navigation. It also depends on the computation units, the robot specifications, and the topology of the environment. Moreover, since implementation and execution platforms vary and evolve rapidly, we focus on which redundant modules can be turned off without affecting the quality of service or losing information for the hitchhiker. Compared to the other modules, SLAM is a particularly computationally expensive module, and the hitchhiker can save substantial computation by skipping it. Table 3 shows the different modules run by the two robots during navigation in the experiments in Section 7. Clearly, traditional navigation requires all the modules of both robots to be active, whereas in hitchhiking most of the modules of the hitchhiker are off. One overhead is visual servoing, which is not computationally expensive compared to SLAM (especially 3D SLAM). Another overhead is the delay in service time caused by hitchhiking. Table 2 summarizes the average time required for coupling, decoupling, waiting for a potential driver, and the driver’s total delay. Although it depends on the exact task, for less critical tasks a delay of about 70 s should be acceptable.
It is interesting to notice in Table 2 that there is little time difference between the experiments with and without obstacles; the presence of obstacles only affects the actual navigation time. Moreover, in Experiment 2 the hitchhiker was not pre-aligned and there were no markers, and therefore the average coupling time in Experiment 2 was longer than in Experiment 1. This shows that having hitchhiking points with markers in the map can save coupling time. As shown in Section 7.3, hitchhiking is denied for high-priority tasks. Moreover, as described in Section 2.2, the hitchhiker only waits for a potential driver for T_hwait time, which ensures that service is not affected adversely. Although our implementation dealt with local robot communication, hitchhiking can be very efficient with global communication in a sensor network, as the hitchhiker can know beforehand about the passage of a potential driver with the desired characteristics near a hitchhiking area. The proposed hitchhiking is not limited to a particular localization algorithm, and any robust localization algorithm can be employed. Although the results were presented using EKF-based localization, more robust algorithms (e.g., [39,40,41,43]) will improve the accuracy of the results. The robustness of the visual servoing and communication modules largely determines the success of hitchhiking.

9. Summary

This paper introduced the novel idea of hitchhiking in multi-robot systems. When two robots have a common path towards the goal, the following hitchhiker can skip the redundant computation of path planning, obstacle avoidance, map update, and localization. The process of hitchhiking was systematically explained in four steps. The advantages of having hitchhiking points in the map were then discussed, and a solution to the ‘driver-lost’ scenario was provided. The experimental results show that hitchhiking is feasible in a multi-robot system without any loss of information. Hitchhiking can be particularly useful in multi-robot systems in which some of the robots have less accurate sensors and less powerful computational resources. In the future, we plan to test hitchhiking in dynamic environments. We also plan to test the role reversibility of robots based on their profiles.

Supplementary Materials

Download link for video presentation with results: https://www.mdpi.com/1424-8220/17/8/1878/s1.

Author Contributions

Abhijeet Ravankar and Ankit A. Ravankar conceived the idea, designed, and performed the experiments; Yukinori Kobayashi made valuable suggestions to analyze the data and improve the manuscript. Takanori Emaru provided the necessary tools. The manuscript was written by Abhijeet Ravankar.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
R_{h/d}       Robot ID of hitchhiker (h)/driver (d)
S_{h/d}       Start location of hitchhiker (h)/driver (d)
G_{h/d}       Goal location of hitchhiker (h)/driver (d)
T_{h/d}       Task priority of hitchhiker (h)/driver (d)
P_{h/d}       Robot profile of hitchhiker (h)/driver (d)
T_hwait       Threshold hitchhike wait time
T_align       Threshold alignment time
T_coupling    Threshold coupling time
T_dhh         Threshold hitchhiking distance
Σ             Positional uncertainty of robot
Ω             New static obstacles
[x, y, θ]^T   Robot pose
SLAM          Simultaneous Localization and Mapping
EKF           Extended Kalman Filter

Appendix A

Listing 1: Example of driver message in JSON format.
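Listing 1 is reproduced as an image in the original article; the snippet below is a hedged reconstruction of what such a driver message could look like, built from the quantities exchanged in Sections 2.2 and 6.2. All field names are illustrative; the obstacle entry is taken from Table 1.

```python
import json

# Hypothetical driver message at decoupling (field names are illustrative).
driver_message = {
    "robot_id": "P3DX_01",                           # R_d
    "type": "decoupling",
    "pose": {"x": 12.43, "y": 5.10, "theta": 1.57},  # estimated pose
    "covariance": [[0.02, 0.0, 0.0],                 # uncertainty Sigma
                   [0.0, 0.02, 0.0],
                   [0.0, 0.0, 0.01]],
    "new_obstacles": [                               # Omega (from Table 1)
        {"x": 231, "y": 152, "width": 7, "breadth": 4},
    ],
}
print(json.dumps(driver_message, indent=2))
```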

References

  1. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110.
  2. Dissanayake, M.W.M.G.; Newman, P.; Clark, S.; Durrant-Whyte, H.F.; Csorba, M. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 2001, 17, 229–241.
  3. Zhang, C.; Noguchi, N.; Yang, L. Leader follower system using two robot tractors to improve work efficiency. Comput. Electron. Agric. 2016, 121, 269–281.
  4. Zhang, C.; Noguchi, N. Development of leader-follower system for field work. In Proceedings of the 2015 IEEE/SICE International Symposium on System Integration (SII), Nagoya, Japan, 11–13 December 2015; pp. 364–368.
  5. Roumeliotis, S.I.; Bekey, G.A. Distributed multirobot localization. IEEE Trans. Robot. Autom. 2002, 18, 781–795.
  6. Howard, A.; Sukhatme, G.S.; Mataric, M.J. Multirobot Simultaneous Localization and Mapping Using Manifold Representations. Proc. IEEE 2006, 94, 1360–1369.
  7. Thrun, S.; Liu, Y. Multi-robot SLAM with Sparse Extended Information Filers. In Robotics Research: The Eleventh International Symposium; Springer: Berlin/Heidelberg, Germany, 2005.
  8. Atanasov, N.; Ny, J.L.; Daniilidis, K.; Pappas, G.J. Decentralized active information acquisition: Theory and application to multi-robot SLAM. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4775–4782.
  9. Roy, N.; Dudek, G. Collaborative Robot Exploration and Rendezvous: Algorithms, Performance Bounds and Observations. Auton. Robot. 2001, 11, 117–136.
  10. Alonso-Mora, J.; Baker, S.; Rus, D. Multi-robot navigation in formation via sequential convex programming. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 4634–4641.
  11. Wee, S.G.; Kim, Y.G.; Lee, S.G.; An, J. Formation control based on virtual space configuration for multi-robot collective navigation. In Proceedings of the 2013 10th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, 30 October–2 November 2013; pp. 556–557.
  12. Ravankar, A.; Ravankar, A.; Kobayashi, Y.; Emaru, T. Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing. Sensors 2017, 17, 1581.
  13. Liu, Y.C.; Chopra, N. Semi-autonomous teleoperation in task space with redundant slave robot under communication delays. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 679–684.
  14. Usmani, N.A.; Kim, T.H.; Ryu, J.H. Dynamic authority distribution for cooperative teleoperation. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 5222–5227.
  15. Hutchinson, S.; Hager, G.D.; Corke, P.I. A tutorial on visual servo control. IEEE Trans. Robot. Autom. 1996, 12, 651–670.
  16. Chaumette, F.; Hutchinson, S. Visual servo control, Part I: Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90.
  17. Sanderson, A.C.; Weiss, L.E. Adaptive Visual Servo Control of Robots. In Robot Vision; Pugh, A., Ed.; Springer: Berlin, UK, 1983; pp. 107–116.
  18. Wilson, W.J.; Hulls, C.C.W.; Bell, G.S. Relative end-effector control using Cartesian position based visual servoing. IEEE Trans. Robot. Autom. 1996, 12, 684–696.
  19. Kermorgant, O.; Chaumette, F. Combining IBVS and PBVS to ensure the visibility constraint. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2849–2854.
  20. Marchand, E.; Spindler, F.; Chaumette, F. ViSP for visual servoing: A generic software platform with a wide class of robot control skills. IEEE Robot. Autom. Mag. 2005, 12, 40–52.
  21. Mezouar, Y.; Chaumette, F. Path planning for robust image-based control. IEEE Trans. Robot. Autom. 2002, 18, 534–549.
  22. Hart, P.; Nilsson, N.; Raphael, B. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107.
  23. Chaumette, F. Potential problems of stability and convergence in image-based and position-based visual servoing. In The Confluence of Vision and Control; Kriegman, D.J., Hager, G.D., Morse, A.S., Eds.; Springer: London, UK, 1998; pp. 66–78.
  24. Bukhori, I.; Ismail, Z.H.; Namerikawa, T. Detection strategy for kidnapped robot problem in landmark-based map Monte Carlo Localization. In Proceedings of the 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Langkawi, Malaysia, 18–20 October 2015; pp. 75–80.
  25. Desrochers, B.; Lacroix, S.; Jaulin, L. Set-membership approach to the kidnapped robot problem. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 3715–3720.
  26. Majdik, A.; Popa, M.; Tamas, L.; Szoke, I.; Lazea, G. New approach in solving the kidnapped robot problem. In Proceedings of the ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics), Munich, Germany, 7–9 June 2010; pp. 1–6.
  27. Ravankar, A.A.; Hoshino, Y.; Ravankar, A.; Jixin, L.; Emaru, T.; Kobayashi, Y. Algorithms and a Framework for Indoor Robot Mapping in a Noisy Environment Using Clustering in Spatial and Hough Domains. Int. J. Adv. Robot. Syst. 2015, 12, 27.
  28. Särkkä, S. Bayesian Filtering and Smoothing; Cambridge University Press: New York, NY, USA, 2013.
  29. Menegaz, H.M.T.; Ishihara, J.Y.; Borges, G.A.; Vargas, A.N. A Systematization of the Unscented Kalman Filter Theory. IEEE Trans. Autom. Control 2015, 60, 2583–2598.
  30. Scheding, S.; Dissanayake, G.; Nebot, E.M.; Durrant-Whyte, H. An experiment in autonomous navigation of an underground mining vehicle. IEEE Trans. Robot. Autom. 1999, 15, 85–95.
  31. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics (Intelligent Robotics and Autonomous Agents); The MIT Press: Cambridge, MA, USA, 2005.
  32. Hao, Y.; Xiong, Z.; Sun, F.; Wang, X. Comparison of Unscented Kalman Filters. In Proceedings of the 2007 International Conference on Mechatronics and Automation, Harbin, China, 5–8 August 2007; pp. 895–899.
  33. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; Wiley-Interscience: Hoboken, NJ, USA, 2006.
  34. Daum, F. Nonlinear filters: Beyond the Kalman filter. IEEE Aerosp. Electron. Syst. Mag. 2005, 20, 57–69.
  35. Ravankar, A.; Ravankar, A.A.; Kobayashi, Y.; Emaru, T. Avoiding blind leading the blind: Uncertainty integration in virtual pheromone deposition by robots. Int. J. Adv. Robot. Syst. 2016, 13, 1–16.
  36. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
  37. Mustiere, F.; Bolic, M.; Bouchard, M. Rao-Blackwellised Particle Filters: Examples of Applications. In Proceedings of the 2006 Canadian Conference on Electrical and Computer Engineering, Ottawa, ON, Canada, 7–10 May 2006; pp. 1196–1200.
  38. Kurt-Yavuz, Z.; Yavuz, S. A comparison of EKF, UKF, FastSLAM2.0, and UKF-based FastSLAM algorithms. In Proceedings of the 2012 IEEE 16th International Conference on Intelligent Engineering Systems (INES), Lisbon, Portugal, 13–15 June 2012; pp. 37–43.
  39. Zhao, S.; Shmaliy, Y.S.; Shi, P.; Ahn, C.K. Fusion Kalman/UFIR Filter for State Estimation with Uncertain Parameters and Noise Statistics. IEEE Trans. Ind. Electron. 2017, 64, 3075–3083.
  40. Ahn, C.K.; Shi, P.; Basin, M.V. Deadbeat Dissipative FIR Filtering. IEEE Trans. Circuits Syst. I Regul. Pap. 2016, 63, 1210–1221.
  41. Pak, J.M.; Ahn, C.K.; Shmaliy, Y.S.; Lim, M.T. Improving Reliability of Particle Filter-Based Localization in Wireless Sensor Networks via Hybrid Particle/FIR Filtering. IEEE Trans. Ind. Inform. 2015, 11, 1089–1098.
  42. Pak, J.M.; Ahn, C.K.; Shi, P.; Shmaliy, Y.S.; Lim, M.T. Distributed Hybrid Particle/FIR Filtering for Mitigating NLOS Effects in TOA-Based Localization Using Wireless Sensor Networks. IEEE Trans. Ind. Electron. 2017, 64, 5182–5191.
  43. Grisetti, G.; Kummerle, R.; Stachniss, C.; Burgard, W. A Tutorial on Graph-Based SLAM. IEEE Intell. Transp. Syst. Mag. 2010, 2, 31–43.
  44. Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age. IEEE Trans. Robot. 2016, 32, 1309–1332.
  45. Pioneer P3-DX Robot. 2016. Available online: http://www.mobilerobots.com/Mobile_Robots.aspx (accessed on 23 January 2016).
  46. TurtleBot 2 Robot. 2015. Available online: http://turtlebot.com/ (accessed on 10 October 2015).
  47. Ravankar, A.; Ravankar, A.A.; Hoshino, Y.; Emaru, T.; Kobayashi, Y. On a Hopping-points SVD and Hough Transform Based Line Detection Algorithm for Robot Localization and Mapping. Int. J. Adv. Robot. Syst. 2016, 13, 98.
  48. Wikipedia. Microsoft Kinect. 2016. Available online: https://en.wikipedia.org/wiki/Kinect (accessed on 25 November 2016).
  49. UHG-08LX Technical Specifications. 2017. Available online: http://www.robotshop.com/media/files/pdf/hokuyo-uhg-08lx-overview.pdf (accessed on 7 August 2017).
  50. Quigley, M.; Conley, K.; Gerkey, B.P.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009.
  51. ViSP: Visual Servoing Platform. 2017. Available online: https://visp.inria.fr/ (accessed on 1 January 2017).
  52. Spindler, F.; Novotny, F. Visp Auto Tracker. 2017. Available online: http://wiki.ros.org/visp_auto_tracker (accessed on 1 January 2017).
  53. Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: New York, NY, USA, 2004; ISBN 0521540518.
  54. Zaitsev, S. Jsmn Open Source JSON C Parser. 2015. Available online: https://bitbucket.org/zserge/jsmn/wiki/Home (accessed on 9 October 2015).
  55. Birk, A.; Carpin, S. Merging Occupancy Grid Maps from Multiple Robots. Proc. IEEE 2006, 94, 1384–1397.
  56. Saeedi, S.; Trentini, M.; Li, H. A hybrid approach for multiple-robot SLAM with particle filtering. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 3421–3426.
  57. Bonanni, T.M.; Corte, B.D.; Grisetti, G. 3-D Map Merging on Pose Graphs. IEEE Robot. Autom. Lett. 2017, 2, 1031–1038.
  58. Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees. Auton. Robot. 2013, 34, 189–206.
Figure 1. QR code and camera system for coupling. (a) Robot with fixed QR code and camera for visual servoing; (b) An example of pose estimation from the QR code in visual servoing.
Figure 2. Four steps of hitchhiking: (1) Handshake, (2) Coupling, (3) Navigation, and (4) Decoupling.
Figure 3. Hitchhiking is allowed if the common path d_hh is longer than a threshold distance ( T_dhh ). Point A is the handshake and coupling location, while point B is the decoupling location.
Figure 4. Benefit of a hitchhiking area with markers. (a) Hitchhiker is stationary while the driver aligns with the marker; (b) Implementation result in a real environment.
Figure 5. Intermittent information transfer from the driver every δ time-steps to recover from the ‘driver-lost’ scenario.
Figure 6. Decoupling process.
Figure 7. Robots used in the experiments. (a) Pioneer P3-DX; (b) Kobuki Turtlebot; (c) Motion model of two-wheel differential drive robots.
Figure 8. Simplified view of ROS topic (green), publisher (red), and subscriber (blue) nodes for module on/off.
Figure 9. Experiment 1. (a) Environment with new obstacles marked in red; (b) Coupling between the two robots. The ‘+’ marker on the pillar is marked with a red circle; (c) Turtlebot following the P3-DX through visual servoing. (Refer to the accompanying video provided in the Supplementary Materials section.)
Figure 10. Experiment 1. (a) Initial map without obstacles; (b) Updated map with new obstacles. The driver robot transferred the obstacle coordinates (marked in red) to the hitchhiker.
Figure 11. Experiment 2. (a) Environment with new obstacles marked in red; (b) Coupling between the two robots using the left wall as a marker; (c) Turtlebot following the P3-DX through visual servoing. (Refer to the accompanying video provided in the Supplementary Materials section.)
Figure 12. Experiment 2. (a) Initial map without obstacles; (b) Updated map with new obstacles. The driver robot transferred the obstacle coordinates (marked in red) to the hitchhiker.
Figure 13. Coupling, decoupling, and hitchhiker’s waiting time in different experiments done with/without new obstacles.
Figure 14. Top view of the 3D point cloud map in Experiment 2. (a) Initial map without obstacles; (b) Map with obstacles.
Figure 15. OctoMap of the point cloud map shown in Figure 14b.
Table 1. New obstacle information transferred to the hitchhiker. Coordinates (x, y) and dimensions (Width × Breadth) are in grid pixels.

             Experiment 1                   Experiment 2
Obstacle     (x, y)       Width × Breadth   (x, y)        Width × Breadth
Obstacle 1   (231, 152)   7 × 4             (394, 685)    11 × 11
Obstacle 2   (308, 129)   10 × 11           (493, 650)    14 × 11
Obstacle 3   (376, 150)   11 × 7            (632, 698)    10 × 12
Obstacle 4   —            —                 (795, 646)    21 × 10
Obstacle 5   —            —                 (931, 682)    17 × 11
Obstacle 6   —            —                 (1075, 654)   15 × 10
Table 2. Average time of the hitchhiking components.

Exp     Obstacles   Time to Couple   Time to Decouple   Waiting Time of Hitchhiker   Delay of Driver   Delay of Hitchhiker
Exp 1   No          15.00 s          17.67 s            19.33 s                      32.67 s           52.00 s
Exp 1   Yes         15.33 s          22.00 s            21.67 s                      37.33 s           59.00 s
Exp 2   No          33.00 s          19.00 s            11.33 s                      52.00 s           63.33 s
Exp 2   Yes         35.00 s          21.67 s            10.67 s                      56.67 s           67.33 s
Table 3. Modules run with and without hitchhiking.

Scheme        Robot   PP    OBS   LZN   MAP   COM   VS
Traditional   R1      On    On    On    On    On    Off
Traditional   R2      On    On    On    On    On    Off
Hitchhiking   R1      On    On    On    On    On    Off
Hitchhiking   R2      Off   Off   Off   Off   On    On

PP: Path Planning, OBS: Obstacle Avoidance, LZN: Localization, MAP: Mapping, VS: Visual Servoing, COM: Communication.
