Intelligent Robot Guidance in Fixed External Camera Network for Navigation in Crowded and Narrow Passages †

Autonomous indoor service robots navigate to specific areas using the same passages as people. These robots are equipped with visual sensors and laser- or sonar-based range sensors to avoid collisions with obstacles, people, and other moving robots. However, these sensors have a limited range and are often installed at a low height (mostly near the robot base), which limits the detection of far-off obstacles. In addition, these sensors are positioned to look forward, so the robot is often 'blind' to objects (e.g., people and robots) moving behind it, which increases the chance of collision. In places like warehouses, the passages are often narrow, which can cause deadlocks. We propose to use a network of external cameras fixed on the ceiling (e.g., surveillance cameras) to guide the robots by informing them about moving obstacles behind them and in far-off regions. This gives the robot a 'bird's-eye view' of the navigation space, enabling it to take decisions in real time to avoid obstacles efficiently. The camera sensor network is also able to notify the robots about moving obstacles around blind turns. A mutex-based resource-sharing scheme in the camera sensor network is proposed which allows multiple robots to intelligently share narrow passages through which only one robot or person can pass at a given time. Experimental results in simulated and real scenarios show that the proposed method is effective for robot navigation in crowded and narrow passages.


Introduction
To set the scene for this paper, consider the following examples explaining the two common problems faced by mobile service robots: • Example 1: Service robots often have sensors like cameras to perceive the external world and perform tasks like collision avoidance. Most of the time, these sensors are 'forward' facing.
As shown in Figure 1a, this causes the robot to be unaware of the movement of people or other robots behind it. Similarly, the sensors are attached at a low height, due to which the robot cannot see far-off obstacles, particularly in the case of occlusion, which is also shown in Figure 1a. The mobile robot must change its trajectory according to the movement of people approaching it from behind. Similarly, the robot would be in a better position to plan a trajectory if it could also get information about far-off entities in the environment.
• Example 2: To maximize area utilization, most of the passages in warehouses, libraries, etc. are very narrow. While people can flexibly cross over each other in such narrow passages, most mobile robots are too rigid for such a crossover to occur successfully. As shown in Figure 1b, robots R1 and R2 are in a deadlock condition, and one of them has to retreat to make way for the other. The problem is more complex if the movement of multiple robots and people is also considered. In Figure 1b, it is possible for robots R3, R4, or R5 and some person to try to access the narrow passage at the same time. In the absence of a policy to resolve conflicts, the robots might be perpetually retreating, which is inefficient. The policy must first avoid such conflicts by careful planning, and if conflicts occur, they must be resolved intelligently by taking into account important factors like the power availability of the robots and the priority of the task undertaken.

Regarding the state of the art, using external cameras with robots has been proposed earlier, but with application to localization [1], automatic calibration [2], and pose estimation [3]. In the current work, we discuss the case of using an external camera sensor network for sharing narrow paths and for providing visual information to the robot about blind spots for trajectory planning and path planning.

Main Idea of This Paper
The main idea of the paper is that a network of external camera sensors can help robots resolve the aforementioned (and many other) problems. Regarding the first problem, an external camera can provide a 'bird's-eye view' of the world to the robots (Figure 2a), which can use this information to perceive the movement of people behind them and carefully change their trajectories to make way for the people and avoid accidents. The bird's-eye view can also provide information about far-off obstacles. The same camera sensor network can also be used to design a shared-resource allocator for multiple robots (Figure 2b). The shared resource in the context of the second problem is the narrow passage. We have designed a modified priority-queue based allocator for multiple robots which intelligently avoids conflicts and allots the path to the most appropriate robot, considering factors like the power availability of the robots and the priority of the task undertaken.

Graph Representation of Environment
In this section we describe the architecture of the system and define the terms used in the rest of the paper. It is convenient to represent an environment comprising cameras, processing boards, pathways, and the direction of flow of people in the passages as a node map, which is essentially a directed graph with nodes and links, defined as follows (a brief code sketch follows the list).

• Node: A node comprises a processing board with a camera attached to it. Depending on the processing capability of the board, multiple cameras can be attached to the same board.

• Links: A link connects two nodes and represents the spatial path between two processing boards (or nodes). Links can be directional, representing not only the direction of traffic flow but also the direction from the source node to the destination node in node-to-node communication.
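For illustration, the node map described above can be captured by a small directed-graph data structure. The following Python sketch is only an example under assumed names; the class names, node IDs, IP addresses, and path IDs are illustrative and not taken from the actual implementation.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str                                   # e.g., "Na"
    ip_address: str                                # board address used for node-to-node messages
    cameras: list = field(default_factory=list)    # camera identifiers on this board

@dataclass
class Link:
    source: str        # source node ID (origin of traffic flow or message)
    destination: str   # destination node ID
    path_id: str       # pathway covered by this link, e.g., "AtoB"

class NodeMap:
    """Directional graph of camera nodes and the passages between them."""
    def __init__(self):
        self.nodes = {}
        self.links = []

    def add_node(self, node: Node):
        self.nodes[node.node_id] = node

    def add_link(self, link: Link):
        self.links.append(link)

    def neighbours(self, node_id: str):
        # Nodes reachable from node_id following the link direction.
        return [l.destination for l in self.links if l.source == node_id]

# Example: a three-node 'T'-shaped environment (addresses assumed).
node_map = NodeMap()
for nid, ip in [("Na", "192.168.0.101"), ("Nb", "192.168.0.102"), ("Nc", "192.168.0.103")]:
    node_map.add_node(Node(nid, ip, cameras=[f"cam_{nid}"]))
node_map.add_link(Link("Na", "Nb", "AtoB"))
node_map.add_link(Link("Nb", "Nc", "BtoC"))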

Modified Priority Queue
The resource allocator maintains a database of the power required by the robots to perform various tasks. Each task is given a unique ID (T_i) and a task priority (T_P). Tasks related to security, such as surveillance and patrolling, are given high priorities, whereas tasks such as cleaning are given lower priorities, as summarized in Table 1. A robot records its battery status before and after a task, and the power (P_i) required to perform the i-th task is calculated as P_i = P_(before i-th task) − P_(after i-th task). P_i for a task T_i can differ between runs, and an effective power (P_E(i)) for the i-th task is therefore calculated over the runs. Similarly, a robot can be instructed to perform a series of m tasks, and a total effective power is calculated as the sum of the effective powers of the m tasks.

Robots that want to access the narrow passage send a request to the node. Requests from multiple robots are maintained in a modified queue [4]. The queue is 'modified' in the sense that it differs from a traditional first-in first-out queue: it maintains a 'score' which determines which robot is allotted the resource. A request from a robot comes in the form of a key-value pair of the robot ID (R_i) and a corresponding vector (Λ_i) comprising the task priority (T_pi) and the current power level (P_li). Hence, a request message is a dictionary key-value pair {R_i : Λ_i}, where Λ_i = [T_pi, P_li].

The allocator calculates a score from all robot requests ({R_i : Λ_i}). The score determines which robot is given preference for access to the narrow path. Robots with high task priorities need to be prioritized; similarly, robots with low battery power may get stuck in the path, thereby blocking it, and must be prioritized too. Therefore, the score is calculated as a weighted combination of the remaining power and the task priority, where W_P and W_T are the weighting coefficients for the power preference and the task-priority preference, respectively. The coefficients increase or decrease the effect of the remaining power and the task priority on the final priority score. For small robots with limited power and fast discharge, a higher W_P value, and in conditions where task priorities must strictly be obeyed, a higher W_T value, will influence the priority score as desired.
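A minimal Python sketch of this bookkeeping and scoring is given below. The averaging of P_i over runs and the exact weighted form of the score are assumptions; the paper only specifies that a higher task priority and a lower remaining power should both raise the score.

def task_power(p_before: float, p_after: float) -> float:
    """P_i: power before the i-th task minus power after it."""
    return p_before - p_after

def effective_power(run_powers: list) -> float:
    """P_E(i): effective power of a task, here taken as the mean over runs (assumption)."""
    return sum(run_powers) / len(run_powers)

def total_effective_power(per_task_effective: list) -> float:
    """Total effective power for a series of m tasks (sum over the tasks)."""
    return sum(per_task_effective)

def priority_score(task_priority: float, power_level: float,
                   w_t: float = 1.0, w_p: float = 1.0) -> float:
    """Allocator score: higher task priority and lower remaining power both
    increase the score; this particular linear form is an assumption."""
    return w_t * task_priority + w_p * (1.0 - power_level)   # power_level assumed in [0, 1]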
Listing 1: An Example of Robot's Request in JSON Format.
1 { "NodeMsg" : { // node message 2 " robotID " : " 0 3 " , // id o f t r i g g e r node 3 " time " : " < timestamp >" , // unix timestamp 4 " pathID " : " AtoB " , // ID o f t h e path t o a c c e s s 5 " params " : [ // Robot ' s parameters 6 { " power " : " < value >" } , // Power Remaining 7 { " taskID " : " 1 " } , // ID o f t h e t a s k i n o p e r a t i o n 8 { • • • } ] // o t h e r parameters 9 } } However, a robot may be in emergency condition where the battery power is just about to go off.Such 'emergency' situations needs to be handled separately as a service robot whose battery is completely discharged may stop in the middle of the passage, which may be an obstruction for other robots, or may even permanently block the way if the passage is narrow.Such a situation can be avoided by a service robot by requesting a quick access to the resource when the battery power level is below a certain threshold (P TH ).Hence, in such 'emergency' cases, the robot request for access to the path is given the highest priority and hence the score is set to a maximum possible value.The priority score is calculated as, A score is calculated for all the request messages in the queue from {R i : Λ i }, and the messages are sorted according to the score in the queue.The message with the highest score is prioritized.Hence, unlike a traditional first-in first-out queue, requests can be processed from any position of the queue.If two requests have the same score, then, the earlier request is processed first, like in a normal queue.In case of multiple camera nodes at either ends of the passage, a common priority queue is maintained and updated by each node.

Path Allocation Process
Figure 3 shows the flowchart of the narrow-path allocation. Each camera node has image processing modules to detect motion, extract blobs, and match templates. If a robot is detected and there is a request from the robot, there is a handshake between the robot and the camera node. To avoid message-loss scenarios, the node tries to receive the message several times. A sample request message is shown in Listing 1. The request is appended to the modified priority queue, a score is calculated, and the queue is sorted. Resource sharing is done via a mutex mechanism which provides mutual exclusion, i.e., only one robot can hold the key (mutex) and proceed on the narrow path. As long as the mutex is locked, other robots need to wait. Other approaches like semaphores can also be used. For the request with the highest score, a check is performed on whether the mutex is locked. If not, that robot is assigned the key and access to the path, and the mutex remains locked until the robot has traversed the path. If the mutex is locked, the other robots wait until it becomes available.
If a person is detected and the path is not being accessed, the mutex is immediately locked. If the mutex was already locked, a check is performed on whether the direction of the person's movement is the same as or opposite to that of the robot accessing the path. If the direction is the same, the robot is notified and instructed to speed up. If the directions are opposite, the robot is notified and instructed to stop or retreat, depending on the width of the path.
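The allocation flow of Figure 3 can be outlined in Python as follows. This is an illustrative sketch, not the code running on the nodes; the queue interface (push/pop as sketched earlier), the simplified request field names, and the notification mechanism are assumptions.

import threading

class PathAllocator:
    """Per-node allocator; `queue` is any object with push()/pop() like the
    modified priority queue sketched above (assumed interface)."""

    def __init__(self, queue):
        self.queue = queue
        self.path_mutex = threading.Lock()   # only one robot may hold the narrow path
        self.current_robot = None            # robot currently granted the path, if any

    def handle_robot_request(self, request: dict):
        # Handshake succeeded and the request message was received; enqueue it.
        self.queue.push(request["robotID"], request["taskPriority"], request["power"])
        self.try_allocate()

    def try_allocate(self):
        # Grant the path to the highest-scoring request if the mutex is free.
        if self.path_mutex.acquire(blocking=False):
            robot_id = self.queue.pop()
            if robot_id is None:
                self.path_mutex.release()    # nothing to grant; free the path
            else:
                self.current_robot = robot_id
                self.notify(robot_id, "path granted")

    def robot_finished(self):
        # Called once the granted robot has traversed the path.
        self.current_robot = None
        self.path_mutex.release()
        self.try_allocate()

    def handle_person(self, same_direction: bool, path_wide_enough: bool):
        if not self.path_mutex.locked():
            self.path_mutex.acquire()        # path not in use: reserve it for the person
        elif same_direction:
            self.notify(self.current_robot, "speed up")
        else:
            self.notify(self.current_robot, "stop" if path_wide_enough else "retreat")

    def notify(self, robot_id, instruction):
        print(f"notify {robot_id}: {instruction}")   # placeholder for the real node-to-robot message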

Experimental Results
We implemented the proposed system using three nodes in a 'T'-shaped environment shown in Figure 4b. The node map of the test environment is shown in Figure 4c. The configuration of each node is shown in Figure 4a. In our implementation, a node comprised a Raspberry Pi board, which features a 700 MHz low-power ARM11 processor with 512 MB of SDRAM, with a webcam attached to it. The board has a 10/100 BaseT Ethernet socket but no built-in Wi-Fi, so we used an external Wi-Fi adapter. All the nodes were assigned a unique IP address (as shown in Figure 4c) and could communicate with each other and with the robot in the vicinity. The robots used were: (a) Kobuki Turtlebot [5]; and (b) Pioneer P3DX [6]. We performed tests with both person detection and robot detection as the trigger event. Each node ran three modules: (1) face detection (to identify a person); (2) robot detection; and (3) the allocator module with the modified priority queue. Figure 5a shows a person around node N_a. Figure 5b,c show the background difference image [7] and the threshold image, respectively. Similarly, Figure 6a shows the appearance of a robot in the field of view of node N_b. Figure 6b,c show the background difference and threshold images, respectively. As soon as the person was detected, the mutex was locked and the robot was made to wait until the path was free, after which the robot resumed motion on the path. Figure 7a shows the case of occlusion of the field of view by a person from the robot's perspective. However, the image from the external camera viewpoint shown in Figure 7b clearly shows objects ahead, which were extracted by the robot. Similarly, Figure 7c shows a person detected behind the robot and notified to the robot. Figure 7d shows that the robot shifted to the right to account for the motion of the person from behind, to avoid being a hindrance and to prevent any accident. Figure 8 shows the simulation results with the proposed resource allocator. In Figure 8a, robot R1 takes the shortest path from the start location 'S' to the goal 'G'. However, in the case of Figure 8b, with the path locked by R2 and with R3, R4, and R5 in the queue, R1 plans a longer route SPQG, which is more efficient than waiting for a long time.
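For reference, the background-difference and thresholding step used for detection (Figures 5 and 6) can be sketched with OpenCV as follows; the threshold value, minimum blob area, and function name are illustrative assumptions rather than the settings used on the Raspberry Pi nodes.

import cv2

def detect_moving_blobs(background_gray, frame_gray, diff_threshold=30, min_area=500):
    # Absolute difference between the stored background and the current frame.
    diff = cv2.absdiff(background_gray, frame_gray)
    # Binarize the difference image to obtain the threshold image.
    _, thresh = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    # Extract blobs as contours and keep only sufficiently large ones.
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return diff, thresh, blobs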

Conclusions
Our results show that robots can benefit from the external sensor networks in which they operate. Vision is a powerful source of information, and most public places like hospitals, universities, and airports already have a large network of surveillance cameras installed. Therefore, there is no need to specially install new infrastructure, which brings cost benefits. The main contribution of the proposed idea is that robots operating in the sensor network are no longer limited by the specifications of their on-board sensors. Rather, robots can access rich information from the sensor network for better navigation. The proposed idea is not limited to vision sensors, and robots can access a wide range of relevant information from different types of sensors to perform their tasks more efficiently.

Figure 1.
Figure 1. Two common problems of service mobile robots. (a) Robots are mostly unaware of moving entities like people or robots behind them or far away, which can cause accidents; (b) Problem of deadlock in narrow passages. Either robot R1 or R2 has to retreat to make way for the other robot.

Figure 2.
Figure 2. Robots in a camera sensor network. (a) Cameras providing a bird's-eye view of the environment to the robots; (b) Path allocation by the allocator considering factors like task priority and available power; (c) Requests from robots to access the path are saved in a queue, from which a score is calculated.

Figure 3.
Figure 3. Flowchart of path allocation to robots.

Figure 4.
Figure 4. Experiment setup. (a) Node comprising a Raspberry Pi board with a camera; (b) 'T'-shaped experimental passage; (c) Graph representation.

Figure 7.
Figure 7. (a) Robot's camera view, in which a person occludes the robot's view of objects in front; (b) External camera view, which makes it possible to see far-off objects like the robot; (c) Robot moving straight; (d) Robot turning right to make way for a person approaching from behind.

Figure 8.
Figure 8. Path planning in the sensor network from the start 'S' to the goal 'G' location. (a) Robot R1 takes the shortest route via the narrow passage when it is free; (b) With the path locked by R2 and with R3, R4, and R5 in the queue, R1 plans a longer route SPQG, which is more efficient than waiting for a long time.

Table 1.
Database of Tasks, Task ID, Task Priority, and Required Power.
Task ID (T_i) | Task Priority (T_P) | Required Power (P_i)