Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas
2. The URUS Project
2.1. Objectives of the URUS Project
2.2. Project Participants
- Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain
- Laboratoire d’Analyse et d’Architecture des Systèmes, CNRS, Toulouse, France
- Swiss Federal Institute of Technology Zurich, Switzerland
- Asociación de Investigación y Cooperación Industrial de Andalucía, Seville, Spain
- Scuola Superiore di Studi Universitari e di Perfezionamento Sant’Anna, Pisa, Italy
- Universidad de Zaragoza, Spain
- Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal
- University of Surrey, Guildford, UK
- Urban Ecology Agency of Barcelona, Spain
- Telefónica I + D, Spain
- RoboTech, Italy
2.3. Barcelona Robot Lab
3. The URUS Architecture
- Environment Layer:
  – The networked cameras oversee the environment and are connected through a Gigabit link to a rack of servers.
  – The wireless ZigBee sensors all communicate with a single subsystem, which is in turn connected to the system through one computer.
  – The WLAN environment antennas are connected through the Gigabit link to the rack servers.
  – People use personal devices to connect to the robots and the environment sensors; for instance, a mobile phone with PDA features connects to the system through GSM/3G.
- Robot Sensor Layer:
  – The robots have their own sensors connected through proprietary networks (usually Ethernet), which in turn connect to the system through WLAN and GSM/3G. A proprietary communications service has been developed to switch transparently between WLAN and 3G depending on network availability (a sketch of this link selection follows the list).
- Server Layer:
  – The server rack (8 servers with 4 cores each) is connected through Ethernet to the Environment Layer and the Robot Sensor Layer.
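The paper does not describe the implementation of the WLAN/3G switching service; the following Python sketch only illustrates the kind of link selection implied above. Everything in it (interface names, probe hosts, function names) is hypothetical, not from the project code.

```python
import socket

# Hypothetical interface names for the two links available to a robot.
WLAN_IFACE = "wlan0"
G3_IFACE = "ppp0"

def link_is_up(probe_host: str, port: int = 80, timeout: float = 1.0) -> bool:
    """Cheap reachability check: try to open a TCP connection to a host
    assumed to be reachable only over the link under test."""
    try:
        with socket.create_connection((probe_host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_link(wlan_probe: str) -> str:
    """Prefer WLAN when it is reachable; otherwise fall back to GSM/3G,
    mirroring the transparent switching described above."""
    return WLAN_IFACE if link_is_up(wlan_probe) else G3_IFACE
```

A real implementation would additionally bind sockets to the chosen interface (an OS-level concern) and re-probe periodically; the sketch only captures the preference order.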
4. Sensors in the Urban Site
4.1. Camera Network
4.2. Mica2 Network
4.3. Site Map
5. Sensors Included in Urban Robots
5.1. Sensors in the Robots Tibi and Dabo—Architecture and Functionalities
- Sensors for navigation: Two Leuze RS4 laser rangefinders and one Hokuyo laser rangefinder, as well as the Segway's own odometric sensors (encoders and IMU).
  – The first Leuze rangefinder is located at the bottom front, with a 180° horizontal field of view. This sensor is used for localization, security and navigation.
  – The second Leuze rangefinder is located at the bottom back, also with a 180° horizontal field of view. This sensor is used for localization, security and navigation.
  – The Hokuyo rangefinder is located at the front, mounted vertically at about the height of the robot's chest, also with a 180° field of view. It is used for navigation and security.
- Sensors for global localization: GPS and compass.
  – The GPS is used for low-resolution global localization, and can only be used in open areas where several satellites are visible. In the URUS scenario in particular, this sensor has very limited functionality because the surrounding buildings frequently block the line of sight to the satellites.
  – The compass is also used for recovering the robot's orientation. In the URUS scenario, however, it too has proven of limited use due to its large uncertainty in the presence of metallic structures.
- Sensors for map building: One custom-built 3D range scanner and two cameras.
  – The two cameras are located on the sides of the robot and face forward to ensure a good baseline for stereo triangulation (see the depth-from-disparity sketch after this list); they are used for map building in conjunction with the laser sensors, and can also be used for localization and navigation.
  – A custom-built 3D laser rangefinder unit has been developed in the context of the project. This unit, placed on top of a Pioneer platform, has been used to register finely detailed three-dimensional maps of the Barcelona Robot Lab, which support localization, map building, traversability computation, and calibration of the camera sensor network.
- Vision sensors: One Bumblebee camera.
  – The Bumblebee camera is a stereo-vision system used for detection, tracking and identification of robots and human beings. It also supplies images for robot teleoperation.
- Tactile display:
  – The tactile display is used for Human Robot Interaction (HRI), to assist people and to show the status of the robot as well as task-specific information, such as the destination during a guidance service.
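To make the stereo-baseline remark above concrete: with a calibrated pair, depth follows the standard triangulation relation Z = fB/d. A minimal sketch; the focal length, baseline, and disparity below are illustrative values, not taken from the paper:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo triangulation: depth Z = f * B / d.
    A wider baseline B yields larger disparities, hence better depth
    resolution at a given range (the reason the cameras are spread apart)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: f = 800 px, B = 0.5 m, d = 20 px  ->  Z = 20 m
print(stereo_depth(800.0, 0.5, 20.0))
```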
5.2. Romeo Sensors—Architecture and Functionalities
- Odometric sensors: Romeo has wheel encoders for velocity estimation, plus a KVH Industries gyroscope and an Inertial Measurement Unit (IMU) for angular velocity estimation (a generic dead-reckoning sketch follows this list).
- Rangefinders: Romeo has one SICK LMS 220-30106 laser rangefinder located at the front of the robot, at a height of 95 cm, for obstacle avoidance and localization. It also carries two Hokuyo URG-04LX units (short range, up to 4 m) at the back for backwards perception, and one Hokuyo UTM-30LX (up to 30 m) on the roof, tilted for 3D perception.
- A Novatel OEM differential GPS receiver.
- A FireWire color camera, which can be used for person tracking and guiding.
- A tactile screen, used for robot control and for human-robot interaction.
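The paper does not spell out how Romeo combines encoder and gyro readings; as a generic illustration of planar dead reckoning from these two sensors, here is a minimal sketch (the step values are hypothetical):

```python
import math

def integrate_odometry(x, y, theta, v, omega, dt):
    """One Euler step of planar dead reckoning.
    v     -- linear velocity from the wheel encoders [m/s]
    omega -- angular velocity from the gyroscope/IMU [rad/s]"""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive at 1 m/s while turning at 0.1 rad/s, for 10 steps of 0.1 s
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(*pose, v=1.0, omega=0.1, dt=0.1)
print(pose)  # accumulated (x, y, heading)
```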
5.3. ISTRobotNet Architecture and Functionalities
6. Decentralized Sensor Fusion for Robotic Services
7. Software Architecture to Manage Sensors Networks
8. Some Results in the URUS Project
- Propagation: All particles are propagated using the kinematic model of the robot and the odometric observation.
- Correction: Particle weights are updated according to the likelihood of each particle's state given the observations z_k, k = 1, ..., N_B (a standard form of this update is given below).
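The weight-update equation following the correction step was lost in extraction. A standard sequential importance-resampling update consistent with the description above is (this is the textbook form, cf. Thrun et al., not necessarily the exact expression in the original):

```latex
w_t^{[i]} \;\propto\; w_{t-1}^{[i]} \, \prod_{k=1}^{N_B} p\!\left(z_t^{k} \mid x_t^{[i]}\right),
\qquad i = 1, \ldots, N,
```

where x_t^[i] is the state of particle i, z_t^k the k-th observation, and N_B the number of observations fused at time t. A minimal NumPy sketch of both steps (the kinematic model and noise levels are placeholders, not the project's values):

```python
import numpy as np

def propagate(particles, v, omega, dt, noise=(0.05, 0.02)):
    """Propagation: apply the kinematic model with the odometric
    observation (v, omega), adding sampled motion noise.
    particles has shape (N, 3) for planar poses (x, y, theta)."""
    n = len(particles)
    v_s = v + np.random.normal(0.0, noise[0], n)
    w_s = omega + np.random.normal(0.0, noise[1], n)
    particles[:, 0] += v_s * np.cos(particles[:, 2]) * dt
    particles[:, 1] += v_s * np.sin(particles[:, 2]) * dt
    particles[:, 2] += w_s * dt
    return particles

def correct(weights, likelihoods):
    """Correction: multiply each weight by the likelihoods of the
    N_B observations (one column per observation) and renormalize."""
    weights = weights * np.prod(likelihoods, axis=1)
    return weights / weights.sum()
```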
Integration of Asynchronous Data
8.2. Tracking People and Detecting Gestures Using the Camera Network of Barcelona Robot Lab
8.2.1. The Method
8.2.2. Forming Temporal Links between Cameras
8.2.3. Modelling Color Variations
8.2.4. Calculating Posterior Appearance Distributions
8.2.5. Classification of Objects of Interest as Person or Robot
8.2.6. Gesture Detection
- Local temporal consistency of flow-based features. This approach relies on a qualitative representation of body-part movements to build the model of waving patterns. Human activity is modeled using simple motion statistics, without requiring the (time-consuming) pose reconstruction of parts of the human body. We use focus of attention (FOA) features, which compute optical flow statistics with respect to the target's centroid. To detect waving activity at every frame, a boosting algorithm uses labeled samples of FOA features in a binary problem: waving vs. not waving. We use the Temporal Gentleboost algorithm, which improves boosting performance by adding a new parameter to the weak classifier: the (short-term) temporal support of the features. We improve the noise robustness of the boosting classification by defining a waving event, which requires a minimum number of single-frame waving classifications within a suitably defined temporal window (a sketch of this event rule follows this list).
- Scale Invariant Mined Dense Corners Method. The generic human action detector utilizes an overcomplete set of features, which are data mined to find the optimal subset to represent an action class. Space-time features have shown good performance for action recognition [41,42]. They provide a compact representation of interest points and can be made invariant to some image transformations. While many such detectors are designed to be sparse in occurrence, we use dense, simple 2D Harris corners: although sparsity makes the problem tractable, it is not necessarily optimal in terms of class separability and classification. The features are detected in the (x,y), (x,t) and (y,t) channels of the sequences, at multiple image scales. This provides information on both spatial and temporal image changes, at a far denser detection rate than 3D Harris corners, and encodes both spatial and spatio-temporal aspects of the data. The overcomplete feature set is then reduced through levels of mining. Figure 23 shows the large number of corners detected on two frames (a sketch of the per-channel corner detection also follows this list).
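The waving-event rule of the first method reduces to counting positive single-frame classifications inside a sliding window. A minimal sketch; the window length and threshold are hypothetical, not the values used in the paper:

```python
def waving_events(frame_labels, window=15, min_hits=8):
    """frame_labels: per-frame boosting outputs (1 = waving, 0 = not).
    A waving event fires at frame t when at least `min_hits` of the
    last `window` frames were classified as waving."""
    events = []
    for t in range(len(frame_labels)):
        lo = max(0, t - window + 1)
        events.append(sum(frame_labels[lo:t + 1]) >= min_hits)
    return events

# A noisy burst of positives only fires once enough frames agree,
# which is what gives the event rule its robustness to classifier noise.
labels = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
print(waving_events(labels))
```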
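For the second method, the (x,y), (x,t) and (y,t) channels can be pictured as 2D slices of the video volume, each fed to an ordinary Harris detector. Below is a sketch using OpenCV on representative slices; the real detector runs densely over all frames and scales, the subsequent mining stage is omitted, and all parameter values are illustrative:

```python
import cv2
import numpy as np

def channel_corners(volume, max_corners=500):
    """Detect 2D Harris corners on (x,y), (x,t) and (y,t) slices of a
    grayscale video volume of shape (T, H, W). Corners in the temporal
    slices respond to motion, not just spatial texture."""
    T, H, W = volume.shape
    slices = {
        "xy": volume[T // 2],        # a single image frame
        "xt": volume[:, H // 2, :],  # one image row traced over time
        "yt": volume[:, :, W // 2],  # one image column traced over time
    }
    corners = {}
    for name, img in slices.items():
        pts = cv2.goodFeaturesToTrack(
            np.ascontiguousarray(img, dtype=np.uint8),
            maxCorners=max_corners, qualityLevel=0.01, minDistance=3,
            useHarrisDetector=True, k=0.04)
        corners[name] = np.empty((0, 2)) if pts is None else pts.reshape(-1, 2)
    return corners
```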
8.2.7. Neighbourhood Grouping
8.2.8. Fixed Camera Experiments
8.2.9. Classification of Robots and Humans
8.2.10. Gesture Detection with Local Temporal Consistency of Flow-based Features
8.3. Tracking with Mica2 Nodes
Decentralized Tracking with Cameras and Wireless Sensor Network
9. Lessons Learned
References
- Sanfeliu, A.; Andrade-Cetto, J. Ubiquitous networking robotics in urban settings. Proceedings of the IROS Workshop Netw. Rob. Syst., Beijing, China, October 15, 2006; pp. 14–18.
- Capitán, J.; Mantecón, D.; Soriano, P.; Ollero, A. Autonomous perception techniques for urban and industrial fire scenarios. Proceedings of the IEEE Int. Workshop Safety, Secur. Rescue Rob., Rome, Italy, September 27–29, 2007; pp. 1–6.
- Grime, S.; Durrant-Whyte, H.F. Data fusion in decentralized sensor networks. Control Eng. Practice 1994, 2, 849–863.
- Sukkarieh, S.; Nettleton, E.; Kim, J.H.; Ridley, M.; Goktogan, A.; Durrant-Whyte, H. The ANSER project: Data fusion across multiple uninhabited air vehicles. Int. J. Robot. Res. 2003, 22, 505–539.
- Barbosa, M.; Ramos, N.; Lima, P. Mermaid: Multiple-robot middleware for intelligent decision-making. Proceedings of the 6th IFAC/EURON Sym. Intell. Auton. Vehicles, Toulouse, France, September 3–5, 2007.
- Metta, G.; Fitzpatrick, P.; Natale, L. YARP: Yet another robot platform. Int. J. Adv. Robot. Syst. 2006, 3, 43–48.
- Valencia, R.; Teniente, E.; Trulls, E.; Andrade-Cetto, J. 3D mapping for urban service robots. Proceedings of the IEEE/RSJ Int. Conf. Intell. Robots Syst., St. Louis, MO, USA, October 11–15, 2009; pp. 3076–3081.
- Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005.
- Ortega, A.; Haddad, I.; Andrade-Cetto, J. Graph-based segmentation of range data with applications to 3D urban mapping. Proceedings of the European Conf. Mobile Robotics, Mlini, Croatia, September 23–25, 2009; pp. 193–198.
- Ortega, A.; Dias, B.; Teniente, E.; Bernardino, A.; Gaspar, J.; Andrade-Cetto, J. Calibrating an outdoor distributed camera network using laser range finder data. Proceedings of the IEEE/RSJ Int. Conf. Intell. Robots Syst., St. Louis, MO, USA, October 11–15, 2009; pp. 303–308.
- Ila, V.; Porta, J.M.; Andrade-Cetto, J. Information-based compact pose SLAM. IEEE Trans. Robot. 2010, 26, 78–93.
- Ila, V.; Porta, J.M.; Andrade-Cetto, J. Reduced state representation in delayed-state SLAM. Proceedings of the IEEE/RSJ Int. Conf. Intell. Robots Syst., St. Louis, MO, USA, October 11–15, 2009; pp. 4919–4924.
- Eustice, R.M.; Singh, H.; Leonard, J.J. Exactly sparse delayed-state filters for view-based SLAM. IEEE Trans. Robot. 2006, 22, 1100–1114.
- Konolige, K.; Agrawal, M.; Solà, J. Large scale visual odometry for rough terrain. Proceedings of the 13th Int. Sym. Robot. Res., Hiroshima, Japan, November 26–29, 2007.
- Ila, V.; Andrade-Cetto, J.; Sanfeliu, A. Outdoor delayed-state visually augmented odometry. Proceedings of the 6th IFAC/EURON Sym. Intell. Auton. Vehicles, Toulouse, France, September 3–5, 2007.
- Uhlmann, J. Introduction to the algorithmics of data association in multiple-target tracking. In Handbook of Multisensor Data Fusion; Liggins, M.E., Hall, D.L., Llinas, J., Eds.; CRC Press: Boca Raton, FL, USA, 2001.
- Corominas-Murtra, A.; Mirats-Tur, J.; Sanfeliu, A. Action evaluation for mobile robot global localization in cooperative environments. Robot. Auton. Syst. 2008, 56, 807–818.
- Mirats-Tur, J.; Zinggerling, C.; Corominas-Murtra, A. Geographical information systems for map based navigation in urban environments. Robot. Auton. Syst. 2009, 57, 922–930.
- Bradski, G. Computer vision face tracking for use in a perceptual user interface. Intel Technol. J. 1998, 1–15.
- Viola, P.; Jones, M. Robust real-time face detection. Int. J. Comput. Vision 2004, 57, 137–154.
- Gonçalves, N.; Sequeira, J. Multirobot task assignment in active surveillance. Proceedings of the 14th Portuguese Conf. Artificial Intell., Aveiro, Portugal, October 12–15, 2009; Vol. 5816, pp. 310–322.
- Spaan, M.; Gonçalves, N.; Sequeira, J. Multirobot coordination by auctioning POMDPs. Proceedings of the IEEE Int. Conf. Robot. Automat., Anchorage, AK, USA, May 3–8, 2010 (to appear).
- Kaelbling, L.; Littman, M.; Cassandra, A. Planning and acting in partially observable stochastic domains. Artif. Intell. 1998, 101, 99–134.
- Pahliani, A.; Spaan, M.; Lima, P. Decision-theoretic robot guidance for active cooperative perception. Proceedings of the IEEE/RSJ Int. Conf. Intell. Robots Syst., St. Louis, MO, USA, October 11–15, 2009; pp. 4837–4842.
- Nettleton, E.; Thrun, S.; Durrant-Whyte, H.; Sukkarieh, S. Decentralised SLAM with low-bandwidth communication for teams of vehicles. In Field and Service Robots, Recent Advances in Research and Applications; Springer: Berlin, Germany, 2003; Volume 24, pp. 179–188.
- Capitán, J.; Merino, L.; Caballero, F.; Ollero, A. Delayed-state information filter for cooperative decentralized tracking. Proceedings of the IEEE Int. Conf. Robot. Automat., Kobe, Japan, May 12–17, 2009; pp. 3865–3870.
- Bourgault, F.; Durrant-Whyte, H. Communication in general decentralized filters and the coordinated search strategy. Proceedings of the 7th Int. Conf. Information Fusion, Stockholm, Sweden, June 28–July 1, 2004; pp. 723–730.
- Lima, P.; Messias, J.; Santos, J.; Estilita, J.; Barbosa, M.; Ahmad, A.; Carreira, J. ISocRob 2009 team description paper. Proceedings of the RoboCup Symposium, Graz, Austria, June 25–July 5, 2009.
- Corominas, A.; Mirats, J.; Sandoval, O.; Sanfeliu, A. Real-time software for mobile robot simulation and experimentation in cooperative environments. Proceedings of the 1st Int. Conf. Simulation, Modelling, Programming Autonomous Robots, Venice, Italy, November 3–7, 2008; Vol. 5325, pp. 135–146.
- Corominas-Murtra, A.; Mirats-Tur, J.; Sanfeliu, A. Efficient active global localization for mobile robots operating in large and cooperative environments. Proceedings of the IEEE Int. Conf. Robot. Automat., Pasadena, CA, USA, May 19–23, 2008; pp. 2758–2763.
- Fox, D.; Burgard, W.; Kruppa, H.; Thrun, S. A probabilistic approach to collaborative multi-robot localization. Auton. Robot. 2000, 8, 325–344.
- Corominas-Murtra, A.; Mirats-Tur, J.; Sanfeliu, A. Integrating asynchronous observations for mobile robot position tracking in cooperative environments. Proceedings of the IEEE/RSJ Int. Conf. Intell. Robots Syst., St. Louis, MO, USA, October 11–15, 2009; pp. 3850–3855.
- Gilbert, A.; Illingworth, J.; Bowden, R. Scale invariant action recognition using compound features mined from dense spatio-temporal corners. Proceedings of the 10th European Conf. Comput. Vision, Marseille, France, October 12–18, 2008; Vol. 5302, pp. 222–233.
- Kaew-Trakul-Pong, P.; Bowden, R. A real-time adaptive visual surveillance system for tracking low resolution colour targets in dynamically changing scenes. Image Vision Comput. 2003, 21, 913–929.
- Figueira, D.; Moreno, P.; Bernardino, A.; Gaspar, J.; Santos-Victor, J. Optical flow based detection in mixed human robot environments. Proceedings of the 5th Int. Sym. Visual Computing, Las Vegas, NV, USA, November 30–December 2, 2009; Vol. 5875, pp. 223–232.
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. Proceedings of the IEEE Conf. Comput. Vision Pattern Recog., San Diego, CA, USA, June 20–25, 2005; pp. 886–893.
- Dalal, N.; Triggs, B.; Schmid, C. Human detection using oriented histograms of flow and appearance. Proceedings of the 9th European Conf. Comput. Vision, Graz, Austria, May 7–13, 2006; Vol. 3951, pp. 428–441.
- Moreno, P.; Bernardino, A.; Santos-Victor, J. Waving detection using the local temporal consistency of flow-based features for real-time applications. Proceedings of the 6th Int. Conf. Image Anal. Recog., Halifax, Canada, June 6–8, 2009; Vol. 5627, pp. 886–895.
- Pla, F.; Ribeiro, P.; Santos-Victor, J.; Bernardino, A. Extracting motion features for visual human activity representation. Proceedings of the 2nd Iberian Conf. Pattern Recognition and Image Analysis, Estoril, Portugal, June 7–9, 2005; Vol. 3522, pp. 537–544.
- Ribeiro, P.; Moreno, P.; Santos-Victor, J. Boosting with temporal consistent learners: An application to human activity recognition. Proceedings of the 3rd Int. Sym. Visual Computing, Lake Tahoe, NV, USA, November 26–28, 2007; Vol. 4841, pp. 464–475.
- Schuldt, C.; Laptev, I.; Caputo, B. Recognizing human actions: A local SVM approach. Proceedings of the 17th IAPR Int. Conf. Pattern Recog., Cambridge, UK, August 23–26, 2004; Vol. 3, pp. 32–36.
- Dollar, P.; Rabaud, V.; Cottrell, G.; Belongie, S. Behavior recognition via sparse spatio-temporal features. Proceedings of the 14th Int. Conf. Comput. Communications and Networks, San Diego, CA, USA, October 17–19, 2005; pp. 65–72.
- Harris, C.G.; Stephens, M. A combined corner and edge detector. Proceedings of the Alvey Vision Conf., Manchester, UK, August 31–September 2, 1988; pp. 189–192.
- Laptev, I.; Marszalek, M.; Schmid, C.; Rozenfeld, B. Learning realistic human actions from movies. Proceedings of the IEEE Conf. Comput. Vision Pattern Recog., Anchorage, AK, USA, June 24–26, 2008; pp. 1–8.
- Agrawal, R.; Srikant, R. Fast algorithms for mining association rules in large databases. Proceedings of the 20th Int. Conf. Very Large Data Bases, Santiago de Chile, Chile, September 12–15, 1994; pp. 487–499.
- Caballero, F.; Merino, L.; Gil, P.; Maza, I.; Ollero, A. A probabilistic framework for entire WSN localization using a mobile robot. Robot. Auton. Syst. 2008, 56, 798–806.
- Gilbert, A.; Illingworth, J.; Capitán, J.; Bowden, R.; Merino, L. Accurate fusion of robot, camera and wireless sensors for surveillance applications. Proceedings of the 9th IEEE Int. Workshop Visual Surveillance, Kyoto, Japan, October 3, 2009.
© 2010 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).