Abstract: This paper presents a method for motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. A set of calibrated and synchronized cameras is placed at fixed positions within the environment (the intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function that combines information from all the cameras and relies neither on prior knowledge nor on invasive landmarks on board the robots. The objective function depends on three groups of variables: the segmentation boundaries, the motion parameters, and the depth. To minimize it, we use a greedy iterative algorithm whose three steps, after initialization of the segmentation boundaries and depth, are repeated until convergence.
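The abstract describes a greedy, alternating minimization over three groups of variables. The following Python sketch illustrates that control structure only; it is not the authors' implementation, and the objective, variable shapes, and update rules are placeholder assumptions chosen so the loop demonstrably converges.

```python
# Illustrative sketch of alternating (block-coordinate) minimization over
# three variable groups, as outlined in the abstract. All numerical details
# below are placeholder assumptions, not the paper's actual formulation.
import numpy as np

def objective(boundaries, motion, depth):
    # Toy quadratic stand-in for the multi-camera objective function.
    return (np.sum(boundaries ** 2)
            + np.sum((motion - 1.0) ** 2)
            + np.sum((depth - 2.0) ** 2))

def minimize_alternating(boundaries, motion, depth, tol=1e-6, max_iters=100):
    prev = objective(boundaries, motion, depth)
    cur = prev
    for _ in range(max_iters):
        # Step 1: update segmentation boundaries, motion and depth held fixed.
        boundaries = 0.5 * boundaries
        # Step 2: update motion parameters, boundaries and depth held fixed.
        motion = motion + 0.5 * (1.0 - motion)
        # Step 3: update depth, boundaries and motion held fixed.
        depth = depth + 0.5 * (2.0 - depth)
        cur = objective(boundaries, motion, depth)
        if prev - cur < tol:  # stop when the improvement stalls
            break
        prev = cur
    return boundaries, motion, depth, cur

b, m, d, cost = minimize_alternating(np.ones(4), np.zeros(3), np.zeros(3))
```

Each step reduces the objective with the other two variable groups held fixed, so the cost is monotonically non-increasing and the loop terminates when further iterations no longer help.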
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Losada, C.; Mazo, M.; Palazuelos, S.; Pizarro, D.; Marrón, M. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots. Sensors 2010, 10, 3261-3279.