
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Structured light is a perception method that obtains 3D information from images of a scene by projecting synthetic features with a light emitter. Traditionally, this method considers a rigid configuration, in which the position and orientation of the light emitter with respect to the camera are known and calibrated beforehand. In this paper we propose a new omnidirectional structured light system in flexible configuration, which overcomes the rigidity of traditional structured light systems. We propose the use of an omnidirectional camera combined with a conic pattern light emitter.

In computer vision, one of the most important goals is to obtain 3D information from the scene. This problem has been studied for many years [

In this paper, we explore a new configuration for structured light systems, where both components, the camera and the light emitter, are free to move in space. We call this a non-rigid or flexible configuration. To our knowledge, a non-rigid configuration has only been used before in [

Hence, in this work we present a novel omnidirectional structured light approach with totally free motion of the conic pattern light emitter. In

The remaining sections are organized as follows. In Section 2, the problem is formulated, and the camera and laser models used are presented, as well as the conic correspondence condition. In Section 3, the 3D information of the projection plane is computed. In Section 4, several simulations and experiments are shown. Finally, conclusions and remarks are given in Section 5.

In order to compute the depth of the scene we need to solve a conic reconstruction problem. In [

We use the sphere model for catadioptric projection introduced by Geyer and Daniilidis in [

Its corresponding inverse function is ^{−1}, which maps image points q into oriented 3D rays:
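As a hedged illustration, the inverse mapping of the unified sphere model can be sketched as follows; the intrinsic matrix K and mirror parameter ξ used here are example values, not the calibrated parameters of the actual system.

```python
import numpy as np

def lift_to_sphere(q, K, xi):
    """Back-project pixel q to a unit ray on the sphere (unified model).

    Assumes the Geyer-Daniilidis sphere model with mirror parameter xi
    and intrinsic matrix K; both are illustrative, not calibrated values.
    """
    x, y, _ = np.linalg.inv(K) @ np.array([q[0], q[1], 1.0])  # normalized plane
    r2 = x * x + y * y
    # Scale factor placing the normalized point back on the unit sphere
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

def project_from_sphere(s, K, xi):
    """Forward model: unit ray s -> pixel (used to check the inverse)."""
    m = np.array([s[0] / (s[2] + xi), s[1] / (s[2] + xi), 1.0])
    p = K @ m
    return p[:2]

K = np.array([[400.0, 0, 320], [0, 400.0, 240], [0, 0, 1]])
xi = 0.9
s = np.array([0.2, -0.1, 0.97])
s /= np.linalg.norm(s)          # unit ray toward the scene
q = project_from_sphere(s, K, xi)
s_back = lift_to_sphere(q, K, xi)
print(np.allclose(s, s_back))   # round trip recovers the ray
```

The round trip confirms that the lifted point lies on the unit sphere and coincides with the original viewing ray.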

The projector model is the same as the pin-hole camera, since the projector can conceptually be regarded as an inverse camera, projecting rays onto the scene. Its projected conic pattern is represented by x^{T}Cx = 0, where C is a symmetric 3 × 3 matrix and x are image points in projective coordinates [

Two conics are corresponding when both of them are projections of the same conic in space (see

For a pair of corresponding conics C and C' we define their corresponding cones as
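For reference, in the standard projective formulation the cone obtained by back-projecting an image conic C through a camera with projection matrix P is Q = P^{T}CP, a 4 × 4 symmetric quadric. The sketch below verifies this relation numerically with an illustrative camera, not the paper's actual setup.

```python
import numpy as np

# Image conic: circle of radius r centered at the principal point,
# in matrix form x^T C x = 0 with C = diag(1, 1, -r^2)
r = 0.5
C = np.diag([1.0, 1.0, -r * r])

# Example projection matrix P = [R | t] (identity rotation, small translation)
P = np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.0]])])

# Back-projected cone of the conic (4x4 quadric): Q = P^T C P
Q = P.T @ C @ P

# A 3D point whose image lies on the conic must satisfy X^T Q X = 0
X = np.array([r * np.cos(0.3) * 2.0 - 0.1, r * np.sin(0.3) * 2.0, 2.0, 1.0])
x_img = P @ X          # its projection (x, y, w) ~ P X
x_img /= x_img[2]
print(abs(x_img[:2] @ x_img[:2] - r * r) < 1e-9)  # image point on the circle
print(abs(X @ Q @ X) < 1e-9)                      # hence 3D point on the cone
```

Since X^{T}QX = (PX)^{T}C(PX), every 3D point projecting onto the conic lies on the cone, which is why two corresponding conics define two cones intersecting in the scene conic.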

The coefficients _{j}

This condition guarantees that the two conics are projections of the same conic in space. It plays a relevant role in the proposed method to compute the orientation of the light emitter with respect to the camera, as will be shown in the following sections.

The recovery of 3D information from conic correspondences has been studied in [

The 3D location of the light emitter is required in order to generate the second image of the scene conic. We use a calibrated omnidirectional image where the light emitter is partially visible, which helps to compute its relative 3D position and orientation with respect to the camera reference system.

From an omnidirectional image where the light emitter is observed, we can compute its translation with respect to the omnidirectional camera using the inverse sphere camera model given by

Let q be the image point associated with the ball center and s = ^{−1}(q) its corresponding ray on the unit sphere. This ray indicates the direction from the image center to the laser. To calculate the distance _{im}
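A minimal sketch of this distance computation, assuming a known physical ball radius and that the center and silhouette-edge rays have already been lifted to the unit sphere (all numeric values here are illustrative):

```python
import numpy as np

def ball_distance(s_center, s_edge, ball_radius):
    """Distance to the ball center from the angular radius it subtends.

    s_center, s_edge: unit rays through the ball center and a point on
    its silhouette; ball_radius: known physical radius (an assumption).
    The ball subtends half-angle alpha, so distance = R / sin(alpha).
    """
    alpha = np.arccos(np.clip(np.dot(s_center, s_edge), -1.0, 1.0))
    return ball_radius / np.sin(alpha)

# Synthetic check: ball of radius 0.02 m placed 0.5 m away on the z axis
R, d = 0.02, 0.5
s_center = np.array([0.0, 0.0, 1.0])
alpha = np.arcsin(R / d)                    # true angular radius
s_edge = np.array([np.sin(alpha), 0.0, np.cos(alpha)])
d_est = ball_distance(s_center, s_edge, R)
t = d_est * s_center                        # translation of the emitter
print(round(d_est, 6))  # -> 0.5
```

The recovered distance, multiplied by the unit ray s, gives the translation of the light emitter in the camera frame.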

In this section we explain two alternative methods to compute the laser orientation with respect to the camera. They are based on the detection of either one endpoint of the laser or both of its endpoints.
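As a hedged sketch of the two-endpoint variant, once the 3D positions of both laser endpoints are known in the camera frame, the emitter orientation reduces to the azimuth and elevation of the segment joining them (the names and axis conventions below are assumptions, not the paper's exact formulation):

```python
import numpy as np

def laser_orientation(p_near, p_far):
    """Azimuth/elevation (radians) of the laser axis from its endpoints.

    Convention assumed here: x forward, y left, z up; azimuth measured
    in the x-y plane, elevation measured from that plane.
    """
    v = p_far - p_near
    v = v / np.linalg.norm(v)
    azimuth = np.arctan2(v[1], v[0])
    elevation = np.arcsin(v[2])
    return azimuth, elevation

p1 = np.array([0.3, 0.1, 0.0])                               # near endpoint
p2 = p1 + np.array([np.cos(0.2), np.sin(0.2), 0.0]) * 0.15   # far endpoint
az, el = laser_orientation(p1, p2)
print(round(az, 3), round(el, 3))  # -> 0.2 0.0
```

The one-endpoint method replaces the second endpoint with additional image measurements, at the cost of a less constrained orientation estimate.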

_{x}_{y}_{z}

These two angles (

The computation of angle

As we mentioned before, the scene information is given by the location of the projection plane in the scene. This information is obtained from the pencil of quadric surfaces C(λ), through its eigenvalues (λ_{1}, λ_{2}) and eigenvectors (v_{1}, v_{2}). There are two solutions for the plane in which the conic lies. These planes are represented in Cartesian form p_{i}

To determine which one is the correct solution, we use the projection centers of the two views o and o', given by the kernel (nullspace) of their corresponding projection matrices P and P'

For non-transparent objects, the correct plane is the one for which (o^{T}p_{i})(o'^{T}p_{i}) > 0, i.e., both projection centers lie on the same side of the plane p_{i}
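These two steps, recovering the centers as nullspaces of P and P' and the same-side sign test, can be sketched as follows; the projection matrices and candidate planes are illustrative, not the calibrated ones.

```python
import numpy as np

def camera_center(P):
    """Projection center o as the right nullspace of the 3x4 matrix P."""
    _, _, Vt = np.linalg.svd(P)
    o = Vt[-1]
    return o / o[3]  # normalize the homogeneous coordinate

def select_plane(planes, P, P2):
    """Keep the plane with both centers on the same side: (o.p)(o'.p) > 0."""
    o, o2 = camera_center(P), camera_center(P2)
    for p in planes:
        if (o @ p) * (o2 @ p) > 0:
            return p
    return None

# Illustrative cameras: identity pose and a small baseline along x
P  = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

# Candidate planes n.x = d written as homogeneous 4-vectors (n, -d); both
# centers are near the origin, so only the plane z = 1 in front of the
# cameras keeps them on the same side.
candidates = [np.array([0.0, 0.0, 1.0, -1.0]),   # z = 1 (in front)
              np.array([1.0, 0.0, 0.0, -0.1])]   # x = 0.1 (between the cameras)
print(select_plane(candidates, P, P2))
```

The sign test discards the spurious solution of the pencil without any extra scene knowledge beyond the opacity assumption.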

To verify the validity of the proposed method, we perform experiments using simulated data and real images acquired with our omnidirectional structured light system with laser in hand. To measure the accuracy of the proposed approach we compute the distance from the camera to the projection plane and the orientation of the projection plane given by its normal.

In the first experiment we use the projection of a single conic in the scene. We tested different configurations of the system, varying the azimuth angle of the light emitter from −35° to 35° in intervals of five degrees with a constant elevation of zero degrees. The projection plane is located at one meter from the camera. We also add errors in the range [−5°, 5°], in one-degree steps, to the angles

The results for the One-endpoint method and the Two-endpoint method are shown in

In the proposed flexible configuration of the structured light system it is possible to acquire multiple observations of the same projection plane by moving the laser in hand while the omnidirectional camera is static. In

These experiments are performed using our wearable flexible structured light system, which is composed of a catadioptric camera designed by Vstone [

As explained before, our method to estimate the projection plane depth is based on acquiring multiple observations of the conic pattern projected on this plane while the omnidirectional camera is static. In these experiments we use seven omnidirectional images of the projected conic pattern. Illumination conditions are adequate to extract, at the same time, the conic pattern and the light emitter with the sphere attached to it. We perform two experiments, one for each orientation method of the light emitter. In the first experiment the plane is located at 1.3 m and in the second one at 1 m.

We use the HSI (Hue, Saturation, Intensity) color space since it is compatible with the visual perception of the human eye [
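A minimal sketch of such a segmentation in HSI space, using a synthetic image and illustrative thresholds for a red laser pattern (the paper's actual thresholds and hardware are not reproduced here):

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB float image (values in [0, 1]) to H (degrees), S, I."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = np.where(i > 0,
                 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-9),
                 0.0)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)
    return h, s, i

# Synthetic 2x2 image: one saturated red "laser" pixel on a gray background
img = np.array([[[0.9, 0.1, 0.1], [0.5, 0.5, 0.5]],
                [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]])
h, s, i = rgb_to_hsi(img)
# Illustrative thresholds: red hue, high saturation, non-dark pixels
mask = ((h < 30.0) | (h > 330.0)) & (s > 0.5) & (i > 0.1)
print(mask)
```

Thresholding on hue and saturation rather than raw RGB makes the extraction less sensitive to brightness changes, which is the usual motivation for this color space.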

Once the light pattern and the light emitter endpoints are extracted from the omnidirectional image we apply our method to obtain the depth information of the scene. We use the seven obtained solutions to compute a final one with a RANSAC algorithm adapted to planes as we have explained previously. The results of these experiments using the two orientation methods are shown in
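The plane-adapted RANSAC step can be sketched as follows on synthetic data; the threshold and iteration count are illustrative choices, not the paper's settings.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: (unit normal n, offset d) with n.x = d."""
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    n = Vt[-1]                      # direction of least variance
    return n, n @ centroid

def ransac_plane(pts, n_iters=200, thresh=0.02, seed=0):
    """Robust plane fit: sample 3 points, keep the model with most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(pts @ n - d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane(pts[best_inliers])  # refit on all inliers

# Synthetic scene: noisy points on the plane y = 1 plus gross outliers
rng = np.random.default_rng(1)
on_plane = np.column_stack([rng.uniform(-1, 1, 100),
                            np.ones(100) + rng.normal(0, 0.005, 100),
                            rng.uniform(-1, 1, 100)])
outliers = rng.uniform(-2, 2, (15, 3))
n, d = ransac_plane(np.vstack([on_plane, outliers]))
print(np.round(np.abs(n), 2), round(abs(d), 2))  # normal ~ (0, 1, 0), distance ~ 1
```

Refitting on the consensus set after the sampling loop is what consolidates the seven per-image solutions into a single robust plane estimate.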

We observe that this step, along with the non-linear optimization step, is the most time-consuming task. In this paper we use Matlab, but a proper implementation in C/C++, e.g., with the OpenCV library, would improve the efficiency of our approach.

In this paper, we present a new omnidirectional structured light system in flexible configuration which can be used as a personal assistance system. We use it to recover the scene structure. The system only requires a single omnidirectional image in which the light pattern and the light emitter are present. From this image, the position and orientation of the laser in space are computed. To our knowledge, this is the first structured light system in flexible configuration. Our approach has shown good results on simulated and real data. The orientation of the light emitter is better estimated when both of its endpoints are visible, which has a significant impact on the computation of scene depth. In future work, we expect to improve the image processing step in order to deal with more general illumination conditions.

This work was supported by the Spanish project DPI2012-31781.

The authors declare no conflict of interest.

Flexible structured light system. (

Projection of a 3D point to two image points in the sphere camera model.

Laser model. (

Depth information from conic correspondence using virtual images.

Methods to compute the orientation of the light emitter. (

Different configurations of the proposed structured light system. (

Simulation results using One-endpoint method with one conic projection. (

Simulation results using Two-endpoint method with one conic projection. (

Estimation of a plane located 1

Wearable flexible structured light system with light emitter in hand.

(

Scene depth information using simulated data and multiple azimuth observations of the conic pattern. A noise of 5° is added to the three angles representing the laser orientation and its projection in the image.

Method | Estimated distance / error | Estimated normal / angular error
Ground truth | 1 m | (0, 1, 0)
One-endpoint method | 1.03 m / 2.87% | (−0.061, 0.995, 0.078) / 5.74°
Two-endpoint method | 0.97 m / 2.78% | (−0.0294, 0.999, 0.009) / 1.77°

Distance estimated using our flexible structured light system.

Ground truth | 1.3 m | 1.3 m
Estimated distance | 1.17 m | 1.26 m
Error | 9.58% | 3.14%