
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Various models using a radiometric approach have been proposed to solve the problem of estimating the distance between a camera and an infrared emitter diode (IRED). They depend directly on the radiant intensity of the emitter, which is set by the IRED bias current. As is known, this current drifts with temperature, and this drift is transferred to the distance estimation method. This paper proposes an alternative approach that removes the temperature drift from the distance estimation method by eliminating the dependence on radiant intensity. The main idea is to use the relative accumulated energy together with other defined models, such as the zeroth-frequency component of the FFT of the IRED image and the standard deviation of pixel gray level intensities in the region of interest containing the IRED image. By combining these models, an expression free of the IRED radiant intensity was obtained. Furthermore, the final model permits simultaneous estimation of the distance between the IRED and the camera and of the IRED orientation angle. The alternative presented in this paper gave a 3% maximum relative error over a range of distances up to 3 m.

The camera model used in computer vision establishes a correspondence between real-world 3-D coordinates and image sensor 2-D coordinates by modeling the projection of the 3-D space onto the image plane [

If a 3-D positioning system is implemented using one camera described by the projective model, this implies relating image coordinates to world coordinates, which yields an ill-posed mathematical problem. In this analysis, one of the three dimensions is lost. From a mathematical point of view, this means that although the positioning system is capable of estimating a 2-D position of the subject, the distance between the camera and the subject cannot be calculated. In fact, the position in this case would be defined by a line passing through the optical center and the 3-D world coordinates of the subject.

The main problem in this case can thus be defined: how can the distance between the camera and a subject be estimated efficiently? That is, how can depth be estimated?

Mathematically, the typical projective model can be used to estimate depth by including additional constraints in the mathematical equation system. These additional constraints could be incorporated, for example, by using another camera to form a stereo vision system [

The radiometric model and the alternative distance measurement method proposed in [

In the image formation process, the camera accumulates incident light irradiance during exposure time. The accumulated irradiance is converted into an electrical signal that is sampled, quantized and coded to form pixel gray level intensity. By analyzing this process inversely, from pixel gray level intensity to the irradiance on the sensor surface, an inverse camera response function can be defined, and this constitutes the starting point for applying a radiometric model [

The function that relates pixel gray level intensities to light irradiance on the sensor surface is known as the camera radiometric response function [

However, in [

For example, if the positioning system is designed to estimate the 3-D position of a robot carrying an infrared emitter diode (IRED), then the IRED can be considered the point source and the irradiance on the sensor surface (camera) will decrease with the square of the distance [
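The point-source behavior described above can be sketched numerically; the function and parameter names below are illustrative, not the paper's notation, and the cosine factor is the usual first-order correction for a non-zero orientation angle:

```python
import math

def irradiance(i0, d, theta_deg=0.0):
    """Irradiance produced on the sensor by a point source of radiant
    intensity i0 at distance d, for an emitter orientation angle theta
    (0 degrees when the camera and the IRED are aligned)."""
    # Point-source model: irradiance falls off with squared distance.
    return i0 * math.cos(math.radians(theta_deg)) / d ** 2

# Doubling the distance divides the irradiance by four.
e1 = irradiance(0.2, 1.0)
e2 = irradiance(0.2, 2.0)
```

This inverse-square dependence is the property that all the distance estimation models below ultimately exploit.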

Images must be formed only by the power emitted by the IRED; background illumination must be suppressed using interference filters.

The IRED must be biased with a constant current to ensure a constant radiant intensity.

Focal length and aperture must be constant during the measurement process, as stated in projective models.

The camera and the IRED must be aligned. The orientation angles have not been considered yet.

The camera settings (such as gain, brightness, gamma) must be established in manual mode with a fixed value during the experiment. The goal is to disable any automatic process that could affect the output gray level.

Obviously, the power emitted by the IRED is affected by the azimuth and elevation angles, which can be modeled by the IRED emission pattern. As stated in the conditions defined above, in a previous experiment the camera was aligned with the IRED. However, in this paper the effect of the emission pattern of the IRED will be considered.

Another alternative to decrease the effect of the IRED emission pattern could be the use of an IRED source with an emission pattern shaped by optical devices. The idea is to ensure a constant radiant intensity over an interval of orientation angles. In this case, the effect of the IRED emission pattern could be omitted from the model definition. However, this alternative would be impractical, because it would add a new constraint to the practical implementation of the proposed alternative.

Considering these conditions, the camera would capture images of the IRED. From the analysis of the captured images, the relative accumulated image irradiance (E_{rrel}) can be obtained.

Under the conditions stated above, E_{rrel} depends on the IRED radiant intensity (I_{0}), the camera exposure time and the distance between the IRED and the camera.

There is a common singularity that could be interpreted as an unconsidered parameter if the distance measuring alternative is compared with the typical pin-hole model: the real position of the image sensor inside the camera.

The _{rrel}

Mathematically, to convert the real distance into the physical distance traveled by the incoming light, an offset displacement (

All automatic processes of the camera were disabled. The gain of the camera was fixed to the minimum value, and the brightness and gamma were also fixed to constant values during the whole implementation.

The modeling process used to obtain the function relating E_{rrel} to the IRED radiant intensity I_{0}, the camera exposure time t and the distance d between the IRED and the camera proceeded factor by factor: coefficients a_{1} and a_{2} were used to define a linear behavior of E_{rrel} with t, coefficients b_{1} and b_{2} were used to model the linear behavior of E_{rrel} with I_{0}, and coefficients c_{1} and c_{2} were used to model the linear behavior of E_{rrel} with d^{−2}.

Nevertheless, the behavior of E_{rrel} with the IRED orientation angle was not modeled (no factor in ϱ(θ) was included), because the camera and emitter were considered aligned [

After eliminating the parentheses, the model coefficients w_{j} (w_{1}, w_{2}, …) were estimated in a calibration process considering images captured with different values of I_{0} (different values for the IRED radiant intensities were obtained by changing the emitter bias currents) and different

The acquired data were used to form a system of equations defined by:
E_{rrel} = M w, where E_{rrel} is the vector of relative accumulated irradiances measured in the images, M is the matrix formed with the measured magnitudes (exposure time, I_{0} and d^{−2}) and w = [w_{1}, …, w_{8}]^{t} is the vector of unknown model coefficients.

Therefore, the coefficients can be calculated as w = M^{+} E_{rrel}, where M^{+} is the pseudo-inverse matrix of M.
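The pseudo-inverse solution of such a calibration system can be sketched with synthetic data; the matrix contents here are random stand-ins for the measured magnitudes, not real calibration data:

```python
import numpy as np

# Hypothetical calibration system: each row of M holds the model terms
# (products of exposure time, I0, d^-2, ...) measured for one image,
# and y holds the corresponding relative accumulated irradiance.
rng = np.random.default_rng(0)
M = rng.uniform(0.1, 1.0, size=(40, 8))   # 40 images, 8 model terms
c_true = np.arange(1.0, 9.0)              # coefficients to recover
y = M @ c_true

# w = M^+ y : the pseudo-inverse gives the least-squares solution.
c_hat = np.linalg.pinv(M) @ y
```

With more images than coefficients, the pseudo-inverse returns the least-squares fit, which is why over-determined calibration data is desirable.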

Once values for the model coefficients have been calculated, the distance between the camera and the IRED can be estimated directly from

The results of the measurement process suggest that the relative accumulated irradiance E_{rrel}

The procedure described in Section 1.1 was generalized in order to use other parameters extracted from the IRED image by considering only the pixel gray level intensities.

Specifically, the additional proposed parameters were the zeroth-frequency component of the FFT of the IRED image [

The zeroth-frequency component of the FFT of the IRED image represents the average gray level intensity, so using the FFT only to obtain the average gray level would be an inefficient procedure. However, the FFT of the IRED image is used strategically to obtain new parameters that can be related to radiometric magnitudes, including the distance between the camera and the IRED.

Nevertheless, in this paper only the zeroth-frequency component of the FFT of the image of the IRED is used for distance estimation.
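The equivalence between the zeroth-frequency FFT component and the average gray level noted above can be checked numerically on a toy region of interest:

```python
import numpy as np

# Toy ROI containing the IRED image (gray levels).
roi = np.array([[10.0, 20.0, 30.0],
                [40.0, 50.0, 60.0]])

F = np.fft.fft2(roi)
# The zeroth-frequency (DC) component is the sum of all pixels, so
# dividing by the pixel count recovers the average gray level.
avg_from_fft = F[0, 0].real / roi.size
```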

In [ ], the behavior of the zeroth-frequency component F with the exposure time and I_{0} was modeled following the same procedure used for E_{rrel}.

In other words, the behaviors of F with the exposure time and I_{0} were measured experimentally, as was done for E_{rrel}. Finally, linear functions were considered to model the behaviors of F with I_{0} and with d^{−2}, respectively.

Thus, considering linear functions to model the behaviors of F with I_{0} and d^{−2}, the expression for F can be written as:

In the case of the standard deviation (Σ), the behavior of Σ with the exposure time and I_{0} was similar to that of E_{rrel}.

The behavior of E_{rrel} with d^{−2} is linear. In the case of Σ, as is shown in

Finally, the additional parameters were tested to estimate the distance between one camera and one IRED, considering that the camera was aligned with the IRED. In both cases, the relative error in distance estimation was lower than 3% in a range of distances from 4 to 8 m.

Therefore, to address the problem defined in Section 1, related to the estimation of the distance between one camera and one IRED onboard a robot, three independent alternatives have been proposed, which are summarized in

Nevertheless, some questions remain unanswered, for example:

What will happen when the camera and emitter are not aligned?

How can we ensure that the estimated distance is kept constant under different conditions, e.g., for different temperatures (knowing that the IRED radiant intensity depends on temperature)?

Can IRED radiant intensity be eliminated or estimated using the defined models?

These questions will be answered in this paper, following the principal objective, which is to propose a distance measurement methodology that provides a better performance in real environments by integrating the distance estimation alternatives summarized in

The following sections describe the solution to the problems stated above. First, in Section 2, the effect of the IRED orientation angle is incorporated into the models summarized above. The subsequent sections present the I_{0}-free distance estimation method and the corresponding results. Finally, in Section 6, the conclusions and future trends are presented.

In the measurement alternatives proposed in [

As can be seen in

The effect of the incidence angle (

From a radiometric point of view, the main angle to be considered in the distance estimation alternatives is the

Although the effect of the IRED orientation angle (

Starting with the relative accumulated image irradiance, E_{rrel}

_{rrel}

Mathematically, to take into account the IRED orientation angle, a linear factor in the emission pattern (g_{1}ϱ(θ) + g_{2}) was included, which states that E_{rrel} depends linearly on ϱ(θ).
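A minimal sketch of a Gaussian emission-pattern function such as ϱ(θ); the single-dispersion parameterization follows the Gaussian model adopted later in the paper, while the function name and values are illustrative:

```python
import math

def emission_pattern(theta_deg, sigma_deg):
    """Gaussian model of the IRED emission pattern: relative radiant
    intensity as a function of the orientation angle theta, with the
    dispersion sigma as the single shape parameter."""
    return math.exp(-theta_deg ** 2 / (2.0 * sigma_deg ** 2))
```

On-axis (θ = 0) the pattern equals 1 and it decays symmetrically with |θ|, which is why the dispersion can be fitted as one additional unknown in the calibration.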

Once the parentheses have been eliminated, the relative accumulated image irradiance would yield:
The coefficients were estimated from images captured with different values of I_{0} and different IRED orientation angles.

The values for the coefficients w_{j} were estimated by fitting the E_{rrel} model, with the unknown vector [w_{1}, …, w_{16}, σ]^{t}, to data captured with different values of I_{0} and different IRED orientation angles.

The result of this fitting process is shown in

Once the coefficients vector had been calculated, the E_{rrel}

In [_{rrel}

As was demonstrated in [

The methodology employed to include the orientation angle in the E_{rrel}

As in the case of the E_{rrel} model, the same procedure was applied to include the orientation angle in the other parameters.

The other principal problem to be analyzed in this paper is the variation in IRED characteristics under different conditions, for example, temperature. As stated earlier, radiant intensity can be set by the IRED bias current. As the IRED is a semiconductor diode, temperature variation produces a drift in bias current that is transferred to radiant intensity [

A priori, an estimation of the IRED radiant intensity would be a practical solution, but knowing the value of I_{0} would not give us any useful information related to the positioning system. The ideal solution would be to eliminate, at least in the mathematical procedure, the effect of the IRED radiant intensity on the distance estimation method.

Three equations were obtained mathematically; all of them depend on I_{0}, and therefore eliminating it would be the best solution.

First, the equations summarized in

The differential method analyzes two images captured with different exposure times t_{m} and t_{r}, defining ΔE_{rrel} = E_{rrel}(t_{m}) − E_{rrel}(t_{r}), where t_{r} is the reference exposure time.
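The cancellation achieved by the differential magnitudes can be illustrated with a toy model; the coefficient values are invented, and the factor k stands in for the I_{0}, distance and orientation dependences:

```python
def e_model(t, k):
    # Linear behavior with exposure time (a1*t + a2), scaled by a factor
    # k that collects the I0, distance and orientation dependences.
    a1, a2 = 0.8, 0.1
    return (a1 * t + a2) * k

# Two images of the same scene captured with different exposure times.
t_r, t_m, k = 2.0, 5.0, 3.0
d_e = e_model(t_m, k) - e_model(t_r, k)
# The exposure-independent term a2*k cancels in the difference:
# d_e == a1 * (t_m - t_r) * k
```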

Analogously, for the F parameter:

Finally, for Σ:

Subsequently, from one of these equations, I_{0} can be written as:

To simplify the mathematical expressions, this notation was used:

The idea was to substitute I_{0} in the other equations, which would produce two I_{0}-free equations.

Thus, substituting the expression for I_{0} into the ΔE_{rrel} equation, an I_{0}-free equation can be written as:

After eliminating the parentheses, the resulting expression is defined by the coefficients q_{j} and the function ϱ_{E}.

The function ϱ_{E} groups the emission-pattern dependences of the E_{rrel} and F models.

In the case of the standard deviation of pixel gray level intensities in the ROI containing the IRED image, the I_{0}-free expression would yield:
where p_{j} are the model coefficients.

Formally, the model fitting process for the ΔE_{rrel}-based equation estimates the unknown vector [q_{1}, …, q_{17}, σ_{E}], where σ_{E} is the dispersion of the Gaussian function ϱ_{E} and the fitting error is the difference between the measured and the theoretical ΔE_{rrel}.

Analogously, for ΔΣ, [p_{1}, …, p_{22}, σ_{Σ}] represents the vector of unknowns, with the Gaussian dispersion as an additional unknown, as was considered in the ΔE_{rrel} case; ε_{Σ} is the error between the measured and the theoretical ΔΣ.

The errors ε_{E} and ε_{Σ} were minimized, respectively, using the Levenberg-Marquardt algorithm.
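A sketch of a Levenberg-Marquardt fit with SciPy; the single-factor model and the data below are synthetic stand-ins for the full product-of-factors models:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data: one linear factor stands in for the full model.
rng = np.random.default_rng(1)
t = rng.uniform(1.0, 10.0, 50)          # exposure times (illustrative)
x_true = np.array([0.7, 0.3])           # coefficients to recover
y = x_true[0] * t + x_true[1]           # noise-free "measurements"

def residuals(x, t, y):
    # Error between the modeled and the measured magnitude.
    return (x[0] * t + x[1]) - y

# method="lm" selects the Levenberg-Marquardt algorithm.
fit = least_squares(residuals, x0=[1.0, 0.0], args=(t, y), method="lm")
```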

The results for the model fitting process are shown in

The measured and the modeled ΔE_{rrel} values are compared in the corresponding figures.

The results shown in these figures demonstrate that the defined models can be considered valid to mathematically characterize the ΔE_{rrel}, ΔF and ΔΣ behaviors.

The measurement alternative is formed by the two equations that are free of I_{0}.

Once the model coefficients q and p have been calculated, an expression to estimate the distance can be defined for each model. For example, for the ΔE_{rrel} model, the q_{1,2,3} coefficients were defined as:

Analogously, for the ΔΣ model:

The coefficients q_{1,2,3} and p_{1,2,3,4} in the ΔE_{rrel}- and ΔΣ-based expressions group the calibrated model terms.

From the root of each of these two equations, the distance between the camera and the IRED can be estimated.
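Extracting the distance as the real positive root of the polynomial obtained by equating the two model expressions can be sketched as follows; the coefficient values are invented, and the cubic form assumed here follows the p/q combination shown in the calibration material:

```python
import numpy as np

# Hypothetical calibrated coefficients of the two I0-free models.
p = [2.0, 1.0, -3.0, 0.5]   # four coefficients (Sigma-based model)
q = [4.0, 2.0, 2.0]         # three coefficients (E_rrel-based model)

# Cubic obtained by equating the two model expressions:
# p1*d^3 + (p2 - q1)*d^2 + (p3 - q2)*d + (p4 - q3) = 0
poly = [p[0], p[1] - q[0], p[2] - q[1], p[3] - q[2]]
roots = np.roots(poly)

# Keep the real, positive root as the physically meaningful distance.
d_candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
```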

Mathematically,

Each condition was formed by the combination of the values of I_{0}, the exposure times and the IRED orientation angles. For each condition, the q_{1,2,3} and p_{1,2,3,4} coefficients can be obtained, so the distance can be estimated.

Although the data used in the calibration process will be formally defined in

Using the fact that two I_{0}-free equations have been defined, another additional unknown was considered to form a system of two equations with two unknowns. Thus, the goal was to calculate the distance between the camera and the IRED as well as the IRED orientation angle simultaneously. Mathematically, this can be defined as the search for the vector [d, θ]^{t} that minimizes the residuals ε_{Errel} and ε_{Σ} of the two equations.

Evidently, using the system of equations defined in

The final alternative proposed in this paper uses the optimization stated in
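The simultaneous estimation of distance and orientation angle as a two-equation least-squares problem can be sketched as follows; the two functions are illustrative stand-ins for the I_{0}-free models, with invented constants:

```python
import numpy as np
from scipy.optimize import least_squares

# Two illustrative I0-free measurement equations: each combines an
# inverse-square distance dependence with a Gaussian orientation factor.
def f1(d, th):
    return (1.0 / d ** 2) * np.exp(-th ** 2 / 200.0)

def f2(d, th):
    return (2.0 / d ** 2) * np.exp(-th ** 2 / 450.0)

d_true, th_true = 2.3, 15.0
m1, m2 = f1(d_true, th_true), f2(d_true, th_true)   # "measurements"

def residuals(x):
    d, th = x
    return [f1(d, th) - m1, f2(d, th) - m2]

# Simultaneous estimation of distance and orientation angle.
sol = least_squares(residuals, x0=[1.5, 5.0],
                    bounds=([0.1, 0.0], [5.0, 40.0]))
```

Because the two equations weight the orientation factor differently, their ratio fixes θ and either equation then fixes d, which is what makes the joint system solvable.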

Algorithm 1: Estimation of the distance d^{(n)} and the IRED orientation angle θ^{(n)}.

1: Capture a reference image with exposure time t_{r} and an image n with exposure time t_{n}.
2: Estimate the parameters E_{rrel}, F and Σ from each image.
3: Compute the differential magnitudes ΔE_{rrel}^{(n)}, ΔF^{(n)} and ΔΣ^{(n)}.
4: Obtain the q_{1,2,3} and p_{1,2,3,4} coefficients from the calibrated I_{0}-free models.
5: Solve p_{1} d^{3} + (p_{2} − q_{1}) d^{2} + (p_{3} − q_{2}) d + (p_{4} − q_{3}) = 0.
6: Apply the optimization method to calculate d^{(n)} and θ^{(n)} simultaneously.

Another interesting aspect addressed in this paper, which can be inferred from Algorithm 1, is that different exposure times were considered to test the alternative for estimating the distance and the IRED orientation angle. This is related to the fact analyzed in [

The algorithm for estimating the distance and the IRED orientation angle uses several images captured with different exposure times t_{n}, together with a reference image captured with t_{r}, obtaining ΔE_{rrel}^{(n)}, ΔF^{(n)} and ΔΣ^{(n)}. Using the images captured with t_{n}, the IRED orientation angles θ^{(n)} were estimated from the estimated ellipse, as proposed in [

The main goal of the experimental tests was to demonstrate that the alternatives defined in

The tests were carried out using the measurement station shown in

To ensure real distances and IRED orientation, an accurate ad-hoc measurement station was built, which was controlled from a PC by serial port communication. The measurement station was composed of a pan-tilt platform onto which the IRED was mounted and which permitted the IRED orientation angles to be changed with a precision of 0.01 degrees. As can be seen in

It was necessary to calibrate the distance estimation alternative before the distance estimation process could begin. In other words, the model coefficient values had to be calculated before initiating the distance estimation process.

The calibration process was described briefly in Section 3, to demonstrate the validity of the defined model, as shown in

By combining the conditions summarized in

Once the model coefficients had been calculated, an experiment to estimate the distance between the IRED and the camera was carried out. In this experiment, a reference exposure time t_{r} and several exposure times t_{n} were used; for each t_{n}, the distance was estimated in order to analyze the influence of t_{n} and I_{0} in the distance estimation process, considering all possible conditions. Furthermore, it should be noted that in this experiment, a new value for I_{0} was used, which had not been considered in the calibration process.

In

To clarify the relationship between the IRED orientation angles and different exposure times, the data plotted in

The dependency of distance estimation accuracy on the differences in exposure times was analyzed in [

Thus, by analyzing

Using the optimum exposure time difference, another experiment was carried out to test the alternative for estimating the distance between the IRED and the camera and the IRED orientation angle simultaneously, as described in Algorithm 1. This experiment considered distances of 1,700, 2,300 and 2,900 mm between the camera and the IRED, and 10, 15 and 20 degrees for the IRED orientation. In addition, unlike the previous experiment, the IRED was biased with a random bias current with a mean of 500 mA and a dispersion of 50 mA. A random value for the IRED bias current was generated for each distance-orientation pair and for each

In

There is an interesting aspect that merits emphasis, which is related to the accuracy of distance estimations. Accuracy can be obtained from

However, as is shown in

Although estimation of the IRED orientation angle was not as accurate as distance estimation, the results obtained were qualitatively reasonable, especially the estimated distances. Furthermore, the worst estimations for the IRED orientation angles were obtained for angles lower than 10 degrees, as shown in

As a final experiment, the consistency of the distance measurement alternative was tested. The measurement alternative summarized in Algorithm 1 was included in a loop of 100 iterations. In each iteration, two images were captured, one with t_{r} and the other with t_{n}.
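The consistency statistics of such a loop (mean estimate, dispersion, relative error) can be computed as in this sketch; the estimate values below are invented for illustration, not measured data:

```python
import statistics

# Invented per-iteration distance estimates [mm] around a 2,900 mm target.
estimates = [2892, 2905, 2897, 2912, 2894, 2902, 2907, 2891, 2904, 2896]

mean_d = statistics.mean(estimates)          # average estimated distance
spread = statistics.stdev(estimates)         # dispersion over iterations
rel_error = abs(mean_d - 2900) / 2900        # relative error of the mean
```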

From

As a final comment, one aspect that requires further study is the selection of the magnitudes included in the camera-IRED system, especially the IRED bias current and the camera exposure times.

The values used in this paper were selected based on empirical results; therefore, the automatic selection of these values would improve the calibration and measurement process.

In this paper, an IRED radiant-intensity-free model has been proposed to decouple the intensity radiated by the IRED from the distance estimation method using only the information extracted from pixel gray level intensities and a radiometric analysis of the image formation process in the camera.

The camera-to-emitter distance estimation alternatives proposed in [ ] depend on the IRED radiant intensity; thus, the I_{0} parameter must be decoupled from the camera-to-IRED distance estimation alternatives.

The previously proposed alternatives to estimate the distance between an IRED and a camera also considered the camera to be aligned with the IRED, which reduces the possibilities of future implementations. However, in this paper, a study of the effect of the IRED orientation angle was performed, and this effect was modeled by a Gaussian function.

The dispersion of the Gaussian function used to model the effect of the IRED emission pattern was included as an additional unknown in the calibration process for each individual model defined for camera-to-IRED distance estimation. In the model validation, it has been demonstrated that a maximum error of 4% was obtained for the three individual models proposed in [

Once the IRED orientation angle effect had been described mathematically, a method to estimate the IRED orientation angle was implemented using a circular IRED, through the estimation of the ellipse formed by the projection of the circular IRED on the image plane. This method provided the IRED orientation angle with a maximum error of 2 degrees.

However, the main contribution of this paper is the mathematical approach to characterizing the IRED-camera system independently of the IRED radiant intensity, by using the models defined in [

The procedure to eliminate the effect of the IRED radiant intensity uses an expression for I_{0}, which is substituted in the ΔE_{rrel} and ΔΣ equations; in this way, two I_{0}-free expressions were obtained.

From the two I_{0}-free expressions, the distance is the main unknown; however, an optimization scheme was defined to calculate the distance between the IRED and the camera and the IRED orientation angle simultaneously.

The I_{0}-free alternative was tested for distance estimation in the range from 1,500 to 2,900 mm with three different IRED bias currents, for different IRED orientation angles and different exposure times. The results of distance estimation were very similar for all the conditions considered in this experiment. Furthermore, the experimental results obtained permitted the selection of optimum exposure time differences where distance estimations were more accurate.

Once the optimum exposure times had been selected, another experiment was carried out. In this case, the IRED was biased using a random bias current, in order to determine the independence of the distance estimation method from variations in IRED radiant intensity. Considering the random bias current, the experimental results demonstrated that the proposed alternative provided a maximum error of 2% in distance estimation. However, the IRED orientation angle estimations were not as accurate as the distance estimations.

As a further validation experiment, the consistency of the distance estimation method was tested for three different distance values over 100 repetitions. The results over the 100 repetitions showed that the maximum error of the average distance estimation was lower than 3% and the maximum dispersion was lower than 15 mm.

Finally, some aspects require further research. For example, throughout the modeling process, the IRED radiant intensity and the camera exposure times were selected empirically. Thus, a quality index based on on-line analysis of the IRED images acquired by the camera, to facilitate the assignment of values for these magnitudes, must be defined in order to increase the efficiency of the modeling and measurement processes, respectively.

This research was funded by the Spanish Ministry of Science and Technology sponsored project ESPIRA DPI2009-10143.

The authors declare no conflict of interest.

The standard deviation as a function of d^{−2}. Using this result, a quadratic expression for the behavior of Σ with d^{−2} was assumed.

Simplified diagram of the problem of estimating the distance between a camera (receiver surface) and an IRED (source).

Measured behavior of E_{rrel}

Relative accumulated image irradiance model fitting taking into account the IRED orientation angle.

Result of IRED orientation angle estimation using the method proposed by [

Other additional parameters extracted from IRED images and their relationship with the function used for IRED emission pattern modeling, (

Results for the model fitting considering the differential magnitudes ΔE_{rrel} and ΔΣ.

Function |p_{1} d^{3} + (p_{2} − q_{1}) d^{2} + (p_{3} − q_{2}) d + (p_{4} − q_{3})| using the data employed in the calibration process.

Measurement station used in practical validation of the alternative to measure the distance between the camera and the IRED. (

Results of the distance estimation process considering all available Δ

Results of distance estimation process as a function of Δ

Distance estimation as a function of Δ

Results of the distance and IRED orientation angle estimation using random bias currents in the IRED. (

Consistency of the distance estimation alternative proposed in this paper. (

Summary of parameter behaviors used to define the proposed models for the camera-to-IRED distance estimation method. The final expression for each model can be obtained as the product of the functions defined in each column. For example, Σ = (a_{Σ1} t + a_{Σ2}) × (b_{Σ1} I_{0} + b_{Σ2}) × (c_{Σ1} (d^{−2})^{2} + c_{Σ2} d^{−2} + c_{Σ3}) × (g_{Σ1} ϱ_{Σ}(θ) + g_{Σ2}).

| Parameter | Exposure time (t) | Radiant intensity (I_{0}) | Distance (d^{−2}) | Orientation angle (θ) |
|---|---|---|---|---|
| E_{rrel} | Linear | Linear | Linear | Linear with a function of the emitter pattern |
| F | Linear | Linear | Linear | Linear with a function of the emitter pattern |
| Σ | Linear | Linear | Quadratic | Linear with a function of the emitter pattern |

Summary of the models for the parameters extracted from the IRED images, including the effect of the IRED orientation angle, as functions of t, I_{0}, d^{−2} and ϱ(θ).

| Parameter | Model |
|---|---|
| E_{rrel} | (a_{E1} t + a_{E2}) × (b_{E1} I_{0} + b_{E2}) × (c_{E1} d^{−2} + c_{E2}) × (g_{E1} ϱ_{E}(θ) + g_{E2}) |
| F | (a_{F1} t + a_{F2}) × (b_{F1} I_{0} + b_{F2}) × (c_{F1} d^{−2} + c_{F2}) × (g_{F1} ϱ_{F}(θ) + g_{F2}) |
| Σ | (a_{Σ1} t + a_{Σ2}) × (b_{Σ1} I_{0} + b_{Σ2}) × (c_{Σ1} (d^{−2})^{2} + c_{Σ2} d^{−2} + c_{Σ3}) × (g_{Σ1} ϱ_{Σ}(θ) + g_{Σ2}) |

Calibration Data.

| Parameter | Values |
|---|---|
| Exposure time [ms] | t_{r} and t_{n} |
| IRED bias current [mA] | 475 and 500 |
| Distances [mm] | 1500, 2000, 2500 and 3000 |
| IRED orientation angles [degrees] | 0, 5, 10, 15, 20, 25, 30 and 35 |