Article

Sensor for Distance Measurement Using Pixel Grey-Level Information

José L. Lázaro, Angel E. Cano, Pedro R. Fernández and Yamilet Pompa
1 Electronics Department, University of Alcalá, Polytechnic School, University Campus, Alcalá de Henares, Madrid 28871, Spain
2 Telecommunications Department, Oriente University, Av. de las Américas, SN, Santiago de Cuba 90900, Cuba
* Author to whom correspondence should be addressed.
Sensors 2009, 9(11), 8896-8906; https://doi.org/10.3390/s91108896
Submission received: 30 September 2009 / Revised: 29 October 2009 / Accepted: 4 November 2009 / Published: 6 November 2009
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain)

Abstract

An alternative method for distance measurement is presented, based on a radiometric approach to the image formation process. The proposed methodology uses images of an infrared emitting diode (IRED) to estimate the distance between the camera and the IRED. Camera output grey-level intensities are a function of the accumulated image irradiance, which in turn is related by the inverse distance square law to the distance between the camera and the IRED. The magnitudes that affect image grey-level intensities, and therefore the accumulated image irradiance, including the camera-IRED distance, were integrated into a differential model which was calibrated and used for distance estimation over a 200 to 600 cm range. In this preliminary model, the camera and the emitter were aligned.

1. Introduction

Distance estimation using vision sensors is an important aspect of robotics since many robot positioning algorithms use distance to calculate the robot's position as the basis for more complicated tasks.
Traditionally, distance measurement in robotics has been conducted by ultrasonic (US) and infrared (IR) sensing. Several methods based on the line-of-sight (LOS) and echo/reflection models have also been used. The LOS model places the emitter and detector in different locations, and signals travel from emitter to detector. The Reflection model links the emitter and detector physically (in the same place), and signals are reflected off an object or wall, following a round-trip path.
In the Reflection model, the viability of IR as an accurate means of measuring distance depends on extensive prior knowledge of the surface (scattering, reflection, and absorption). [1] details a method for determining surface properties, and subsequently calculating the distance to the surface and the relative orientation of the surface in an unknown environment using previously acquired sensory data; the developed sensor provides accurate range measurements when used in conjunction with other sensing modalities. In [2], low-cost infrared emitters and detectors are used for the recognition of surfaces with different properties in a location-invariant manner. In order to resolve the dependency of intensity readings on the location and properties of the surface, the use of angular intensity scans and an algorithm to process them was proposed. In [3], an IR sensor based on the light intensity back-scattered from objects and capable of measuring distances, and the sensor model, are described. In all cases [1-3], sensors for short distances are evaluated.
Vision devices based on a geometrical model have been used for many positioning tasks. However, most vision positioning algorithms are based on geometrical imaging models, where 3D to 2D projection constitutes the main mathematical tool for analysis [1-6]. With vision devices, the LOS model for signal transmission and distance measurement is used.
With geometrical models, a single camera and interest point can only estimate a 2D position, as projection based models cannot provide depth information. Nevertheless, depth can be calculated if additional information is included in the model. This entails the use of two vision sensors or some kind of active device [1,2,4,5,7-13].
To date, artificial vision has been one of the most widely used positioning techniques, since it gives accurate results. However, geometric modeling is normally used in order to obtain the distance to objects [14].
In intelligent spaces, smart living, etc., where a certain number of cameras are already installed, a simple method based on grey levels can be developed to determine the depth. The cameras already installed in the environment are used for performing other tasks necessary in smart living and intelligent spaces. For example, if a mobile robot carries an IRED, depth can be estimated from the cameras' pixel values, because there is a relationship between pixel grey-level intensity and the quantity of light that falls on the image sensor. Also, received light is related by inverse distance square law to the distance between the camera and the IRED.
When an image is captured by a digital camera, it provides a relative measure of the distribution of light within the scene. Thus, pixel grey-level intensities are a function of sensor surface irradiance accumulated during the exposition time.
The function that relates accumulated image irradiance to output grey-level intensities is known as the Radiometrical Camera Response Function (RCRF) [15-18].
In [16], the properties shared by all camera responses were analyzed. This enabled the constraints that any response function must satisfy, and the theoretical space of all possible camera responses, to be determined. Using databases, the authors concluded that real-world responses occupy a small part of the theoretical space of all possible responses. In addition, they developed a low-parameter empirical model of response.
In most cases, an inverse-RCRF is required in order to obtain a direct relationship between pixel grey-level intensities and accumulated image irradiance by a Radiometric Camera Calibration Process (RCCP) [16,17].
Furthermore, if the camera takes an image of an IRED and this can be isolated, a point source model for the IRED can be used to estimate the irradiance on the surface of the camera lens using inverse distance square law [19]. The camera lens captures irradiance distribution on the surface of the sensor and also accumulates sensor irradiance during the exposition time. Finally, the pixel grey-level intensity can be related to the accumulated image irradiance by a RCCP, lens irradiance can be related to the image's sensor irradiance by lens modeling, and lens irradiance can be related to the distance between the camera and the IRED by inverse distance square law. Thus, a relationship with pixel grey-level intensity can be defined which includes the distance between the camera and the IRED.
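In summary, and using the notation adopted in the following sections (Pe for the IRED radiant intensity, t for the exposition time and d for the camera-IRED distance; the intermediate symbols E_lens and E_img are introduced here only for illustration), this measurement chain can be sketched as:

M_i = f(E_{img,i} \, t), \qquad E_{img} \propto E_{lens}, \qquad E_{lens} \propto \frac{P_e}{d^2}

where f is the RCRF, so that inverting f (the RCCP) and applying the point source and linear lens models links the measured grey levels M_i to the distance d.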
Previous papers have been presented in the field of IR, in which the authors developed computer programs and methods. In [20], a reducing-difference method for non-polarized IR spectra was described, where the IR signal of a five component mixture was reduced stepwise by subtracting the IR signal of other components until total elimination of the non-desired signals was achieved.
The aim of the present study was to use a geometrical model for 2-D positioning, which would provide coordinates on a plane parallel to a mobile robot (for example), and then to use a radiometrical model in order to obtain the distance between the mobile robot and the plane. For our purposes, the LOS model was used in order to determine the distance between emitter (IR) and detector (CMOS camera).

2. The IRED-Camera Set

The use of a single IR point to develop an effective and practical method is based on the assumption that the final application will be carried out in settings such as intelligent spaces, smart living spaces, etc., where a certain number of cameras are already installed. The cameras already installed in the environment are used for performing other tasks necessary in these smart living and intelligent spaces.
One possibility would be to use a photodiode. However, these devices provide an instantaneous response to the signals received, and given the distances involved in these applications (several meters), and working with an LOS link (the best alternative available), the received power can be less than a pW. The low intensity of the received signal can, therefore, impede accurate, or even valid, measurement. Since distance estimation and robot position determination can involve measurements on a ms timescale, considered as real time, the signal must be integrated over a determined time interval. The use of a photodiode would imply the need to design signal conditioning, integration and digital circuits, etc. All of these are already available in a webcam, which is what the proposed method uses; consequently, the design is simpler, implementation is quicker and final costs are lower, as a variety of existing models can be selected.
A further reason for using a camera is that the differential measuring method requires two consecutive measurements taken with different integration times; since, with cameras, the integration time can be selected digitally from a computer, automation and control, as well as the speed and safety of data acquisition, are facilitated.
Later, when the method for distance estimation using mathematical algorithms has been fully developed, it will be possible to combine this data with data obtained from the geometric calibration of cameras in order to improve generation of the variables and parameters involved in position and location of the device incorporating the IRED.
In order to define a model for the IRED-Camera set, the following aspects were established:
  • Estimation of accumulated image irradiance. When inverse RCRF is used, a measure of the accumulated image irradiance can be obtained. A sample of energy accumulated in the camera is shown in Figure 1.
  • The relationship between accumulated image irradiance and lens irradiance. This can be obtained using the camera's optical system model and also includes the camera exposition time. A linear behaviour is assumed for the camera's optical system model.
  • Behaviour of lens irradiance with the distance between the camera and the emitter. For the IRED Camera set a point source model can be used; in this case, lens irradiance can be estimated using inverse distance square law.
  • Image irradiance must be due to IRED light energy. Therefore, background illumination must be suppressed. An ideal implementation would be to test the algorithm in a dark room; however, we used an interference filter to select a narrow wavelength band centered on the typical emitter wavelength in order to ensure that images were created only by IRED energy.
From a general point of view, accumulated image irradiance is a function of camera parameters and radiometric magnitudes which quantitatively affect the image formation process.
The fact that the emitter transmits up to 120°, or at a different angle, only influences the angle at which the detector can be situated in order to receive the emitter's signal, and does not affect the size of the image formed.
As regards image size, according to the laws of optical magnification this is only influenced by the size of the object and the distance at which it is located. In our case, as the diode size is both constant and very small, the image appears reduced. Nevertheless, as can be seen in Figure 1, the point of light image increases in size as the distance of image acquisition decreases.

2.1. Model Definition

Reference [21] was taken as the starting point, and the inverse RCRF “g” was estimated using the method proposed by Grossberg and Nayar [16]. However, a new practical measure for accumulated image irradiance Er was defined thus:
E_r = \frac{1}{A} \sum_{i=1}^{A} g(M_i)    (1)
where Mi is the normalized grey-level intensity for pixel i, 1 ≤ i ≤ A, and where A is the total number of pixels in an image's region-of-interest (ROI) containing the spot produced by the IRED. In practice, an ROI of 100 pixels × 100 pixels was selected. Since the camera and the IRED were aligned, the same ROI was selected for all the images used.
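As an illustration, a minimal sketch of this computation is given below, assuming the inverse RCRF g is available as a lookup table over the camera's integer grey levels and that the ROI position is fixed (function and variable names are hypothetical):

```python
import numpy as np

def accumulated_image_irradiance(image, g_lut, roi_center, roi_size=100):
    """Compute Er = (1/A) * sum_i g(M_i) over a square ROI (Equation (1)).

    image      : 2-D array of raw integer grey levels (e.g., 8-bit).
    g_lut      : inverse RCRF sampled at the camera's grey levels, so that
                 g_lut[m] ~= g(m / max_level) for m = 0..max_level.
    roi_center : (row, col) of the IRED spot; fixed here because the camera
                 and the IRED are assumed to be aligned.
    """
    r, c = roi_center
    h = roi_size // 2
    roi = image[r - h:r + h, c - h:c + h]   # A = roi_size**2 pixels
    return g_lut[roi].mean()                # (1/A) * sum of g(M_i)
```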
To define the differential model, the magnitudes and relationships affecting Er which were defined in [21], were also used here.
A differential method was selected because a measurement taken with a specific exposition time will include various errors due to camera irradiance, external illumination factors (spotlights, the sun, etc.), or the effect of temperature on the sensor, for example. The differential method enabled us to isolate the measurement from these effects.
Moreover, both the sensor and the method are economic, since cameras are already installed in the application environment. Furthermore, the method is simple to launch and installation of the system is easy. The system is non-invasive and safe to operate, and the sensorial system is complementary to other methods, facilitating ease of data fusion.

3. Differential Model

Assuming that the camera and the emitter are aligned initially, there are three magnitudes that affect the accumulated image irradiance for the camera-IRED set: the camera exposition time, the IRED radiant intensity and the distance between the IRED and the camera [21].
In addition, and as in [21], the behavior of Er with each of the magnitudes affecting the IRED image formation process was measured by fixing values for the other magnitudes. For example, in order to discover how Er behaves with camera exposition time, images were captured using fixed values for the emitter radiant intensity and distance whilst varying the exposition time. A similar methodology was used to obtain all Er's behaviors.
As in [21], the same Er behaviours with defined magnitudes were obtained. Therefore, all measured behaviors could be integrated into a unique expression as follows:
E_r = (\tau_1 t + \tau_2) \times (\rho_1 P_e + \rho_2) \times \left(\delta_1 \frac{1}{d^2} + \delta_2\right)    (2)
where τ1, τ2, δ1, δ2, ρ1, and ρ2 are the model parameters, and t, Pe, and d are the exposition time, the IRED radiant intensity and the distance between the camera and the emitter, respectively.
From (2), this can be re-written as:
E_r = k_1 \frac{P_e t}{d^2} + k_2 \frac{P_e}{d^2} + k_3 \frac{t}{d^2} + k_4 \frac{1}{d^2} + k_5 P_e t + k_6 t + k_7 P_e + k_8    (3)
where kj, 1 ≤ j ≤ 8, are model parameters that can be related to τ1, τ2, δ1, δ2, ρ1 and ρ2. Expression (3) has been obtained by expanding the parentheses in (2).
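For completeness, multiplying out (2) and matching coefficients with (3) gives the correspondence between the two parameter sets:

k_1 = \tau_1 \rho_1 \delta_1, \quad k_2 = \tau_2 \rho_1 \delta_1, \quad k_3 = \tau_1 \rho_2 \delta_1, \quad k_4 = \tau_2 \rho_2 \delta_1,
k_5 = \tau_1 \rho_1 \delta_2, \quad k_6 = \tau_1 \rho_2 \delta_2, \quad k_7 = \tau_2 \rho_1 \delta_2, \quad k_8 = \tau_2 \rho_2 \delta_2.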
If images captured with different camera exposition times are analyzed, then (3) can be written by considering the differences of accumulated image irradiances as follows:
E_r^{t_n} - E_r^{t_r} = k_1 \frac{P_e}{d^2} (t_n - t_r) + k_3 \frac{t_n - t_r}{d^2} + k_5 P_e (t_n - t_r) + k_6 (t_n - t_r)    (4)
where tn and tr are the different camera exposition times, tr is the fixed reference exposition time and tn represents different exposition time values.
Expression (4) was used as the proposed model to characterize the IRED-Camera set. Therefore, the values of the k parameters in (4) must be obtained in a calibration process.
Once the k parameters have been obtained, expression (4) can be solved for the distance estimation:
d_n = \sqrt{\frac{k_1 P_e (t_n - t_r) + k_3 (t_n - t_r)}{E_r^{t_n} - E_r^{t_r} - k_5 P_e (t_n - t_r) - k_6 (t_n - t_r)}}    (5)
The aim was to use the proposed differential model to estimate the distance between the camera and the IRED, where the model analyzes images of the IRED captured with different exposition times, and also assumes that distance and emitter radiant intensity are constant during the image capturing process.
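A minimal sketch of this distance estimation, written with hypothetical function and variable names, is shown below; it simply evaluates (5) for one pair of exposition times:

```python
import numpy as np

def estimate_distance(er_tn, er_tr, t_n, t_r, p_e, k1, k3, k5, k6):
    """Evaluate Equation (5): distance d_n from a pair of accumulated
    image irradiances measured with exposition times t_n and t_r.

    er_tn, er_tr   : accumulated image irradiances Er(t_n) and Er(t_r)
    p_e            : IRED radiant intensity (constant during the captures)
    k1, k3, k5, k6 : parameters obtained in the calibration process
    """
    dt = t_n - t_r
    numerator = k1 * p_e * dt + k3 * dt
    denominator = er_tn - er_tr - k5 * p_e * dt - k6 * dt
    return np.sqrt(numerator / denominator)
```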
The method described in this paper, based on differential image processing, presents several advantages, innovations and benefits related to previous studies, which can be summarized as follows:
  • the development of a sensor for distance measuring and pose detection based on grey level intensity of the images;
  • the development of a method for obtaining the distance between two points (IRED-camera) using a differential measurement;
  • the sensor and method are economic, since cameras are already in the application environment (requiring only one IRED for each object/mobile to be detected);
  • the method is simple to launch;
  • installation of the system is easy;
  • the system is non-invasive and safe;
  • the sensorial system is complementary to other methods, facilitating ease of data fusion.

4. Practical Implementation

For practical reasons, a Basler camera was used, with an SFH42XX high-power IRED with a 950 nm emission peak; an interference filter centered at 950 nm and with a bandwidth of 40 nm was therefore added to the camera, which improved the signal-to-noise ratio since it eliminated background illumination: all visible light, and infrared light below 930 nm and above 970 nm.
In order to use (5) for distance estimation, the ki parameters must be estimated in a calibration process. This process was implemented by analyzing an image sequence covering 4 different distances (d1 = 400 cm, d2 = 350 cm, d3 = 300 cm and d4 = 250 cm), 10 different exposition time differences, taking tr = 2 ms as the reference exposition time (selected sufficiently low to eliminate possible offset illumination and dark current effects) and tn = {3, 4, 5, …, 12} ms, and 3 different IRED radiant intensities, selected by varying the diode's polarization current. A representative result of the calibration process is shown in Figure 2.
In Figure 2, modeled and measured differences of accumulated image irradiances are shown versus exposition time differences. These values were extracted from images used in the calibration process, specifically for the distance d4 = 250 cm, where three different emitter radiant intensities and 10 different exposition time differences were considered. In addition, Figure 2 shows the effectiveness of the model calibration process. Once the calibration process has been carried out, experiments for distance estimation can be conducted.
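The fitting algorithm itself is not spelled out here; since (4) is linear in k1, k3, k5 and k6, one straightforward possibility, sketched below under that assumption and with hypothetical names, is an ordinary linear least-squares fit over all calibration measurements:

```python
import numpy as np

def calibrate_k(er_diff, t_n, t_r, p_e, d):
    """Least-squares estimate of k1, k3, k5, k6 in Equation (4).

    All arguments except t_r (the scalar reference exposition time) are
    1-D arrays with one entry per calibration measurement:
    er_diff : Er(t_n) - Er(t_r)
    t_n     : exposition times
    p_e     : IRED radiant intensities
    d       : known camera-IRED distances
    """
    dt = t_n - t_r
    # Each column multiplies one unknown parameter in Equation (4).
    A = np.column_stack([p_e * dt / d**2,   # k1
                         dt / d**2,         # k3
                         p_e * dt,          # k5
                         dt])               # k6
    k, *_ = np.linalg.lstsq(A, er_diff, rcond=None)
    return k  # array([k1, k3, k5, k6])
```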
Two experiments were performed to test the validity of the differential model for distance estimation. In the first experiment, the 200 cm to 380 cm range was considered whilst the second considered the 400 cm to 600 cm range. The second experiment showed greater error since the distances were greater and were also beyond the range for which the sensor had been calibrated (that is, the range used in the first experiment). Nevertheless, the second experiment shows that even so, sufficiently accurate measurements can still be taken. For both experiments, distance was increased stepwise by 20 cm and the camera was aligned with the IRED.
In addition, to improve the efficiency of the methodology in practical applications, four images were analysed to estimate distance. The first image was captured with an exposition time of tr = 2 ms and the others were captured with t1 = 9 ms, t2 = 10 ms and t3 = 11 ms respectively, thus obtaining three distance estimations. The final distance estimation was the mean value of these three estimations. The IRED radiant intensity was fixed at Pe2, corresponding to a diode polarization current of 5 mA. A sketch of this averaging step is given below.
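Reusing the estimate_distance() sketch from Section 3, the averaging could look roughly as follows (all numeric values are illustrative placeholders, not measured data):

```python
t_r = 2.0                                         # reference exposition time (ms)
er_ref = 0.06                                     # Er(t_r), illustrative value
captures = {9.0: 0.21, 10.0: 0.24, 11.0: 0.26}    # t_n (ms) -> Er(t_n), illustrative
p_e2 = 1.0                                        # radiant intensity level Pe2 (arbitrary units)
k1, k3, k5, k6 = 1.0e4, 2.0e3, 1.0e-3, 5.0e-4     # calibrated parameters, illustrative

estimates = [estimate_distance(er_tn, er_ref, t_n, t_r, p_e2, k1, k3, k5, k6)
             for t_n, er_tn in captures.items()]
d_final = sum(estimates) / len(estimates)         # final distance = mean of the three estimates
```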

5. Results

Experiments were carried out using the method described above; the equipment used is indicated in Section 4. Once images from the optimal exposition time range had been selected, the differential method for distance estimation was applied. The distance estimation results for each difference in exposition time, corresponding to the first experiment, are shown in Figure 3.
The final distance measurement is shown in Table 1.
In the first distance range, errors in distance estimation using the differential model are less than 8 cm, representing a relative error of less than 2.5%. The second experiment considered longer distances, and the results are given in Table 2.
In this case, the differential model was less accurate than in the first experiment.

6. Conclusions

An alternative method based on a radiometrical approach to the image formation process has been described for estimating the distance between a camera and an IRED. This method estimates the inverse RCRF in order to obtain a measure of the accumulated image irradiance, and shows that the accumulated image irradiance depends linearly on the emitter radiant intensity, the camera exposition time and the inverse square distance between the IRED and the camera. These behaviors are incorporated into a model, which can be re-written in a differential form.
The differential model has four parameters that must be estimated by means of a calibration process. Once the model's parameters have been calculated, the model's expression can be solved for distance estimation. Two distance ranges were considered for model validation. In the first range, errors were less than 8 cm. However, in the second experiment the errors were higher than in the first.
In conclusion, the proposed differential model represents an alternative method for estimating the distance between an IRED and a camera through analysis of image grey-level intensities.

Acknowledgments

This study was made possible thanks to the SILPAR II project (DPI2006-05835), sponsored by the DGI of the Spanish Ministry of Education, at the University of Alcalá. We would also like to thank the Spanish Agency for International Development Cooperation (AECID), under the aegis of the Ministry of Foreign Affairs and Cooperation (MAEC).

References

  1. Novotny, P.M.; Ferrier, N.J. Using infrared sensors and the Phong illumination model to measure distances. Proceedings of the International Conference on Robotics and Automation, Detroit, MI, USA, May 10–15, 1999; Vol. 2, pp. 1644–1649.
  2. Barshan, B.; Aytac, T. Position-invariant surface recognition and localization using infrared sensors. Opt. Eng. 2003, 42, 3589–3594.
  3. Benet, G.; Blanes, F.; Simo, J.E.; Pérez, P. Using infrared sensors for distance measurement in mobile robots. Rob. Auton. Syst. 2002, 40, 255–266.
  4. Faugeras, O.D. Three-Dimensional Computer Vision; MIT Press: Cambridge, MA, USA, 1993.
  5. Fernández, I.; Mazo, M.; Lázaro, J.L.; Martin, P.; García, S. Local positioning system (LPS) for indoor environments using a camera array. Proceedings of the 11th International Conference on Advanced Robotics, University of Coimbra, Portugal, June 30–July 3, 2003; pp. 613–618.
  6. Ito, M. Robot vision modeling — camera modelling and camera calibration. Adv. Rob. 1991, 5, 321–335.
  7. Adiv, G. Determining three-dimensional motion and structure from optical flow generated by several moving objects. IEEE Trans. Pattern Anal. Mach. Intell. 1985, 7, 384–401.
  8. Fujiyoshi, H.; Shimizu, S.; Nishi, T.; Nagasaka, Y.; Takahashi, T. Fast 3D position measurement with two unsynchronized cameras. Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, Kobe, Japan, July 16–20, 2003; pp. 1239–1244.
  9. Heikkilä, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077.
  10. Luna-Vázquez, C.A. Medida de la posición 3-D de los cables de contacto que alimentan a los trenes de tracción eléctrica mediante visión. Ph.D. dissertation, Universidad de Alcalá, Madrid, Spain, 2006.
  11. Lázaro, J.L.; Gardel, A.; Mazo, M.; Mataix, C.; García, J.C. Guidance of autonomous vehicles by means of structured light. Proceedings of the IFAC Workshop on Intelligent Components for Vehicles, Seville, Spain, June 6, 1997.
  12. Grosso, E.; Sandini, G.; Tistarelli, M. 3-D object recognition using stereo and motion. IEEE Trans. Syst. Man Cybern. 1989, 19, 1465–1476.
  13. Lázaro, J.L. Modelado de entornos mediante infrarrojos. Aplicación al guiado de robots móviles. Ph.D. dissertation, Universidad de Alcalá, Madrid, Spain, 1998.
  14. Lázaro, J.L.; Gardel, A.; Mazo, M.; Mataix, C.; García, J.C.; Mateos, R. Mobile robot with wide capture active laser sensor and environment definition. J. Intell. Rob. Syst. 2001, 30, 227–248.
  15. Lázaro, J.L.; Mazo, M.; Mataix, C. 3-D environments recognition using structured light and active triangulation for the guidance of autonomous vehicles. Proceedings of the ASEE/IEEE Frontiers in Education Conference, Madison, WI, USA, June 1997; No. A-4.
  16. Lázaro, J.L.; Mataix, C.; Gardel, A.; Mazo, M.; García, J.C. 3-D vision system using structured light and triangulation to create maps of distances for guiding autonomous vehicles. Proceedings of the International Conference on Control and Industrial Systems, La Habana, Cuba, 2000.
  17. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Press Syndicate of the University of Cambridge: Cambridge, UK, 2003.
  18. Mitsunaga, T.; Nayar, S.K. Radiometric self calibration. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, June 1999; pp. 374–380.
  19. Grossberg, M.D.; Nayar, S.K. Modeling the space of camera response functions. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1272–1282.
  20. Grossberg, M.D.; Nayar, S.K. Determining the camera response from images: What is knowable? IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1455–1467.
  21. Reinhard, E.; Ward, G.; Pattanaik, S.; Debevec, P. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting; Morgan Kaufmann: San Francisco, CA, USA, 2005.
  22. McCluney, W.R. Introduction to Radiometry and Photometry; Artech House: Norwood, MA, USA, 1994.
  23. Ivanova, B.B.; Tsalev, D.L.; Arnaudov, M.G. Validation of reducing-difference procedure for the interpretation of non-polarized infrared spectra of n-component solid mixtures. Talanta 2006, 69, 822–828.
  24. Cano-Garcia, A.; Lázaro, J.L.; Fernandez, P.; Esteban, O.; Luna, C.A. A preliminary model for a distance sensor, using a radiometric point of view. Sens. Lett. 2009, 7, 17–23.
Figure 1. Representative samples of images used in the emitter-to-camera distance characterization.
Figure 2. Representative results for the differential model calibration process. Three different emitter radiant intensities were considered by changing the diode polarization current.
Figure 3. Representative distance estimations and absolute error for Experiment 1.
Table 1. Final distance estimation result for Experiment 1.
Real Dist. (cm)   Est. Dist. (cm)   Abs. Err. (cm)   Relat. Err. (%)
200               204.4             4.4              2.2
220               223.5             3.5              1.6
240               245.1             5.1              2.1
260               262.7             2.7              1.0
280               284.0             4.0              1.4
300               306.0             6.0              2.0
320               327.5             7.5              2.3
340               345.6             5.6              1.6
360               367.7             7.7              2.1
380               387.6             7.6              2.0
Table 2. Final distance estimation result for Experiment 2.
Real Dist. (cm)   Est. Dist. (cm)   Abs. Err. (cm)   Relat. Err. (%)
400               412.2             12.2             3.1
420               425.6             5.6              1.3
440               452.6             12.6             2.9
460               487.5             27.5             6.0
480               495.1             15.1             3.1
500               508.3             8.3              1.7
520               543.1             23.1             4.4
540               567.5             27.5             5.1
560               568.0             8.0              1.4
580               595.1             15.1             2.6
600               615.7             15.7             2.6
