Article

The Influence of Camera and Optical System Parameters on the Uncertainty of Object Location Measurement in Vision Systems

by Jacek Skibicki *, Anna Golijanek-Jędrzejczyk and Ariel Dzwonkowski
Faculty of Electrical and Control Engineering, Gdańsk University of Technology, Narutowicza 11/12, 80-233 Gdańsk, Poland
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5433; https://doi.org/10.3390/s20185433
Submission received: 31 July 2020 / Revised: 11 September 2020 / Accepted: 18 September 2020 / Published: 22 September 2020
(This article belongs to the Section Optical Sensors)

Abstract:
The article presents the influence of the camera and its optical system on the uncertainty of object position measurement in vision systems. The aim of the article is to present a methodology for estimating the combined standard uncertainty of measuring an object's position with a vision camera treated as a measuring device. Identifying the factors affecting the location measurement uncertainty and determining their share in the combined standard uncertainty makes it possible to select the camera operating parameters so that the expanded uncertainty is as small as possible under the given measurement conditions. The uncertainty analysis presented in the article was performed under the assumption that no external factors (e.g., temperature, humidity, or vibrations) influence the measurement.

1. Introduction

Vision systems have been very popular for over 20 years. They are used not only in military solutions (biometric systems, automatic missile guidance, reconnaissance systems) and technology [1,2] (among others, modern human–computer interfaces, examination of object features, sorting products, inspection of dimensions and contours, checking correctness and completeness of product performance, food control [3]), but also in medicine (laboratory tests [4], surgical procedures, telemedicine), cartography and ecology (site map analysis for mineral exploration or pollution monitoring), transport (rail [5,6,7,8,9,10], air), the exploration of the Earth and the Universe (interpretation of satellite and astronomical images), and security measures and surveillance, such as reading license plates, detecting explosives at airports, and monitoring crowd behaviour.
As can be seen, due to their universality, these systems can be found in virtually every field of technology. The advantages of this technology include its increasing reliability, its increasing ease of use, its non-contact measurement method with high accuracy, and the decreasing cost of systems, the latter resulting in measurable economic benefits. A vision system is a set of cooperating electronic devices designed to enable visual inspection and analysis of an object or environment, similar to human eyesight.
Video and location measurements are a particular application of vision technology. The results of these measurements are used, among others, for detecting the position of a workpiece machined on a Computerised Numerical Control (CNC) machine or the displacement of contact wires in railway transport [11,12].
The measurement result should be presented together with a quality parameter of the measurement, which is either a measurement error or a measurement uncertainty. The measurement uncertainty is a parameter associated with the measurement result, defining the interval in which the actual value of the measured quantity is located with a certain probability [13]. Only then do the results provide information about the quality of the measurements performed, and only then can they be compared with each other. Results presented without a qualitative measure are incomplete from the metrological point of view and therefore useless. This follows both from the requirements that have been valid for more than 20 years [13,14,15,16] and from the fact that the measurement uncertainty has an advantage over the measurement error: it conveys more information about the measurement (the distribution of measurement results), as well as the value of the coverage factor and the coverage probability, e.g., 95%. For this reason, regardless of whether the measured quantity is electric [17] (e.g., current [18,19] or power measurement [20,21,22]) or non-electric (e.g., temperature [23,24,25], pressure [26], mass flow [27,28,29], and time [7]), an estimate of the uncertainty of the measured value is provided.
The uncertainty determines the range surrounding the measurement result containing a large, predetermined portion of the results that can be attributed to the measured quantity. This interval is called the range of uncertainty of the measurement result.
It should be remembered that if systematic disturbances are present in the experiment, then, according to good metrological practice and the Guide to the Expression of Uncertainty in Measurement (GUM) recommendations [13,14,15,16], the observation results should first be corrected to mitigate the influence of these disturbances, and only then should the uncertainty analysis be performed.
There are many works on vision or camera systems that assess the measurement uncertainty, in fields such as mechanics and machine vision [30,31,32], flow measurements [33,34,35], geoinformation sciences [36,37,38], and medicine [39]. Naturally, in each of the presented examples, the authors estimated quality parameters, most often as measurement errors or standard deviations. At the same time, the impact of one particular parameter on the measurement accuracy was the most frequently studied factor.
Publications can also be found on the assessment of uncertainty in measuring the location of an object, where the source is a camera treated as a measuring device [9,11,40]. However, the existing publications cover this issue only in general terms.
In the present publication, the authors provide a detailed analysis (based on experimental research) of the impact of several parameters, such as the brightness level of the recorded images, the sensitivity of the image sensor, and the focal length of the lens, on the accuracy of estimating the object position on the image sensor.
Moreover, the article shows how to determine the combined uncertainty of measuring the object's location, which distinguishes it from previous works.
Naturally, the image measurement uncertainty in the camera is affected by image calibration and image dewarping. Numerous methods are currently used for image calibration, including performing object position calibration and colour calibration [41], with the use of the Total Least Square (TLS) and Feedforward Neural Network algorithm [42], and double-DAC (Digital to Analog Converter) interlaced image calibration (TIDAC) with the use of machine learning [43].
Moreover, in the case of dewarping, in addition to traditional algorithms using the extraction and segmentation of object features, several modified methods of removing image distortion are used, e.g., coarse-to-fine dewarping with the use of deep learning methods [44,45,46], robust estimation of curled text lines [47], as well as the Scale-Invariant Feature Transform (SIFT) [48] or stochastic computing combined with a neuromorphic system [49].
The measurement method proposed by the authors does not require image calibration. The measurement result together with the uncertainty is obtained based only on the analysis of the recorded image and the characteristic dimensions of the measuring stand.
The identification of factors that cause the uncertainty of the position measurement and the determination of the degree of their contribution to the combined standard uncertainty of position measurement will allow for the camera parameters to be selected in such a way that the expanded uncertainty is as small as possible in the given measurement conditions.
The camera is not a typical measurement device, for which the producer determines the accuracy level and presents it in technical documentation. For this reason, it is necessary to estimate the measurement uncertainty in a different way. When measuring the position of an object in the video space with the measuring system, the camera uncertainty is understood as the uncertainty of measuring the object position on the image sensor. The purpose of this article is to present the methodology for estimating this uncertainty and provide a full metrological analysis, as well as determine the impact of selected parameters on the value of this uncertainty. This procedure is universal and can be used for any type of camera.
Section 2 of this article describes the measurement conditions when a video camera is used and the assumptions of the experiment, whose results are analysed in the following sections. Section 3 presents the derivation of a theoretical equation for the combined standard uncertainty of measuring an object position on the image sensor in the x and y axes. Then, the factors influencing the measurement uncertainty, such as the brightness level of the recorded images, the sensitivity of the image sensor, and the focal length of the lens, are presented (Section 4). Section 5 shows the influence of the selected factors on real measurement results, based on the example of measurements of vertical displacements of a high-voltage overhead power line conductor. Final conclusions are provided in Section 6.
Some of the results contained in this paper were partially presented in the book [50] written in the Polish language.

2. Principle of Measuring

The article analyses the factors affecting the uncertainty of the image position on the image sensor, using the stand whose diagram is shown in Figure 1.
For the discussed experiment, the aim of the analysis is to determine the uncertainty of the measurement of the object image position on the image sensor, as shown in Figure 2.
When measuring displacements in an optical way with the use of a camera, regardless of the position configuration and the related dependence on the measurement result, one of its components is always the position of the object image on the image sensor.
The following assumptions were made:
  • the same factors influence the accuracy of the point displacement on the image sensor in both axes,
  • constant mapping scale,
  • the camera is not affected by any external factors such as change in temperature, humidity, or vibrations.
For the considered configuration of the measurement stand, the value of the position of the object measured in the horizontal axis OX, determined based on the position of its image on the image sensor, is given by the following dependence:
$$x = x' \left( \frac{k}{F} - 1 \right) \qquad (1)$$
Due to the parallelism of the object plane and the image plane, the analogous relationship is valid for displacements in the vertical axis OY:
$$y = y' \left( \frac{k}{F} - 1 \right) \qquad (2)$$
The distance F between the optical centre of the lens and the image plane depends on the focal length of the lens and the scale of reproduction, and it is given by the dependence [11]:
$$F = \frac{k - \sqrt{k^2 - 4kf}}{2} \qquad (3)$$
The focal length of the lens f, even for fixed focal length lenses, is not a constant value but varies slightly depending on the current focus setting; this effect is known as focal length floating, and it must be taken into account. Therefore, the focal length of the lens is determined indirectly, based on an additional measurement for the current lens focus, in accordance with the formula [11]:
$$f = \frac{k}{2 + \dfrac{x_w}{x_w'} + \dfrac{x_w'}{x_w}} \qquad (4)$$
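As a numerical cross-check of Formulas (3) and (4), the short Python sketch below computes f from Formula (4) and F from Formula (3) and verifies that the results satisfy the thin-lens equation 1/f = 1/F + 1/(k − F), from which Formula (3) follows. The values of k, x_w, and x_w' are illustrative placeholders, not measurements from the paper.

```python
import math

def focal_length(k, xw, xw_img):
    """Formula (4): focal length from the reference-object size xw
    and its image size xw_img, at object-to-image-plane distance k."""
    return k / (2 + xw / xw_img + xw_img / xw)

def image_distance(k, f):
    """Formula (3): distance F between the optical centre of the lens
    and the image plane (the smaller root of F**2 - k*F + k*f = 0)."""
    return (k - math.sqrt(k**2 - 4 * k * f)) / 2

# Illustrative values (not from the paper); all lengths in mm
k, xw, xw_img = 1543.06, 240.0, 9.85
f = focal_length(k, xw, xw_img)
F = image_distance(k, f)

# Thin-lens consistency check: 1/f == 1/F + 1/(k - F)
assert abs(1 / f - (1 / F + 1 / (k - F))) < 1e-9
```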

3. Uncertainty of Measuring Objects with Imaging Camera

The uncertainty of the position measurement in the x-axis, according to the law of uncertainty propagation [13,15,16], is defined by the formula (for the y-axis the uncertainty analysis will be the same):
$$u(x) = \sqrt{\left(\frac{\partial x}{\partial x'}\right)^2 u(x')^2 + \left(\frac{\partial x}{\partial k}\right)^2 u(k)^2 + \left(\frac{\partial x}{\partial F}\right)^2 u(F)^2} \qquad (5)$$
The sensitivity coefficients in Formula (5) for the x-axis are respectively:
$$\frac{\partial x}{\partial x'} = \frac{k}{F} - 1 \qquad (6)$$
$$\frac{\partial x}{\partial k} = \frac{x'}{F} \qquad (7)$$
$$\frac{\partial x}{\partial F} = -\frac{x' k}{F^2} \qquad (8)$$
By introducing the above-described sensitivity coefficients into Formula (5), the following dependence was obtained, defining the standard uncertainty of the object position measurement for the x-axis:
$$u(x) = \sqrt{\left(\frac{k}{F} - 1\right)^2 u(x')^2 + \left(\frac{x'}{F}\right)^2 u(k)^2 + \left(\frac{x' k}{F^2}\right)^2 u(F)^2} \qquad (9)$$
It is known that the distance F between the optical centre of the lens and the image plane depends on the focal length and the scale of the projection, and it is given by (3), where the focal length f is described by Formula (4), in which x_w' is the size on the image sensor of the image of a reference object of known dimension x_w. The reference object is located at the distance k from the image sensor.
By introducing the formula for f (4) into the dependence (3), after the transformations, the following dependence was obtained:
$$F = \frac{k}{2}\left(1 - \sqrt{1 - \frac{4 x_w x_w'}{(x_w + x_w')^2}}\right) \qquad (10)$$
This formula links the distance F between the optical centre of the lens and the image plane to the image size of the reference object xw. In accordance with the law of uncertainty propagation, the uncertainty of the F determination was defined as follows (assuming there is no correlation between the uncertainties of measured values):
$$u(F) = \sqrt{\left(\frac{\partial F}{\partial k}\right)^2 u(k)^2 + \left(\frac{\partial F}{\partial x_w}\right)^2 u(x_w)^2 + \left(\frac{\partial F}{\partial x_w'}\right)^2 u(x_w')^2} \qquad (11)$$
where the sensitivity coefficients are given, respectively, by the formulas:
$$\frac{\partial F}{\partial k} = \frac{1}{2}\left(1 - \sqrt{1 - \frac{4 x_w x_w'}{(x_w + x_w')^2}}\right) \qquad (12)$$
$$\frac{\partial F}{\partial x_w} = \frac{k x_w'}{x_w^2 - x_w'^2}\left(1 - \frac{2 x_w}{x_w + x_w'}\right) \qquad (13)$$
$$\frac{\partial F}{\partial x_w'} = \frac{k x_w}{x_w^2 - x_w'^2}\left(1 - \frac{2 x_w'}{x_w + x_w'}\right) \qquad (14)$$
After introducing Relations (12)–(14) into Formula (11) concerning the uncertainty of the distance measurement F, it will take the form:
$$u(F) = \sqrt{\left[\frac{1}{2}\left(1 - \sqrt{1 - \frac{4 x_w x_w'}{(x_w + x_w')^2}}\right)\right]^2 u(k)^2 + \left[\frac{k x_w'}{x_w^2 - x_w'^2}\left(1 - \frac{2 x_w}{x_w + x_w'}\right)\right]^2 u(x_w)^2 + \left[\frac{k x_w}{x_w^2 - x_w'^2}\left(1 - \frac{2 x_w'}{x_w + x_w'}\right)\right]^2 u(x_w')^2} \qquad (15)$$
Standard uncertainties in measuring the sizes k and xw result from the accuracy of measuring instruments and are described by the following formulas (assuming a uniform probability distribution):
$$u(k) = \frac{\Delta k}{\sqrt{3}} \qquad (16)$$
$$u(x_w) = \frac{\Delta x_w}{\sqrt{3}} \qquad (17)$$
The standard uncertainty of the x_w' measurement results from the ability to determine the dimension of the reference object's image on the image sensor. This dimension is expressed in pixels on the recorded image, and one of the main sources of uncertainty is the inaccuracy of the reading made by the experimenter.
First, we assume that the image size of the reference object on the image sensor, x_w', can be determined in accordance with the following formula, where n_pix is the number of pixels occupied by the image and l_pix is the dimension of a single pixel:
$$x_w' = n_{\mathrm{pix}} \, l_{\mathrm{pix}} \qquad (18)$$
We also assume that the experimenter, determining the image size of the object based on the assessment of the photographic frame registered by the camera, may make an error equal to one pixel, due to the perceptual abilities of human sight when assigning the boundary edge of the image of the reference object to a particular pixel. With such assumptions, the following formula can be written:
$$u(x_w') = \frac{\Delta x_{we}}{\sqrt{3}} \qquad (19)$$
After introducing Dependencies (15)–(19) into Formula (9), the uncertainty of the object position measurement in the horizontal axis x takes the form:
$$u(x) = \sqrt{\left(\frac{k}{F} - 1\right)^2 u(x')^2 + \left(\frac{x'}{F}\right)^2 \left(\frac{\Delta k}{\sqrt{3}}\right)^2 + \left(\frac{x' k}{F^2}\right)^2 \left[ \left[\frac{1}{2}\left(1 - \sqrt{1 - \frac{4 x_w x_w'}{(x_w + x_w')^2}}\right)\right]^2 \left(\frac{\Delta k}{\sqrt{3}}\right)^2 + \left[\frac{k x_w'}{x_w^2 - x_w'^2}\left(1 - \frac{2 x_w}{x_w + x_w'}\right)\right]^2 \left(\frac{\Delta x_w}{\sqrt{3}}\right)^2 + \left[\frac{k x_w}{x_w^2 - x_w'^2}\left(1 - \frac{2 x_w'}{x_w + x_w'}\right)\right]^2 \left(\frac{\Delta x_{we}}{\sqrt{3}}\right)^2 \right]} \qquad (20)$$
The formula for the y-axis is analogous. In Dependence (20) and its y-axis counterpart, all components can be determined except u(x') and u(y'), i.e., the standard uncertainties of measuring the object position on the image sensor in the x and y axes, the determination of which is the purpose of this article.
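Dependence (20) is straightforward to evaluate numerically. The Python sketch below propagates the instrument resolutions Δk, Δx_w and the one-pixel reading error Δx_we through Formulas (15)–(19) and then through Formula (9); all numeric inputs are illustrative placeholders, not the paper's measured values.

```python
import math

SQRT3 = math.sqrt(3)

def u_F(k, xw, xwi, dk, dxw, dxwe):
    """Formula (15): standard uncertainty of the distance F."""
    u_k, u_xw, u_xwi = dk / SQRT3, dxw / SQRT3, dxwe / SQRT3
    root = math.sqrt(1 - 4 * xw * xwi / (xw + xwi) ** 2)
    dF_dk = 0.5 * (1 - root)
    dF_dxw = k * xwi / (xw**2 - xwi**2) * (1 - 2 * xw / (xw + xwi))
    dF_dxwi = k * xw / (xw**2 - xwi**2) * (1 - 2 * xwi / (xw + xwi))
    return math.sqrt((dF_dk * u_k) ** 2 + (dF_dxw * u_xw) ** 2
                     + (dF_dxwi * u_xwi) ** 2)

def u_x(x_img, u_x_img, k, F, dk, uF):
    """Formula (9): combined standard uncertainty of the position x."""
    u_k = dk / SQRT3
    return math.sqrt(((k / F - 1) * u_x_img) ** 2
                     + ((x_img / F) * u_k) ** 2
                     + ((x_img * k / F**2) * uF) ** 2)

# Illustrative inputs (all lengths in mm); not the paper's measured values
k, xw, xwi = 1543.06, 240.0, 9.85
dk, dxw, dxwe = 1.5, 0.05, 0.0055        # 0.0055 mm = one 5.5 um pixel
f = k / (2 + xw / xwi + xwi / xw)        # Formula (4)
F = (k - math.sqrt(k**2 - 4 * k * f)) / 2  # Formula (3)
uF = u_F(k, xw, xwi, dk, dxw, dxwe)
print(u_x(x_img=4.0, u_x_img=0.0002, k=k, F=F, dk=dk, uF=uF))
```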

4. Factors Determining Uncertainty of Object Location Measurement

Factors affecting the uncertainty values in the standard measurement of the object position on the image sensor u(x′) and u(y′) in the x and y axes are:
  • registration parameters, translating into the brightness level of recorded images,
  • features of the camera and the optical system, such as the sensitivity of the image sensor and the focal length of the lens.
Due to the fact that the measurement experiment did not show any correlation between the above uncertainties, they can be treated as mutually independent. Therefore, the combined standard uncertainty u(x′) is described by the dependence [13]:
$$u(x') = \sqrt{u_{\mathrm{reg}}^2(x') + u_{\mathrm{cam}}^2(x')} \qquad (21)$$
To estimate the uncertainty u_cam(x') associated with the characteristics of the camera and the optical system, one has to determine the effect of the current sensor sensitivity on the uncertainty of measuring the image position, u_sns(x'), and the effect of focal length changes at a constant reproduction scale, u_f(x'), in accordance with the formula:
$$u_{\mathrm{cam}}(x') = \sqrt{u_{\mathrm{sns}}^2(x') + u_f^2(x')} \qquad (22)$$
The final dependence on the uncertainty u(x′) in measuring the position of the object image on the image sensor in the horizontal axis x will take the form:
$$u(x') = \sqrt{u_{\mathrm{reg}}^2(x') + u_{\mathrm{sns}}^2(x') + u_f^2(x')} \qquad (23)$$
The uncertainty for the vertical axis y is determined analogously.
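Since the three components are mutually independent, Formula (23) is a plain root-sum-of-squares; a minimal Python sketch with illustrative component values:

```python
import math

def u_combined(u_reg, u_sns, u_f):
    """Formula (23): combined standard uncertainty u(x') of the
    object-image position, for mutually independent components."""
    return math.sqrt(u_reg**2 + u_sns**2 + u_f**2)

# Illustrative component values in micrometres (not measured data)
print(u_combined(0.15, 0.14, 0.16))
```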
To verify the theoretical assumptions, a series of measurements showing the impact of particular factors on the uncertainty of the object position on the image sensor was performed. All the measurements were carried out for a fixed reproduction scale, for which the measuring range was ±12 cm in both axes, and the object measured was a 10 mm diameter round-shaped spot contrasting with the background. In total, for verification, 20 measurement series were performed, each of which contained 10,000 individual measurements. The tests were performed using a Basler 2D image camera, type acA 2040–180 kc, with the following basic technical data: sensor resolution: 2046 × 2046 px (4 Mpix); sensor size: 11.26 × 11.26 mm; dimension of a single pixel 5.5 × 5.5 μm; maximum registration speed: 180 fps.
In addition to the camera, several lenses were used; their focal lengths were selected according to need. In order to reduce the impact of lens distortion on the measurement results, only lenses with a high degree of correction were used, so that their distortions, especially barrel and pincushion distortion, were negligible and did not have a significant impact on the measurement results. In this way, the need for camera calibration, which is necessary in the presence of lens distortions, was avoided [51,52]. A colour image in the RGB standard was recorded. The red channel was then extracted from it (due to the low colour temperature of the light source, incandescent light), resulting in a grayscale image. Next, a colour-threshold procedure was performed, yielding a binary image. The image was then subjected to morphological transformations (erosion and dilation), as well as filtering, in order to remove small artefacts and objects whose position was not measured. The centre of mass was taken as the position of the object image on the image sensor. All the above-mentioned operations, i.e., image recording, processing, and analysis, were carried out using the LabVIEW software [53].
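The authors implemented this processing chain in LabVIEW; purely as an illustration, the NumPy sketch below reproduces the same sequence of steps (red-channel extraction, thresholding, a 3 × 3 morphological opening, and centre-of-mass computation) on a synthetic frame. The threshold value and the test image are arbitrary assumptions, not part of the original setup.

```python
import numpy as np

def centre_of_mass(rgb, threshold=128):
    """Reduce an RGB frame to its red channel, binarise it, apply a
    3x3 erosion followed by a 3x3 dilation (morphological opening),
    and return the centre of mass of the remaining object pixels."""
    red = rgb[..., 0]                       # grayscale from red channel
    binary = red > threshold                # colour-threshold step

    # 3x3 erosion/dilation expressed with shifted overlays of the image
    def shift_stack(img, combine):
        h, w = img.shape
        padded = np.pad(img, 1, constant_values=False)
        views = [padded[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        return combine(np.stack(views), axis=0)

    opened = shift_stack(shift_stack(binary, np.all), np.any)

    ys, xs = np.nonzero(opened)
    return xs.mean(), ys.mean()             # (x', y') in pixel coordinates

# Synthetic test frame: a bright red disc centred at pixel (30, 20)
yy, xx = np.mgrid[0:48, 0:64]
frame = np.zeros((48, 64, 3), dtype=np.uint8)
frame[(xx - 30) ** 2 + (yy - 20) ** 2 <= 8 ** 2, 0] = 255
print(centre_of_mass(frame))
```

Because the synthetic disc is symmetric, the opening preserves its centroid, so the returned position coincides with the disc centre.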
The view of the measuring stand is presented in Figure 3.

4.1. Uncertainty of the Camera Measurement Related to the Image Registration Parameters ureg(x′)

Uncertainty studies related to the image recording parameters ureg(x′) and the sensor parameters usns(x′), resulting from the intrinsic sensitivity of the sensor, were carried out for the distance k = 1543.06 ± 0.87 mm, which required the use of the Helios 44-2 2/58 lens with the focal length f = 54.41 ± 0.04 mm. The distance F was 60.827 ± 0.044 mm. The quantities k, f, and F were measured and determined according to Formulas (3) and (4), using a Bosch GLM 80 laser rangefinder and an FWP MADb 400 calliper. Measurement uncertainties for these quantities were determined in accordance with the principles presented in the standard [13].
The parameters of the recorded image can be adjusted, as in any image acquisition with a video or photo camera, by changing three settings: the exposure time, the lens aperture, or the sensor sensitivity. The exposure time of a single frame is dictated by the dynamics of the registered position changes, which is the overriding parameter determining the required camera speed. Hence, it is not possible to adjust the image acquisition parameters by lengthening or shortening the exposure time, or at best this possibility is very limited. Adjusting the lens aperture value does not affect the camera's operation and therefore does not affect the measurement accuracy. However, such an effect does occur when the image acquisition parameters are adjusted by changing the sensor sensitivity.
In order to estimate the effect of the image recording parameters on the measurement uncertainty of the camera, registrations were made for various settings of the frame exposure time and the lens aperture value, while maintaining a constant sensor sensitivity. These changes translated into different brightness levels of the recorded image. The registration started from the settings characteristic of the optimal image brightness level and proceeded toward both its increase and its decrease. Brightness level changes were expressed in relative EV units with respect to the optimal level. The EV scale is logarithmic; i.e., each increase in the brightness level by +1 EV means that twice as much light as before the change reaches the camera sensor. Naturally, a change in the EV value can be achieved by adjusting the exposure time, the lens aperture value, or both parameters simultaneously. The obtained measurement results are presented in Figure 4a. To improve the readability of the drawing, the measurements made for individual lighting levels are shown in different colours.
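Because the EV scale is logarithmic to base 2, the relative amount of light reaching the sensor after a brightness change of ΔEV is 2^ΔEV; a one-line illustration:

```python
def light_factor(delta_ev):
    """Relative exposure for a brightness change of delta_ev EV."""
    return 2.0 ** delta_ev

assert light_factor(1) == 2.0    # +1 EV doubles the light
assert light_factor(-2) == 0.25  # -2 EV gives one quarter of the light
```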
Figure 4b presents the uncertainty ureg(x′) for the measurement in the horizontal and vertical axis as a function of the brightness change of the recorded image, which is calculated as a standard deviation in accordance with the formula:
$$u_{\mathrm{reg}}(x') = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(x_i' - \bar{x}'\right)^2} \qquad (24)$$
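Formula (24) is the sample standard deviation with an n − 1 denominator (ddof=1 in NumPy terms); a minimal sketch on synthetic position readings (not measurement data):

```python
import numpy as np

def u_reg(positions):
    """Formula (24): standard uncertainty of the registered image
    position, as the sample standard deviation (n - 1 denominator)."""
    x = np.asarray(positions, dtype=float)
    return np.sqrt(np.sum((x - x.mean()) ** 2) / (x.size - 1))

samples = [4.001, 3.999, 4.002, 4.000, 3.998]  # synthetic values, in mm
assert np.isclose(u_reg(samples), np.std(samples, ddof=1))
```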
It can be observed that the obtained measurement results depend significantly on the change in the brightness of the recorded image resulting from the change of registration parameters. Changing the brightness level within the range from −1 to +2 EV in relation to the optimal brightness level increases the uncertainty ureg(x′) to a level that does not exceed twice the value obtained under optimal conditions. Such a deterioration in the quality of the obtained results can be considered acceptable in technical measurements. A further change in the brightness level causes a rapid, avalanche-like increase in the standard uncertainty ureg(x′), which is particularly visible for increasing brightness, where +2.2 EV is the limit above which correct interpretation of the recorded image is impossible. A change in the recording parameters toward decreasing brightness causes the uncertainty ureg(x′) to increase sooner, but with a smaller gradient. At the level of −2 EV, its value is almost seven times higher than at the optimum level, and a change in brightness below −2.5 EV results in such a dark image that measurement is impossible.
In order to minimise, as far as possible, the impact of changes in the recording parameters on the measurement result, they should be adjusted so that the brightness level of the recorded image is as close as possible to the optimal level, at which the lowest uncertainty ureg(x′) is obtained.

4.2. Uncertainty of Measuring the Object Image Position on Image Sensor, Resulting from Parameters of Camera and Optical System ucam(x′)

In order to determine the measurement uncertainty resulting from the camera parameters, ucam(x′), it is necessary to establish the influence of the sensor sensitivity level on the uncertainty usns(x′), as well as the influence of the lens focal length, at a constant reproduction scale, on the uncertainty of measuring the object image position on the image sensor, uf(x′).

4.2.1. Uncertainty of Measuring the Object Image Position on Camera Sensor due to Actual Sensor Sensitivity usns(x′)

Each image sensor is characterised by basic sensitivity. Its increase is obtained by amplifying the electrical signals from the photosensitive cells of the sensor. The higher the set sensitivity, the higher the required gain value. As the sensitivity increases, the image noise level also increases. Since noise is a random factor, which causes slight blurring of the image contour sharpness, it should be expected that due to the increase of sensor sensitivity, the uncertainty of measuring the object image position will also be increased.
To verify this, a series of experimental measurements were performed for the gradually increasing sensor sensitivity. All the other acquisition parameters were adjusted in such a way that the brightness of the recorded image was always the same. The measurement results from laboratory tests are shown in Figure 5.
The sensor sensitivity increase is presented in relative units as a multiple of the basic sensitivity hb. Analogously to Figure 4, registrations made at different sensor sensitivity levels are marked with different colours.
The uncertainty usns(x′) was determined as a standard deviation, analogously to Formula (24). The value of this uncertainty for both axes as a function of the sensor sensitivity level is shown in Figure 6.
Based on the characteristics presented in Figure 6, it can be observed that the uncertainty value usns(x′) slightly increases, together with the increase of the image sensor sensitivity. For the analysed case, the increase in horizontal axis can be described by the following dependence (result in [μm]):
$$u_{\mathrm{sns}}(x') = (0.00489 \pm 0.00072)\left(\frac{h}{h_b}\right) + (0.1349 \pm 0.0069) \qquad (25)$$
and analogously for the vertical axis:
$$u_{\mathrm{sns}}(y') = (0.0061 \pm 0.0013)\left(\frac{h}{h_b}\right) + (0.125 \pm 0.013) \qquad (26)$$
Between the basic sensor sensitivity level and the highest possible level of this sensitivity, which can be set for the considered camera type, the measurement uncertainty level usns(x′) increases by approximately 35%. It should be remembered that the presented results were obtained for a specific type of camera and lens. For another pair of devices, the results may be different. However, the experiment confirmed the supposition that the increase of measurement uncertainty usns(x′) due to the increase of image sensor sensitivity should be expected.
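The two linear fits above can be evaluated at their nominal coefficients (ignoring the quoted coefficient uncertainties). In the sketch below, the maximum relative sensitivity h/hb is an assumed value, chosen only to illustrate an increase of the order reported above, not a figure given in the paper.

```python
def u_sns_x(h_rel):
    """Fitted dependence for u_sns(x') in micrometres; h_rel = h / h_b."""
    return 0.00489 * h_rel + 0.1349

def u_sns_y(h_rel):
    """Fitted dependence for u_sns(y') in micrometres."""
    return 0.0061 * h_rel + 0.125

base = u_sns_x(1.0)    # uncertainty at the basic sensitivity
high = u_sns_x(11.0)   # assumed maximum relative sensitivity (placeholder)
print(f"relative increase: {high / base - 1:.0%}")
```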

4.2.2. Uncertainty of Measuring the Object Image Position on the Camera Sensor Resulting from the Lens Focal Length uf(x′)

The analyses presented above were carried out for a constant reproduction scale, using a lens with a relatively short focal length. However, vision measurements can be performed from different distances. In order to maintain a constant reproduction scale, the lens focal length must be appropriately adjusted. Such changes will clearly affect the uncertainty of measuring the object image position on the camera sensor, because micro-vibrations of the substructure and air movements, which are sources of stochastic deviation of the measurement result, influence the measuring system more strongly as the distance between the measured object and the camera increases. Increasing the distance makes it necessary to use a lens with a longer focal length, and as the focal length increases, the view angle of the lens decreases. With a smaller view angle, the occurring disturbances have a stronger impact on the momentary displacements of the object image on the image sensor.
To determine the influence of these changes, the experimental measurements were performed for the focal lengths of the optical system and the stand parameters presented in Table 1.
The exposure parameters were selected so that the brightness of the registered image was the same for all lenses. The laboratory experiments were performed for medium settings of image sensor sensitivity. The obtained measurement results are presented in Figure 7 and Figure 8. Analogously to Figure 4 and Figure 5, in Figure 7, the registrations made for each focal length of the lenses are shown in different colours.
The measurement results show that the measurement uncertainty increases several times as the lens focal length increases with a constant scale of reproduction. In the considered case, this increase in the horizontal axis can be described by the following formula (result in [μm] for the focal length f given in [mm]):
$$u_f(x') = (0.000576 \pm 0.000016)\, f + (0.1370 \pm 0.0064) \qquad (27)$$
and for the vertical axis respectively:
$$u_f(y') = (0.000782 \pm 0.000037)\, f + (0.129 \pm 0.015) \qquad (28)$$
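These two linear fits can likewise be evaluated at their nominal coefficients. Extrapolating them to the f ≈ 800 mm lens used later in Section 5 is only a rough, assumption-laden estimate, since the fits were obtained for the shorter focal lengths of Table 1.

```python
def u_f_x(f_mm):
    """Fitted dependence for u_f(x') in micrometres, focal length f in mm."""
    return 0.000576 * f_mm + 0.1370

def u_f_y(f_mm):
    """Fitted dependence for u_f(y') in micrometres."""
    return 0.000782 * f_mm + 0.129

# Rough extrapolation to a long lens (f ~ 800 mm)
print(u_f_x(800.0), u_f_y(800.0))
```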

4.3. Combined Standard Uncertainty u(x′) and u(y′)

The combined standard uncertainty in measuring the object image position on the camera sensor in the horizontal axis u(x′) and vertical axis u(y′), described by Formula (23), as a function of changes in the acquisition image brightness and lens focal length, for the considered case, takes the values as shown in Figure 9.
The influence of changing the image sensor sensitivity is also marked in Figure 9, but it is minimal compared to the impact of the other factors. The solid line indicates the measured dependencies, and the dashed line indicates the estimated ones. Analysing the presented results, it can be concluded that the dominant factor affecting the uncertainty level is the brightness of the recorded frame, which results from the registration parameters, i.e., the shutter speed and the lens aperture. To reduce the uncertainty level, the brightness of the registered image should be set as close as possible to the optimal level.
The second parameter, in terms of the impact on the measurement uncertainty level, is the lens focal length. However, it should be noted that this parameter results from the requirements related to the subject of measurement and the spatial configuration of the stand, so that most often, it is impossible to change it, or such a change is possible within a very limited range.
The change of the image sensor sensitivity has the smallest impact on the value of the combined standard uncertainty level. In comparison to other factors, it is essentially irrelevant, so that striving to reduce sensitivity at the expense of the brightness level of the registered image would be completely unjustified.

5. Influence of Optical System Parameters on Real Measurement Results, Using the Example of Vertical Displacement of an Overhead Power Line

To illustrate how the chosen optical system parameters influence real measurement results, measurements of the vertical movements of a high-voltage overhead power line conductor near the centre of the suspension span were performed, as shown in Figure 10.
The purpose of the measurement was only to show the influence of the selected parameter of the vision system on the measurement results. Therefore, the subject of measurement, i.e., the displacement of the high-voltage (HV) power line conductor, should be treated as an example. At the same time, the selected measurement object shows the possibilities of vision measurement systems, which allow for measuring the displacements of an object, which are practically unmeasurable in another way.
The measurements were performed from a relatively long distance (k ≈ 115 m), which required a lens with a very long focal length (f ≈ 800 mm). Assuming that typical laboratory measurement devices are used, the combined standard uncertainty calculated in accordance with Dependence (5) can be estimated at u(y) ≈ 0.8 mm. The exact values of the geometrical parameters are not important here, because the purpose of the measurement was to show the impact of changes in the registration parameters on the measurement results.
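The order of magnitude of this estimate can be reproduced from the imaging geometry alone: for a distant object, a displacement y′ on the sensor maps to y = y′ (k − F)/F, so a sensor-plane uncertainty scales into object space by the same factor. A minimal sketch, with an assumed (not the authors') sensor-plane uncertainty:

```python
# Sketch of projecting the sensor-plane uncertainty into object
# space for the field setup (k ~ 115 m, f ~ 800 mm). The
# sensor-plane uncertainty below is an illustrative assumption.
k = 115.0          # camera-to-object distance [m]
F = 0.8            # image distance, close to f when k >> f [m]

magnification = (k - F) / F          # object-side scale factor, ~143x
u_y_sensor = 5.6e-6                  # assumed sensor-plane uncertainty [m]
u_y_object = u_y_sensor * magnification
print(f"u(y) = {u_y_object * 1e3:.2f} mm")   # prints "u(y) = 0.80 mm"
```

A sensor-plane uncertainty of a few micrometres, magnified roughly 143 times by this geometry, lands at the sub-millimetre level quoted above.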
Measurements were performed for the optimal brightness level of the registered image and for a brightness 1.5 EV lower than the optimal one. The optimal image brightness was determined in the same way as in the laboratory measurements. The measurement results are presented in Figure 11.
The experimental results (from Figure 10) presented in Figure 11 confirm the analysis presented in this paper. When measurements are performed at an image brightness 1.5 EV lower than the optimal one, a significantly larger stochastic scatter of the obtained measurement results is visible, which is consistent with the information shown in Figure 3 and Figure 8. Consequently, reducing the illumination level increases the uncertainty two to three times relative to the optimal conditions.
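The "two to three times" figure corresponds to a Type A evaluation of each recorded series: the experimental standard deviation of the repeated position readings. A sketch with hypothetical readings (not the recorded field data):

```python
import statistics

def type_a_uncertainty(samples):
    """Type A standard uncertainty of a single observation:
    the experimental standard deviation of the series."""
    return statistics.stdev(samples)

# Hypothetical conductor-position readings [mm]:
optimal = [10.02, 9.98, 10.01, 9.99, 10.00]   # optimal brightness
under = [10.05, 9.96, 10.03, 9.95, 10.01]     # 1.5 EV below optimal

ratio = type_a_uncertainty(under) / type_a_uncertainty(optimal)
# For these assumed series, the ratio falls in the two-to-three-times
# range reported for the underexposed case.
```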

6. Summary and Conclusions

The theoretical analysis presented in the article, confirmed by a practical experiment carried out in laboratory conditions, showed that the uncertainty of a position measurement performed with a vision camera depends significantly on numerous factors resulting both from the hardware configuration of the measurement system and from its parameters, e.g., the camera settings.
Therefore, a fixed value of uncertainty cannot be given, as is the case with most conventional measuring instruments.
This justifies presenting in the article a methodology for determining the uncertainty of the object position on the image sensor.
It was also shown that uncertainty analysis for vision measurements is extremely important, as changes in camera and optical system parameters can change the achieved uncertainty by as much as an order of magnitude, which significantly affects the uncertainty of the final result of measurements carried out with vision techniques. When using these measurement methods, it is recommended to determine the expected level of uncertainty experimentally for each particular case.
The conducted experiments showed that, among the considered parameters, the image brightness level has the greatest impact on the measurement uncertainty, as shown in Figure 9. Compared with it, the impact of sensor sensitivity changes is practically insignificant. In turn, the focal length of the lens most often depends on the specifics of the measured object and, as a rule, cannot be chosen freely. Therefore, to ensure the lowest possible measurement uncertainty in a given case, it is first of all necessary to set the correct brightness of the recorded image by adjusting the lens aperture and the sensor sensitivity.
Determining the uncertainty in another way, e.g., by an analytical method, would be extremely troublesome, because it is impossible to designate the individual components of Formula (23) separately: the measurements are always performed for a specific sensor sensitivity, lens focal length, and set of image recording parameters.
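The experimental route recommended above can be sketched as recording a static series, taking the standard deviation of the mean as the standard uncertainty, and expanding it with a coverage factor; k = 2 (approximately 95% coverage) follows JCGM 100:2008. The series below is illustrative:

```python
import math
import statistics

def expanded_uncertainty(samples, coverage_k=2.0):
    """Standard deviation of the mean of a static series,
    expanded with a coverage factor (k = 2 for ~95% coverage,
    per JCGM 100:2008)."""
    u = statistics.stdev(samples) / math.sqrt(len(samples))
    return coverage_k * u

# Illustrative static series of position readings [mm]:
readings = [4.98, 5.03, 4.99, 5.02, 5.01, 4.97]
U = expanded_uncertainty(readings)
```

Repeating this for each combination of camera settings in use gives the case-specific uncertainty that, as argued above, cannot be quoted as a single fixed value.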
The analysis of the uncertainty of the object position in vision systems presented in the article was performed under the assumption that the camera is not affected by any external factors, such as changes in temperature, humidity, or additional vibrations. The analysis of the impact of these factors will be carried out as part of further research.

Author Contributions

Conceptualisation, J.S.; methodology, J.S., A.G.-J. and A.D.; software, J.S.; validation, A.G.-J. and A.D.; formal analysis, J.S.; writing—original draft preparation, J.S.; writing—review and editing, A.G.-J. and A.D.; visualisation, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

k: distance between the object plane and the image plane (image sensor);
F: distance between the plane of the lens optical centre and the image plane;
x: displacement distance of the object measured in the horizontal axis in relation to the optical axis of the lens;
y: displacement distance of the object measured in the vertical axis in relation to the optical axis of the lens;
x′: location of the image of the object measured in the horizontal axis;
y′: location of the image of the object measured in the vertical axis;
u(F): uncertainty of the determination of F;
u(x): uncertainty of the position measurement in the x-axis;
u(y): uncertainty of the position measurement in the y-axis;
Δx: difference between the result of a single measurement from the series and the averaged result of the entire static series of measurements in the x-axis;
Δy: difference between the result of a single measurement from the series and the averaged result of the entire static series of measurements in the y-axis;
f: focal length of the lens;
xw: dimension of the reference object located at the distance k from the image sensor [mm];
xw′: image dimension of the reference object on the image sensor [mm];
u(k): standard uncertainty in measuring the quantity k;
u(xw): standard uncertainty in measuring the quantity xw;
Δk: maximum limit error of the instruments used to determine the distance k;
Δxw: maximum limit error of the instruments used to determine the dimension xw;
npix: number of pixels [–];
lpix: dimension of a single pixel [μm];
Δxwe: uncertainty with which the experimenter is able to determine the image dimension xw′ of the reference object on the image sensor, equal to the dimension of a single pixel lpix;
u(x′): combined standard uncertainty of the measurement of the image position on the image sensor in the horizontal axis;
ureg(x′): component of camera measurement uncertainty related to image registration parameters;
ucam(x′): component of uncertainty associated with the camera characteristics and the optical system;
usns(x′): uncertainty of measuring the object image position on the camera image sensor due to its actual sensitivity;
uf(x′): uncertainty of measuring the object image position on the camera sensor related to the lens focal length;
h: actual image sensor sensitivity;
hb: basic image sensor sensitivity.
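The nomenclature distinguishes maximum limit errors (Δk, Δxw) from the corresponding standard uncertainties (u(k), u(xw)). The conventional conversion, assuming a rectangular distribution of the limit error per JCGM 100:2008, can be sketched as (the numeric value is an illustrative assumption):

```python
import math

def standard_from_limit_error(delta):
    """Convert a maximum limit error into a standard uncertainty,
    assuming a rectangular distribution: u = delta / sqrt(3),
    per JCGM 100:2008 (Type B evaluation)."""
    return delta / math.sqrt(3)

# Hypothetical limit error of the instrument used to measure k [mm]:
u_k = standard_from_limit_error(1.5)
```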

References

1. Costa, P.B.; Leta, F.R.; Baldner, F.D.O. Computer vision measurement system for standards calibration in XY plane with sub-micrometer accuracy. Int. J. Adv. Manuf. Technol. 2019, 105, 1531–1537.
2. Cui, J.; Min, C.; Bai, X.; Cui, J. An Improved Pose Estimation Method Based on Projection Vector with Noise Error Uncertainty. IEEE Photonics J. 2019, 11, 1–16.
3. Brosnan, T.; Sun, D.-W. Improving quality inspection of food products by computer vision—A review. J. Food Eng. 2004, 61, 3–16.
4. Srivastava, B.; Anvikar, A.R.; Ghosh, S.K.; Mishra, N.; Kumar, N.; Houri-Yafin, A.; Pollak, J.J.; Salpeter, S.J.; Valecha, N. Computer-vision-based technology for fast, accurate and cost effective diagnosis of malaria. Malar. J. 2015, 14, 1–6.
5. Vázquez, C.A.L.; Quintas, M.M.; Romera, M.M. Non-contact sensor for monitoring catenary-pantograph interaction. In Proceedings of the 2010 IEEE International Symposium on Industrial Electronics, Bari, Italy, 4–7 July 2010; pp. 482–487.
6. Karwowski, K.; Mizan, M.; Karkosiński, D. Monitoring of current collectors on the railway line. Transport 2016, 33, 177–185.
7. Choi, M.; Choi, J.; Park, J.; Chung, W.K. State estimation with delayed measurements considering uncertainty of time delay. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; IEEE: Piscataway, NJ, USA; pp. 3987–3992.
8. Li, F.; Li, Z.; Li, Q.; Wang, D. Calibration of Three CCD Camera Overhead Contact Line Measuring System. In Proceedings of the 2010 International Conference on Intelligent Computation Technology and Automation, Changsha, China, 11–12 May 2010; IEEE: Piscataway, NJ, USA; Volume 1, pp. 911–913.
9. Liu, Z.; Liu, W.; Han, Z. A High-Precision Detection Approach for Catenary Geometry Parameters of Electrical Railway. IEEE Trans. Instrum. Meas. 2017, 66, 1798–1808.
10. Judek, S.; Jarzebowicz, L. Analysis of Measurement Errors in Rail Vehicles’ Pantograph Inspection System. Elektron. Elektrotech. 2016, 22, 20–23.
11. Skibicki, J. The issue of uncertainty of visual measurement techniques for long distance measurements based on the example of applying electric traction elements in diagnostics and monitoring. Measurement 2018, 113, 10–21.
12. Skibicki, J. Robustness of contact-less optical method, used for measuring contact wire position in changeable lighting conditions. Tech. Gaz. 2017, 24, 1759–1768.
13. JCGM 100:2008. Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurement; JCGM: Sèvres, France, 2008.
14. JCGM 200:2012. International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM); JCGM: Sèvres, France, 2012.
15. Taylor, J. Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements, 2nd ed.; University Science Books: New York, NY, USA, 1997.
16. Warsza, Z. Methods of Extension Analysis of Measurement Uncertainty; PIAP: Warsaw, Poland, 2016; ISBN 978-83-61278-31-3. (In Polish)
17. Bartiromo, R.; De Vincenzi, M. Uncertainty in electrical measurements. In Electrical Measurements in the Laboratory Practice; Undergraduate Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 2016; ISBN 978-3-319-31100-5.
18. Dzwonkowski, A. Estimation of the uncertainty of the LEM CV 3-500 transducers conversion function. Przegląd Elektrotech. 2015, 1, 13–16.
19. Klonz, M.; Laiz, H.; Spiegel, T.; Bittel, P. AC-DC current transfer step-up and step-down calibration and uncertainty calculation. IEEE Trans. Instrum. Meas. 2002, 51, 1027–1034.
20. Olmeda, P.; Tiseira, A.; Dolz, V.; García-Cuevas, L. Uncertainties in power computations in a turbocharger test bench. Measurement 2015, 59, 363–371.
21. Dzwonkowski, A.; Swędrowski, L. Uncertainty analysis of measuring system for instantaneous power research. Metrol. Meas. Syst. 2012, 19, 573–582.
22. Carullo, A.; Castellana, A.; Vallan, A.; Ciocia, A.; Spertino, F. Uncertainty issues in the experimental assessment of degradation rate of power ratings in photovoltaic modules. Measurement 2017, 111, 432–440.
23. Araújo, A. Dual-band pyrometry for emissivity and temperature measurements of gray surfaces at ambient temperature: The effect of pyrometer and background temperature uncertainties. Measurement 2016, 94, 316–325.
24. Jaszczur, M.; Pyrda, L. Application of Laser Induced Fluorescence in experimental analysis of convection phenomena. J. Phys. Conf. Ser. 2016, 745, 032038.
25. Batagelj, V.; Bojkovski, J.; Ek, J.D. Methods of reducing the uncertainty of the self-heating correction of a standard platinum resistance thermometer in temperature measurements of the highest accuracy. Meas. Sci. Technol. 2003, 14, 2151–2158.
26. Sajben, M. Uncertainty estimates for pressure sensitive paint measurements. AIAA J. 1993, 31, 2105–2110.
27. Golijanek-Jędrzejczyk, A.; Mrowiec, A.; Hanus, R.; Zych, M.; Świsulski, D. Determination of the uncertainty of mass flow measurement using the orifice for different values of the Reynolds number. EPJ Web Conf. 2019, 213, 02022.
28. Zych, M.; Hanus, R.; Vlasák, P.; Jaszczur, M.; Petryka, L. Radiometric methods in the measurement of particle-laden flows. Powder Technol. 2017, 318, 491–500.
29. Roshani, G.; Hanus, R.; Khazaei, A.; Zych, M.; Nazemi, E.; Mosorov, V. Density and velocity determination for single-phase flow based on radiotracer technique and neural networks. Flow Meas. Instrum. 2018, 61, 9–14.
30. Christopoulos, V.N.; Schrater, P. Handling shape and contact location uncertainty in grasping two-dimensional planar objects. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 1557–1563.
31. Hall, E.M.; Guildenbecher, D.R.; Thurow, B.S. Uncertainty characterization of particle location from refocused plenoptic images. Opt. Express 2017, 25, 21801–21814.
32. Myasnikov, V.V.; Dmitriev, E.A. The accuracy dependency investigation of simultaneous localization and mapping on the errors from mobile device sensors. Comput. Opt. 2019, 43, 492–503.
33. Wereley, S.T.; Meinhart, C.D. Recent Advances in Micro-Particle Image Velocimetry. Annu. Rev. Fluid Mech. 2010, 42, 557–576.
34. Westerweel, J.; Elsinga, G.E.; Adrian, R.J. Particle Image Velocimetry for Complex and Turbulent Flows. Annu. Rev. Fluid Mech. 2013, 45, 409–436.
35. Bhattacharya, S.; Vlachos, P.P. Volumetric particle tracking velocimetry (PTV) uncertainty quantification. arXiv 2019, arXiv:1911.12495.
36. Wu, Z.; Lina, Y.; Zhang, G. Uncertainty analysis of object location in multi-source remote sensing imagery classification. Int. J. Remote Sens. 2009, 30, 5473–5487.
37. Zhao, X.; Stein, A.; Chen, X.; Zhang, X. Quantification of Extensional Uncertainty of Segmented Image Objects by Random Sets. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2548–2557.
38. Cai, L.; Shi, W.; Miao, Z.; Hao, M. Accuracy Assessment Measures for Object Extraction from Remote Sensing Images. Remote Sens. 2018, 10, 303.
39. De Nigris, D.; Collins, D.L.; Arbel, T. Multi-Modal Image Registration Based on Gradient Orientations of Minimal Uncertainty. IEEE Trans. Med. Imaging 2012, 31, 2343–2354.
40. Judek, S.; Skibicki, J. Visual method for detecting critical damage in railway contact strips. Meas. Sci. Technol. 2018, 29, 055102.
41. Cioban, V.; Prejmerean, V.; Culic, B.; Ghiran, O. Image calibration for color comparison. In Proceedings of the 2012 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 24–27 May 2012; pp. 332–336.
42. Li, M.; Haerken, H.; Guo, P.; Duan, F.; Yin, Q.; Zheng, X. Two-dimensional spectral image calibration based on feed-forward neural network. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 4194–4201.
43. Beauchamp, D.; Chugg, K.M. Machine Learning Based Image Calibration for a Twofold Time-Interleaved High Speed DAC. In Proceedings of the 2019 IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS), Dallas, TX, USA, 4–7 August 2019; pp. 908–912.
44. Sruthy, S.; Suresh Babu, S. Dewarping on camera document images. Int. J. Pure Appl. Math. 2018, 119, 1019–1044.
45. Dasgupta, T.; Das, N.; Nasipuri, M. Multistage Curvilinear Coordinate Transform Based Document Image Dewarping using a Novel Quality Estimator. arXiv 2020, arXiv:2003.06872.
46. Ramanna, V.; Bukhari, S.; Dengel, A. Document Image Dewarping using Deep Learning. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, Prague, Czech Republic, 19–21 February 2019; pp. 524–531.
47. Ulges, A.; Lampert, C.; Breuel, T. Document image dewarping using robust estimation of curled text lines. In Proceedings of the Eighth International Conference on Document Analysis and Recognition (ICDAR’05), Seoul, Korea, 31 August–1 September 2005; p. 1001.
48. Stamatopoulos, N.; Gatos, B.; Pratikakis, I. A Methodology for Document Image Dewarping Techniques Performance Evaluation. In Proceedings of the 2009 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 26–29 July 2009; pp. 956–960.
49. Molin, J.L.; Figliolia, T.; Sanni, K.; Doxas, I.; Andreou, A.; Etienne-Cummings, R. FPGA emulation of a spike-based, stochastic system for real-time image dewarping. In Proceedings of the IEEE Non-Volatile Memory System & Applications Symposium (NVMSA), Fort Collins, CO, USA, 2–5 August 2015; pp. 1–4; ISBN 9781467366885.
50. Skibicki, J. Visual Measurement Methods in Diagnostics of Overhead Contact Line; Wydawnictwo Politechniki Gdańskiej: Gdańsk, Poland, 2018; p. 226; ISBN 978-83-7348-746-8. (In Polish)
51. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
52. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
53. Relf, C.G. Image Acquisition and Processing with LabVIEW; CRC Press: Boca Raton, FL, USA, 2004; ISBN 0-8493-1480-1.
Figure 1. Scheme of the measurement stand (top view): k—distance between the object plane and the image plane (image sensor); F—distance between the plane of the optical centre of the lens and the image plane; x—displacement distance of the object measured in the horizontal axis in relation to the optical axis of the lens; x′—location of the image of the object measured in the horizontal axis.
Figure 2. The subject of the analysis.
Figure 3. View of the measuring stand.
Figure 4. Influence of the image registration parameters: (a) stochastic distribution of the obtained measurement result depending on the brightness level of the recorded image; (b) uncertainty for the measurement in the horizontal axis ureg(x′) and vertical axis ureg(y′) as a function of the brightness of the recorded image.
Figure 5. Stochastic distribution of the obtained results as a function of sensor sensitivity level.
Figure 6. Dependence of the measurement uncertainty usns on the actual sensor sensitivity: (a) in the horizontal axis usns(x′); (b) in the vertical axis usns(y′).
Figure 7. Stochastic deviation of measurement results as a function of lens focal length.
Figure 8. Standard measurement uncertainty as a function of lens focal length: (a) measurement results in the horizontal axis; (b) measurement results in the vertical axis.
Figure 9. Standard uncertainty of measuring the position of the object image on the camera sensor as a function of changes in the acquired image brightness and lens focal length: (a) measurement uncertainty in the horizontal axis x; (b) measurement uncertainty in the vertical axis y.
Figure 10. Measurement of the vertical movements of an HV overhead power line conductor—the principle of measurement.
Figure 11. Measurement results of vertical movements of an HV overhead power line conductor: (a) for the optimal brightness level; (b) for a brightness level 1.5 EV lower than the optimal.
Table 1. Technical data of the measurement devices used to check the influence of lens focal length and measurement distance on the uncertainty of measuring the position of the object image on the camera sensor.

| No. | Lens (Optical Set) | Focal Length f [mm], Declared by Producer | Focal Length f [mm], Measured | Distance k [mm] | Distance F [mm] |
|-----|--------------------|------|------------------|------------------|------------------|
| 1 | Lydith 3.5/30 | 30 | 32.044 ± 0.036 | 839.46 ± 0.87 | 33.370 ± 0.039 |
| 2 | Helios 44-2 2/58 | 58 | 58.41 ± 0.04 | 1534.06 ± 0.87 | 60.827 ± 0.044 |
| 3 | Jupiter 9 2/85 | 85 | 84.988 ± 0.046 | 2214.26 ± 0.87 | 88.53 ± 0.05 |
| 4 | Jupiter 37 3.5/135 | 135 | 134.89 ± 0.06 | 3527.26 ± 0.87 | 140.485 ± 0.066 |
| 5 | Telemegor 5.5/250 | 250 | 254.1 ± 0.1 | 6620.76 ± 0.87 | 264.70 ± 0.11 |
| 6 | MC Sonnar 4/300 | 300 | 297.57 ± 0.12 | 7809.86 ± 0.87 | 309.86 ± 0.13 |
| 7 | MC Sonnar 4/300 + K-6B 2x converter | 600 | 559.12 ± 0.22 | 14,549.46 ± 0.87 | 582.43 ± 0.23 |
| 8 | MC MTO-11 CA 10/1000 | 1000 | 933.84 ± 0.36 | 24,404.46 ± 0.87 | 972.61 ± 0.39 |
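The measured values in Table 1 can be cross-checked against the thin-lens equation, 1/f = 1/F + 1/(k − F), treating F as the lens-to-sensor distance and (k − F) as the lens-to-object distance. This is a consistency sketch under the thin-lens assumption, not the authors' calibration procedure; row 6 serves as the example:

```python
# Values from Table 1, row 6 (MC Sonnar 4/300), in millimetres:
F = 309.86       # lens optical centre to image plane
k = 7809.86      # object plane to image plane

f = 1.0 / (1.0 / F + 1.0 / (k - F))   # thin-lens focal length
# f evaluates to ~297.57 mm, matching the measured value in the table.
```

The same check reproduces the measured focal lengths of the other rows to within their stated uncertainties, which is why the measured f (rather than the producer's nominal value) is the relevant quantity for the uncertainty analysis.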
