Sensors · Article · Open Access · 9 October 2021

On-Site Calibration Method for Line-Structured Light Sensor-Based Railway Wheel Size Measurement System

1 Key Laboratory of Luminescence and Optical Information, Ministry of Education, Beijing Jiaotong University, Beijing 100044, China
2 Dongguan Nannar Electronic Technology Company Ltd., Dongguan 523050, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue State-of-the-Art Optical Sensors Technology in China

Abstract

Line-structured light has been widely used in the field of railway measurement owing to its strong anti-interference capability, fast scanning speed and high accuracy. Traditional calibration methods for line-structured light sensors suffer from long calibration times and complicated procedures, which makes them unsuitable for railway field applications. In this paper, a fast calibration method based on a self-developed calibration device is proposed. Compared with traditional methods, the calibration process is simplified and the calibration time is greatly shortened. The method does not need to extract light stripes; thus, the influence of ambient light on the measurement is reduced. In addition, the calibration error resulting from laser-plane misalignment is corrected by an epipolar constraint, improving the calibration accuracy. Calibration experiments in the laboratory and field tests were conducted to verify the effectiveness of this method, and the results show that the proposed method achieves better calibration accuracy than a traditional calibration method based on Zhang’s method.

1. Introduction

In recent years, line-structured light vision sensors have been widely used in dynamic railway wheel size measurement systems [1,2,3,4,5,6]. For example, a high-accuracy line-structured light sensor-based wheel size measurement system was introduced in our previous work [7]. A line-structured light vision sensor is generally composed of a camera and a line laser projector. In the application of railway wheel size measurement, owing to the restricted viewing angle, at least two sensors whose laser planes are coincident must be combined to obtain the whole wheel tread profile. Calibration is one of the crucial phases in reconstructing wheel parameters from the acquired 2D laser stripes, and it is vital to the accuracy of the system. Generally, the calibration parameters of a line-structured light vision sensor consist of the intrinsic parameters of the camera and the light plane parameters. The calibration of camera intrinsic parameters has been widely studied [8,9,10,11,12,13]; thus, this paper mainly focuses on the calibration of the light plane parameters.
Xie [14] used a planar target with grid lines to calibrate the intrinsic and light plane parameters simultaneously. During the calibration process, the intersection points between the grid lines of the planar target and the laser lines are extracted as calibration points. Liu [15] adopted a ball target with high roundness to calibrate the laser plane: first, the spatial cone equation and the sphere equation of the ball target are solved; then, the light plane equation is obtained by nonlinear optimization. Huynh [16] created a V-shaped 3D target for laser plane calibration. The sensor is mounted on an AGV to scan the target, and the position of the sensor relative to the world coordinate frame is known. According to cross-ratio invariance, the laser plane equation can be solved by combining the 3D coordinates of points on the target. Xu [17] employed a flat board target with four balls. The orientation of the board plane is first solved from the four balls, and then the intersection line between the board plane and the laser plane is obtained; the laser plane equation is fitted from these intersection lines. Xie [18] similarly utilized a flat board target with a squares pattern and solved the orientation of the board plane from the corner points; differently, the angle between the board plane and the laser plane is computed using an additional raised block on the board target. Wei [19] proposed a method based on a 1D target. The feature points of the target are calculated in the camera coordinate frame using the known distance constraint of the target pattern; then, nonlinear optimization is used to solve for the plane feature points, and the light plane equation can be fitted.
The above methods have achieved good results in laboratory environments, but they are not suitable for railway field applications. The calibration of an on-site railway wheel size measurement system has the following characteristics: (1) the available calibration time is limited because of busy railway operations; (2) the calibration accuracy is influenced by the strong natural light in the outdoor environment; (3) the depth of field of the vision sensors is short, making it difficult to image calibration markers placed at different locations. To achieve fast, high-accuracy on-site calibration of a wheel size measurement system, a new calibration method is demonstrated in this paper that solves the above issues in field calibration. The method shortens the calibration time, overcomes the problem caused by the short depth of field, and does not need to extract laser lines, avoiding the influence of natural light. To realize the calibration method, a specific calibration device was developed. During calibration, the device is mounted on the rail, and the calibration board plane is manually adjusted to coincide with the light plane. Then, the pixel coordinates of the corner points are extracted, and the fitting equations between image coordinates and laser plane coordinates are calculated. Finally, a calibration revising method based on the epipolar constraint is employed to reduce the calibration error and improve the data fusion result.
In Section 2, the calibration device and the principle of the proposed calibration method are introduced. In Section 3, a corner extraction method for calculating the calibration parameters is proposed, and the calibration errors caused by the extraction process are analyzed. Then, the revising method of calibration parameters is described in Section 4. The epipolar constraint is used to find matching points, laying a foundation for establishing constraint equations in calibration parameters calculation. In Section 5, the results of the physical experiment are presented, and the calibration accuracy is evaluated by comparison. Finally, conclusions are drawn in Section 6.

2. Calibration Principle

Figure 1 illustrates the calibration setup of our method. Sensor 1 and sensor 2 are both line-structured light vision sensors; their two laser planes are carefully adjusted to be coincident so that they measure the wheel tread size together. This on-site wheel tread size measurement system was demonstrated in our previous work [7]. The system can reach 0.11 mm theoretical measurement accuracy at the designed 300 mm working distance. The maximum frame rate of the camera is 20 fps, which is enough to meet the requirement of dynamic measurement at speeds up to 48 km/h. When the train passes, the photoelectric switch triggers the camera to grab images. The images are then transmitted to computers and processed to extract the laser stripes. The purpose of the calibration is to establish a criterion for transforming the laser stripes into three-dimensional reconstructed profiles.
Figure 1. Schematic of the calibration process.
The calibration device is composed of a magnetic holder, a calibration board and an adjustable bracket made up of multiple cardan joints. The adjustable bracket allows the calibration board to be moved and rotated in space and fixed once the adjustment is finished. During calibration, the magnetic holder is fixed on the rail and the plane of the calibration plate is adjusted to coincide with the light plane by means of the adjustable bracket. In the experiment, laser light covering the whole board can be taken as a sign that the coincidence of the two planes meets the requirement. The calibration method only needs a single shot of the calibration plate: the camera captures the image of the calibration pattern, and the corner points in the pattern are extracted.
The schematic of the line-structured light vision sensor is exhibited in Figure 2. In this figure, $O_w x_w y_w z_w$ represents the world coordinate frame (WCF), $O_c x_c y_c z_c$ indicates the camera coordinate frame (CCF), and $O_u x_u y_u$ refers to the image coordinate frame (ICF). Assume that an arbitrary point $P_w = [x_w, y_w, z_w, 1]^T$ in the WCF has a projection $P_u = [x_u, y_u, 1]^T$ in the ICF. According to the camera imaging model, and disregarding distortion, this can be expressed as:
$$s\, P_u = A\, [R \;\; t]\, P_w$$
where $s$ denotes the scale factor, $A$ is the camera intrinsic parameter matrix, and $R$ and $t$ refer to the rotation matrix and translation vector from the WCF to the CCF, respectively. This equation realizes the transformation from the WCF to the ICF. To reconstruct the 3D profile of the measured object, it is combined with the light plane equation so that coordinates can be transformed from the ICF back to the WCF, that is:
$$\begin{cases} s\, P_u = A\, [R \;\; t]\, P_w \\ a x_w + b y_w + c z_w + d = 0 \end{cases}$$
Figure 2. Schematic of the line-structured light vision sensor.
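To make the back-projection concrete, the sketch below intersects the viewing ray of an undistorted pixel with the light plane. It is a minimal illustration under our own assumptions: the plane coefficients $(a, b, c, d)$ are taken to be expressed in the camera coordinate frame, and the function name and variables are illustrative rather than from the paper.

```python
import numpy as np

def reconstruct_on_light_plane(pu, A, plane):
    """Back-project an undistorted pixel onto the light plane.

    pu:    (x_u, y_u) pixel coordinates of a laser-stripe point
    A:     3x3 camera intrinsic matrix
    plane: (a, b, c, d) light-plane coefficients in the camera frame,
           i.e. a*x + b*y + c*z + d = 0 (an assumed convention)
    Returns the 3D point in the camera coordinate frame.
    """
    ray = np.linalg.inv(A) @ np.array([pu[0], pu[1], 1.0])  # viewing-ray direction
    n, d = np.asarray(plane[:3], dtype=float), float(plane[3])
    s = -d / (n @ ray)  # scale at which the ray meets the plane
    return s * ray      # P_c = s * A^{-1} * [x_u, y_u, 1]^T
```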
When line-structured light vision sensors are applied to measure object size, the choice of WCF is arbitrary. The $O_w x_w y_w$ plane of the WCF can therefore be set to the light plane $\pi$; thus, when reconstructing the 3D profile, $z_w = 0$. The functional relationship between $(x_w, y_w)$ and $(x_u, y_u)$ can then be simply expressed as $x_w \sim (x_u, y_u)$, $y_w \sim (x_u, y_u)$, which can be obtained by fitting. In this paper, the selected polynomial basis is:
$$\begin{cases} x_w = \sum_{i=0}^{m} \sum_{j=0}^{i} c_{ij}\, x_u^{\,j}\, y_u^{\,i-j} \\ y_w = \sum_{i=0}^{m} \sum_{j=0}^{i} q_{ij}\, x_u^{\,j}\, y_u^{\,i-j} \end{cases}$$
where $m$ indicates the highest power of the polynomial. In our proposed calibration method, the calibration board plane is adjusted to coincide with the light plane. Therefore, a set of $(x_{w,i}, y_{w,i})$ and $(x_{u,i}, y_{u,i})$ pairs used for fitting can be obtained from the manufacturing dimensions of the calibration board and the extracted corner points. The polynomial coefficients are acquired based on the least-squares principle [20]:
$$t = (V^T V)^{-1} V^T L$$
where $V$ represents the Vandermonde-style design matrix whose columns are the monomials of Equation (3) evaluated at the extracted corner points:
$$V_{ij} = x_{u,i}^{\,k_1}\, y_{u,i}^{\,k_2}, \qquad k_1 + k_2 \leq m$$
where $(k_1, k_2)$ are the exponents of the monomial assigned to column $j$, $L$ represents the vector $[x_{w,i}]^T$ or $[y_{w,i}]^T$, and $t$ denotes the corresponding vector of polynomial coefficients.
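A minimal sketch of this fit is shown below; the monomial ordering $x_u^j y_u^{i-j}$ follows our reading of Equation (3), and `np.linalg.lstsq` is used instead of forming $(V^T V)^{-1} V^T L$ explicitly, which is equivalent in exact arithmetic but numerically better conditioned.

```python
import numpy as np

def fit_plane_mapping(xu, yu, xw, yw, m=3):
    """Least-squares fit of the polynomial mapping in Equation (3).

    xu, yu: pixel coordinates of the extracted corner points
    xw, yw: board coordinates of the same corners (board plane = light plane)
    m:      highest power of the polynomial (the default m = 3 is an assumption)
    Returns coefficient vectors c (for x_w) and q (for y_w).
    """
    xu, yu = np.asarray(xu, dtype=float), np.asarray(yu, dtype=float)
    # One column of V per monomial x_u^j * y_u^(i-j), i = 0..m, j = 0..i
    V = np.column_stack([xu**j * yu**(i - j)
                         for i in range(m + 1) for j in range(i + 1)])
    c, *_ = np.linalg.lstsq(V, np.asarray(xw, dtype=float), rcond=None)
    q, *_ = np.linalg.lstsq(V, np.asarray(yw, dtype=float), rcond=None)
    return c, q
```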

3. Corner Extraction and Influence of Image Noise

The Harris corner detection algorithm is widely used to detect corner points in images. Its basic idea is to slide a fixed window over the image and compare the change of gray values in the window before and after sliding: if there is a large gray change when sliding in any direction, there is a corner point in the window. Here, the Harris corner detection algorithm is employed to obtain preliminary rough image coordinates $(x_{u0}, y_{u0})$ of the corner points, as presented in Figure 3. The precise image coordinates of the corner points are then obtained by the following iterative process [21]:
$$\begin{pmatrix} x_{u,i+1} \\ y_{u,i+1} \end{pmatrix} = \frac{\begin{pmatrix} \sum_w g_{yy} & -\sum_w g_{xy} \\ -\sum_w g_{xy} & \sum_w g_{xx} \end{pmatrix} \begin{pmatrix} \sum_w (g_{xx} x_u + g_{xy} y_u) \\ \sum_w (g_{xy} x_u + g_{yy} y_u) \end{pmatrix}}{\begin{vmatrix} \sum_w g_{yy} & \sum_w g_{xy} \\ \sum_w g_{xy} & \sum_w g_{xx} \end{vmatrix}}$$
$$g_{xx}(x_u, y_u) = g_x^2\, \omega(x_u, y_u), \quad g_{xy}(x_u, y_u) = g_x g_y\, \omega(x_u, y_u), \quad g_{yy}(x_u, y_u) = g_y^2\, \omega(x_u, y_u)$$
where $w$ represents the detection window centered at $(x_{u,i}, y_{u,i})$, $g_x(x_u, y_u)$ and $g_y(x_u, y_u)$ indicate the gray gradients along the $x_u$ and $y_u$ directions, respectively, and $\omega(x_u, y_u)$ denotes the two-dimensional Gaussian distribution function:
$$\omega(x_u, y_u) = e^{-\frac{(x_u - x_{u,i})^2 + (y_u - y_{u,i})^2}{2\sigma^2}}$$
Figure 3. Corner points detected by Harris corner detection algorithm.
In most cases, an iterative accuracy of 0.005 pixels can be achieved after two or three iterations. The iterative process is shown in Figure 4.
Figure 4. The iterative process of extracting corner points starting from the result of the Harris detection algorithm (blue point).
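For readers who want to reproduce this step, OpenCV ships both stages: Harris detection for the rough coordinates and a gradient-based iterative refinement similar in spirit to Equations (5)–(7). The file name and thresholds below are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("calibration_board.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Stage 1: rough corner coordinates from the Harris detector
harris = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(harris > 0.01 * harris.max())  # illustrative threshold
corners = np.float32(np.column_stack([xs, ys])).reshape(-1, 1, 2)

# Stage 2: iterative sub-pixel refinement in a local window; the stopping
# tolerance of 0.005 pixels matches the accuracy quoted in the text
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.005)
refined = cv2.cornerSubPix(img, corners, winSize=(5, 5),
                           zeroZone=(-1, -1), criteria=criteria)
```

In practice, the raw Harris response contains several pixels per corner, so a non-maximum suppression or clustering step would precede the refinement.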
A standard calibration plate pattern image was generated by a computer program to estimate the accuracy of the corner extraction. Gaussian noise was added to the standard image at noise levels varying from 0 to 40 dB at intervals of 0.1 dB. For each noise level, 50 experiments were conducted, and the extraction error was computed, as shown in Figure 5. It can be seen that the extraction accuracy increases as the noise decreases.
Figure 5. The mean extraction error of corner points at different noise levels.
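A sketch of the noise injection used in this simulation, assuming the dB figures denote the signal-to-noise ratio of the image:

```python
import numpy as np

def add_gaussian_noise(img, snr_db, rng=None):
    """Add white Gaussian noise at a given SNR in dB (assumed interpretation)."""
    rng = rng if rng is not None else np.random.default_rng()
    img = img.astype(float)
    noise_power = np.mean(img**2) / 10.0 ** (snr_db / 10.0)  # P_noise = P_signal / SNR
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)
```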
For a real calibration image, there is an inevitable gradual transition at the black-and-white boundaries due to manufacturing reasons. Therefore, the noise level in corner areas is relatively higher than in homogeneous areas. To estimate the extraction accuracy of the corner points, small areas around the corner points were cropped from the simulated calibration image and the real calibration image, as shown in Figure 6. According to previous studies of image noise estimation [22,23,24], the noise level of the acquired real calibration image in the corner areas is equal to that of the simulated calibration image with 35 dB Gaussian noise. Therefore, the extraction accuracy of corner points for our setup is about 0.2 pixels. The calibration error caused by this image noise was simulated, as shown in Figure 7. In the simulation, the calibration plate size, the square size, and the image size were set to 100 × 100 mm, 5 × 5 mm, and 1000 × 1000 pixels, respectively. The extraction error of 0.2 pixels was applied in random directions, and the mean calibration error was then calculated.
Figure 6. Comparison between the simulated image and the real image (local region near the corner point). (a) Simulated image (at 35 dB noise level); (b) real image.
Figure 7. Calibration error caused by the image noise (35 dB).
In this experiment, the calibration error caused by the corner extraction error is small in the plate area ($x_u$ and $y_u$ directions within the 0–1000 pixel range) and increases rapidly outside this area. Therefore, the calibration plate should cover the whole measurement range of the sensor to obtain higher calibration accuracy.
Considering that the image noise level varies with the camera parameters, shutter speed and amount of ambient light, the extraction error also varies in different application environments. Extra simulation experiments were conducted, and the average calibration error in the plate area caused by different extraction errors was calculated. As shown in Figure 8, when the extraction error rises to 0.5 pixels, which only occurs at extreme image noise levels, the average calibration error in the plate area is 0.025 mm. In this case, the calibration setup should be adjusted to obtain a lower image noise level.
Figure 8. Mean calibration error in the plate area for different extraction errors.
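This simulation can be reproduced along the following lines, reusing `fit_plane_mapping` from the sketch in Section 2; the regular grid layout and trial count are our assumptions.

```python
import numpy as np

def mean_calibration_error(extraction_err_px, m=3, trials=50,
                           board_mm=100.0, square_mm=5.0, img_px=1000.0):
    """Monte Carlo estimate of the mean in-plate calibration error caused
    by a given corner-extraction error (a sketch under assumed geometry)."""
    k = int(board_mm / square_mm) + 1        # corners per side
    gb = np.linspace(0.0, board_mm, k)       # board coordinates (mm)
    gp = np.linspace(0.0, img_px, k)         # ideal pixel coordinates
    xw, yw = [a.ravel() for a in np.meshgrid(gb, gb)]
    xu, yu = [a.ravel() for a in np.meshgrid(gp, gp)]
    errs = []
    for _ in range(trials):
        theta = np.random.uniform(0.0, 2 * np.pi, xu.size)  # random error directions
        c, q = fit_plane_mapping(xu + extraction_err_px * np.cos(theta),
                                 yu + extraction_err_px * np.sin(theta),
                                 xw, yw, m)
        V = np.column_stack([xu**j * yu**(i - j)
                             for i in range(m + 1) for j in range(i + 1)])
        errs.append(np.mean(np.hypot(V @ c - xw, V @ q - yw)))
    return float(np.mean(errs))
```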

4. Calibration Revising Based on Epipolar Constraint

When the measured object is a train wheel, at least two line-structured light vision sensors with coplanar laser planes have to be combined because of the restricted viewing angle. In practice, it is difficult to adjust the laser planes to be completely coplanar, and there is always a small angle between them. Thus, the calibration plane cannot be adjusted to be coplanar with both laser planes, leading to a certain calibration error. This calibration error causes misalignment of the reconstructed sections, which creates problems for further calculation.
In order to decrease these calibration errors, an epipolar constraint-based revising method is employed. First, the matching points of the two acquired images are found by the epipolar constraint. Then, additional constraint equations based on the matching points are added to the calculation of the calibration parameters. When the same point is projected onto two images with different viewing angles, the projection model imposes a constraint between the image points and the camera optical centers. As shown in Figure 9, the line $O_1 O_2$ connecting the optical centers of the two cameras is called the baseline, the intersection points of the baseline with the image planes ($e_1$ and $e_2$) are called the epipoles, and the plane $O_1 O_2 P$ is called the epipolar plane. If the projections of $P$ on image 1 and image 2 are denoted as $P_1$ and $P_2$, the projection point $P_2$ must lie on the intersection line $e_2 P_2$ of the epipolar plane $O_1 O_2 P$ and the image 2 plane. The intersection line $e_2 P_2$ is called the epipolar line.
Figure 9. Geometric model of the epipolar constraint.
The epipolar constraint can be expressed as:
$$p_k^{\,T}\, F\, p_k' = 0 \quad (k = 1, 2, \ldots, n)$$
where $p_k = (x_{u,k}, y_{u,k}, 1)^T$ and $p_k' = (x_{u,k}', y_{u,k}', 1)^T$ indicate the projections on image 1 and image 2 of the same point. The fundamental matrix $F$ can be solved by the least-squares principle using the corner points extracted in the calibration process. For a point $p'$ on image 2, the epipolar line $L_1$ in image 1 can be expressed as:
$$L_1 = F\, p'$$
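A sketch of this step with OpenCV: the fundamental matrix is estimated from the shared board corners, and each stripe point on image 2 is mapped to its epipolar line on image 1. Note that OpenCV adopts the convention $p_2^T F p_1 = 0$, so lines in image 1 for points given in image 2 are requested with `whichImage=2`.

```python
import cv2
import numpy as np

def epipolar_lines_in_image1(corners1, corners2, stripe_pts2):
    """Estimate F from corner correspondences (>= 8 pairs) and return the
    epipolar lines in image 1 for stripe points observed in image 2."""
    F, _ = cv2.findFundamentalMat(np.float32(corners1), np.float32(corners2),
                                  cv2.FM_LMEDS)
    lines1 = cv2.computeCorrespondEpilines(
        np.float32(stripe_pts2).reshape(-1, 1, 2), 2, F)
    return F, lines1.reshape(-1, 3)  # each line (a, b, c): a*x + b*y + c = 0
```

Intersecting each returned line with the extracted laser stripe on image 1 then yields the matching point pairs used below.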
For an object captured by both line-structured light vision sensors, the point $p_k$ on image 1 corresponding to the point $p_k'$ on image 2 must be the intersection point of the laser stripe on image 1 and the epipolar line $L_1$, which makes it possible to find matching points. In the experiment, the captured object is the train wheel. Based on these matching points, constraint equations are introduced into Equation (3), which can be expressed as:
$$\begin{cases} \sum_{i=0}^{m} \sum_{j=0}^{i} c_{ij}\, x_{u,k}^{\,j}\, y_{u,k}^{\,i-j} = \sum_{i=0}^{m} \sum_{j=0}^{i} c_{ij}'\, x_{u,k}'^{\,j}\, y_{u,k}'^{\,i-j} \\ \sum_{i=0}^{m} \sum_{j=0}^{i} q_{ij}\, x_{u,k}^{\,j}\, y_{u,k}^{\,i-j} = \sum_{i=0}^{m} \sum_{j=0}^{i} q_{ij}'\, x_{u,k}'^{\,j}\, y_{u,k}'^{\,i-j} \end{cases}$$
where $(c_{ij}, q_{ij})$ and $(c_{ij}', q_{ij}')$ are the polynomial coefficients of sensor 1 and sensor 2, respectively.
The matching points found according to the epipolar constraint are exhibited in Figure 10a together with the corresponding epipolar lines. The results of the calibration revising process are presented in Figure 10b. Since the matching points are introduced into the calculation of the calibration parameters, the corresponding parts of the reconstructed profiles become coincident. After choosing a sufficient number of well-distributed matching points, the reconstructed profiles of the two sensors coincide, making further calculation more accurate.
Figure 10. Calibration revising results based on different amounts of matching points. (a) Matching points of two images (matching points are connected by yellow lines and green lines are epipolar lines). (b) Reconstructed profiles after calibration parameters being revised according to the matching points.
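One way to realize this revision is to solve both sensors' coefficient vectors in a single stacked least-squares problem: the board corners supply the data terms and the matched wheel points supply the coincidence constraints of Equation (9). The paper does not specify the solver or the constraint weighting, so the sketch below is one possible implementation; only the $x_w$ polynomials are shown, and $y_w$ is handled analogously.

```python
import numpy as np

def basis(xu, yu, m=3):
    """Monomial row [x_u^j * y_u^(i-j)], matching our reading of Equation (3)."""
    return np.array([xu**j * yu**(i - j)
                     for i in range(m + 1) for j in range(i + 1)])

def joint_fit_xw(board1, board2, matches, m=3, w=10.0):
    """Jointly fit the x_w polynomials of two sensors.

    board1, board2: lists of (xu, yu, xw) corner observations per sensor
    matches:        list of ((xu1, yu1), (xu2, yu2)) matched stripe points
    w:              weight of the coincidence constraints (assumed value)
    Returns the coefficient vectors of sensor 1 and sensor 2.
    """
    n = (m + 1) * (m + 2) // 2  # number of coefficients per sensor
    rows, rhs = [], []
    for xu, yu, xw in board1:   # sensor-1 data terms
        rows.append(np.concatenate([basis(xu, yu, m), np.zeros(n)]))
        rhs.append(xw)
    for xu, yu, xw in board2:   # sensor-2 data terms
        rows.append(np.concatenate([np.zeros(n), basis(xu, yu, m)]))
        rhs.append(xw)
    for (xu1, yu1), (xu2, yu2) in matches:  # Equation (9): reconstructions agree
        rows.append(w * np.concatenate([basis(xu1, yu1, m),
                                        -basis(xu2, yu2, m)]))
        rhs.append(0.0)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:n], sol[n:]
```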

5. Physical Experiment

The line-structured light vision sensor-based wheel size measurement system was introduced in our previous paper [7]. In the experiment, the wheel size measurement system was calibrated by the proposed calibration method and by a comparison method.
The calibration arrangement is shown in Figure 11. The two laser planes are carefully adjusted to make them as coplanar as possible. The pixel size of the camera is 4.4 × 4.4 μm, the image resolution is 1236 × 1626 pixels, and the lens focal length is 16 mm. The cameras have a field of view of 180 × 135 mm at a working distance of 300 mm. The size of the calibration plate is 160 × 60 mm, the square size is 5 × 5 mm, and the manufacturing precision is 0.003 mm. Furthermore, our proposed method is compared with a method based on Zhang’s method [25] to verify its effectiveness.
Figure 11. The arrangement of calibration. (1) Camera window; (2) laser window; (3) calibration board; (4) photoelectric switch.
In the first experiment, the line-structured light vision sensor is calibrated by our proposed method. The plane of the calibration plate is adjusted to coincide with the light plane, and the calibration pattern is adjusted to cover the measuring area, so as to reduce the calibration error caused by the corner extraction error. The calibration polynomial coefficients before and after epipolar constraint revising are displayed in Table 1.
Table 1. Calibration polynomial coefficients of the proposed method.
In the second experiment, the calibration plate is placed at 12 different locations and orientations. The cameras grab two images each time: one shot under natural light and the other under laser light. The intrinsic parameters of the camera are solved by Zhang’s method using the images under natural light, and the extrinsic parameters (representing the locations and orientations of the calibration plate) are also calculated. The laser stripes in the images are extracted, and the coordinates of the laser stripes in the CCF are solved according to the extrinsic parameters. Then, the laser plane equation in the CCF is obtained by fitting a plane to these coordinates, and the calibration is complete. The images used for the calibration are displayed in Figure 12. The extrinsic parameters and the fitted laser plane are illustrated in Figure 13. The calibration results are exhibited in Table 2.
Figure 12. Images used for calibration in the second experiment.
Figure 13. The extrinsic parameters and the fitted laser plane (red) in the second experiment.
Table 2. Calibration results of the second experiment.
Furthermore, a planar target with horizontal grid lines is adopted to compare the two calibration methods. The target is placed in the measuring region of the line-structured light vision sensor three times with different orientations. The coordinates of the intersection points between the laser stripe and the grid lines are extracted from the images, and these coordinates are transformed to the CCF or WCF by the two calibration methods separately. Then, the distances between the intersection points and the angle between the laser stripe and the grid lines are calculated. Additionally, the widths of the grid lines on the planar target are calculated as $w_m$. The fabricated widths of the grid lines, with a precision of 0.01 mm, are regarded as the ideal widths $w_i$. In this experiment, three pairs of intersection points on the planar target are selected each time. The comparison of $w_m$ and $w_i$ is displayed in Table 3.
Table 3. Analysis of calibration accuracy (mm).
The calibration accuracy of the proposed method before epipolar constraint revising is approximately 0.052 and 0.057 mm over a measurement range of 150 × 50 mm on camera 1 and camera 2, respectively. After epipolar constraint revising, the calibration accuracy of the proposed method improves to 0.034 and 0.033 mm. Moreover, the calibration accuracy of the compared method in experiment 2 is 0.048 mm. Inspection of the images used reveals that the calibration accuracy of the compared method is lower than that of the proposed method because of the image blur caused by the short depth of field.
To verify the reproducibility of our method, the calibration device was removed and then reinstalled four times. Each time, the calibration parameters were recalculated and revised by the epipolar constraint. The relative errors compared with experiment 1 at different pixel coordinates are shown in Figure 14. The maximal relative error of the four measurements is 0.008 mm; that is, the repeatability error is within 0.016 mm.
Figure 14. The repeatability experiment results. (a–d) represent the relative errors of the four experiments compared with experiment 1 at different pixel coordinates.

6. Conclusions

The coordinates of the calibration plate can represent the coordinates of the laser plane when the calibration plate plane coincides with the laser plane. Based on this feature, a fast line-structured light vision sensor calibration method is proposed in this paper. In addition, the calibration error is revised based on the epipolar constraint to improve the accuracy of calibration. The basic principle and the implementation of the proposed method are described in detail. Then, the proposed method is validated by experiments.
The advantages of the proposed method are as follows. (1) The proposed method is easy to perform and time-saving, making it suitable for line-structured light vision sensors used in special environments that are difficult to access for maintenance, such as railway sites. (2) The proposed method does not need to extract laser lines from images and can adapt to outdoor environments under strong natural light. (3) The proposed method avoids the image blur caused by the short depth of field, since one image at the working distance is enough to accomplish the calibration.

Author Contributions

Conceptualization, Y.R., Q.H. and Q.F.; methodology, Y.R., Q.F. and J.C.; software, Y.R.; validation, Y.R., Q.H. and J.C.; data curation, Y.R.; writing—original draft preparation, Y.R.; writing—review and editing, Q.H. and Q.F.; supervision, Q.F.; project administration, Q.F.; funding acquisition, Q.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (no. 51935002) and the introduced innovative R&D team of Dongguan: “Train wheelset geometric parameters intelligent testing and entire life-cycle management system development and industrial application innovative research team” (no. 201536000600028).

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bernal, E.J.; Martinod, R.M.; Betancur, G.R.; Castañeda, L.F. Partial-profilogram reconstruction method to measure the geometric parameters of wheels in dynamic condition. Veh. Syst. Dyn. 2016, 54, 606–616.
  2. Chen, X.; Sun, J.; Liu, Z.; Zhang, G. Dynamic tread wear measurement method for train wheels against vibrations. Appl. Opt. 2015, 54, 5270–5280.
  3. Cheng, X.; Chen, Y.; Xing, Z.; Li, Y.; Qin, Y.; Morales, R. A Novel Online Detection System for Wheelset Size in Railway Transportation. J. Sens. 2016, 2016, 9507213.
  4. Pan, X.; Liu, Z.; Zhang, G. Reliable and Accurate Wheel Size Measurement under Highly Reflective Conditions. Sensors 2018, 18, 4296.
  5. Pan, X.; Liu, Z.; Zhang, G.J. On-Site Reliable Wheel Size Measurement Based on Multisensor Data Fusion. IEEE Trans. Instrum. Meas. 2019, 68, 4575–4589.
  6. Xing, Z.; Chen, Y.; Wang, X.; Qin, Y.; Chen, S. Online detection system for wheel-set size of rail vehicle based on 2D laser displacement sensors. Optik 2016, 127, 1695–1702.
  7. Ran, Y.; He, Q.; Feng, Q.; Cui, J. High-Accuracy On-Site Measurement of Wheel Tread Geometric Parameters by Line-Structured Light Vision Sensor. IEEE Access 2021, 9, 52590–52600.
  8. Chen, B.; Pan, B. Camera calibration using synthetic random speckle pattern and digital image correlation. Opt. Lasers Eng. 2020, 126, 105919.
  9. He, H.; Li, H.; Huang, Y.; Huang, J.; Li, P. A novel efficient camera calibration approach based on K-SVD sparse dictionary learning. Measurement 2020, 159, 107798.
  10. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
  11. Wong, K.Y.K.; Zhang, G.; Chen, Z. A stratified approach for camera calibration using spheres. IEEE Trans. Image Process. 2011, 20, 305–316.
  12. Zhang, H.; Wong, K.Y.K.; Zhang, G. Camera calibration from images of spheres. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 499–503.
  13. Zhang, Z. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899.
  14. Xie, Z.; Wang, X.; Chi, S. Simultaneous calibration of the intrinsic and extrinsic parameters of structured-light sensors. Opt. Lasers Eng. 2014, 58, 9–18.
  15. Liu, Z.; Li, X.; Li, F.; Zhang, G. Calibration method for line-structured light vision sensor based on a single ball target. Opt. Lasers Eng. 2015, 69, 20–28.
  16. Huynh, D.Q.; Owens, R.A.; Hartmann, P.E. Calibrating a Structured Light Stripe System: A Novel Approach. Int. J. Comput. Vis. 1999, 33, 73–86.
  17. Xu, J.; Douet, J.; Zhao, J.; Song, L.; Chen, K. A simple calibration method for structured light-based 3D profile measurement. Opt. Laser Technol. 2013, 48, 187–193.
  18. Xie, Z.X.; Zhu, W.T.; Zhang, Z.W.; Jin, M. A novel approach for the field calibration of line structured-light sensors. Measurement 2009, 43, 190–196.
  19. Wei, Z.; Cao, L.; Zhang, G. A novel 1D target-based calibration method with unknown orientation for structured light vision sensor. Opt. Laser Technol. 2009, 42, 570–574.
  20. Min, T.; Qin, X.; Zhao, F. Numerical Analysis, 2nd ed.; China Science and Culture Press: Beijing, China, 2003; p. 68.
  21. Camera Calibration Toolbox for Matlab. Available online: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html (accessed on 14 October 2015).
  22. Aja-Fernández, S.; Vegas-Sánchez-Ferrero, G.; Martín-Fernández, M.; Alberola-López, C. Automatic noise estimation in images using local statistics. Additive and multiplicative cases. Image Vis. Comput. 2008, 27, 756–770.
  23. Fang, Z.; Yi, X. A novel natural image noise level estimation based on flat patches and local statistics. Multimed. Tools Appl. 2019, 78, 17337–17358.
  24. Jiang, P.; Zhang, J.Z. Fast and reliable noise level estimation based on local statistic. Pattern Recognit. Lett. 2016, 78, 8–13.
  25. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
