# Focus Assessment Method of Gaze Tracking Camera Based on ε-Support Vector Regression


## Abstract


## 1. Introduction

#### 1.1. Motivation

#### 1.2. Related Works

## 2. Proposed Method

#### 2.1. Overview of the Proposed Method

#### 2.2. Linear Normalization of Image Brightness Based on Mathematical Analyses

The energy incident on the image sensor can be expressed as E = πρ^{2}Ihν based on [25], where I is the intensity of the incident light, hν is the energy of a single photon and ρ is the radius of the blur circle on the sensor plane. As evident in Figure 5, we obtain the equations ρ/R = F/D and ρ = R(F/D), where R is the radius of the aperture, F is the focal length of the lens and D is the distance from the lens plane to the entrance pupil plane. Therefore, E = πR^{2}(F^{2}/D^{2})Ihν = R^{2}I × (πF^{2}hν/D^{2}). In the case that the image is focused, D is fixed, and F is a constant. If the constant M denotes πF^{2}hν/D^{2}, we can obtain the following equation:

E = R^{2}IM (1)

The focus level A is obtained by weighting E with the ratio of the pixel area to the area over which the energy is spread:

A = E(s_{p}/s) = R^{2}IM(s_{p}/s) (2)

where s_{p} is the area of one pixel of the image sensor. If the image is an ideally-focused one, s becomes the area of one pixel of the image sensor, and s is the same as s_{p}. If the image is a blurred one, s becomes the area of multiple pixels of the image sensor, which is larger than s_{p} and symmetrical about its center.

Considering our previous discussion on the difference between daytime and nighttime images, we denote E_{d} and E_{n} as the energies incident on the sensor in the case of a daytime image and a nighttime image, respectively. Furthermore, we denote A_{d} and A_{n} as the focus levels of the daytime image and nighttime image, respectively. Using the same f-number to capture these images, we obtain the same radius of aperture, R. By referring to Equation (1), we obtain E_{d} = R^{2}I_{d}M and E_{n} = R^{2}I_{n}M. Therefore, by referring to Equations (1) and (2), we obtain A_{d} = R^{2}I_{d}M(s_{p}/s) and A_{n} = R^{2}I_{n}M(s_{p}/s). As explained previously, since the energy originates from a part of the NIR spectrum of solar radiation and the NIR light of the illuminators, we can rewrite these equations as A_{d} = R^{2}(I_{dS} + I_{dI})M(s_{p}/s) and A_{n} = R^{2}(I_{nS} + I_{nI})M(s_{p}/s), where I_{dS} and I_{nS} denote the NIR light intensity of solar radiation during day and night, respectively, and I_{dI} and I_{nI} denote the NIR light intensity of the illuminators during day and night, respectively. Since we have used the same NIR illuminators in the daytime and nighttime cases, I_{dI} = I_{nI} = I_{I}, and we obtain the following equation:

A_{d}/A_{n} = (I_{dS} + I_{I})/(I_{nS} + I_{I}) = 1 + ΔI_{S}/(I_{nS} + I_{I}) (3)

where ΔI_{S} = I_{dS} − I_{nS} is the difference between the light intensities of daytime and nighttime; ΔI_{S} is a large positive value. Therefore, A_{d}/A_{n} is larger than 1, and consequently, A_{d} is larger than A_{n}. Since A is defined as the energy of the high-frequency component, which determines the focus level, the daytime energy (A_{d}) of the high-frequency component is larger than the nighttime energy (A_{n}). Therefore, the focus scores of daytime images are usually higher than those of nighttime images, as shown in Figure 3 and Figure 4.

If I_{I} is much larger than I_{nS} and ΔI_{S}, then ΔI_{S}/(I_{nS} + I_{I}) is almost 0, and A_{d}/A_{n} is close to 1, which indicates that the focus scores of the daytime images can be similar to those of the nighttime images. Since we used a brighter NIR illuminator while collecting Database 2 compared to Database 1 (see Section 3.1), I_{I} of Database 2 is larger than I_{I} of Database 1, and the consequent difference between the focus scores of daytime and nighttime images is smaller in Database 2 than in Database 1, as shown in Figure 3 and Figure 4.
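The effect of the illuminator intensity in Equation (3) can be illustrated numerically. The intensity values below are arbitrary placeholders in arbitrary units, not measurements from our experiments:

```python
# Equation (3): A_d / A_n = 1 + dI_S / (I_nS + I_I)

def focus_ratio(I_dS, I_nS, I_I):
    """Day/night ratio of high-frequency energy per Equation (3)."""
    dI_S = I_dS - I_nS          # daytime minus nighttime solar NIR intensity
    return 1 + dI_S / (I_nS + I_I)

I_dS, I_nS = 100.0, 1.0         # strong solar NIR by day, almost none at night

weak = focus_ratio(I_dS, I_nS, I_I=25.0)    # dimmer illuminator (Database 1-like)
strong = focus_ratio(I_dS, I_nS, I_I=50.0)  # 2x brighter illuminator (Database 2-like)

# A brighter illuminator pulls the day/night ratio toward 1.
print(weak, strong)   # weak > strong > 1
```

Doubling the illuminator intensity roughly halves the excess of the day/night ratio over 1, which is the behavior observed between Databases 1 and 2.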

We denote the focus levels of Databases 1 and 2 as A_{1} and A_{2}, respectively. Based on Equations (1) and (2), we obtain A_{1} = R_{1}^{2}I_{1}M(s_{p}/s_{1}) and A_{2} = R_{2}^{2}I_{2}M(s_{p}/s_{2}). In our experiments, Database 1 is captured using a camera with an f-number of 4, whereas Database 2 is acquired using a camera with an f-number of 10. The f-number is the ratio of the focal length of the lens to the diameter of the aperture [8]. As shown in Figure 5, R is the radius of the aperture, and F is the focal length of the lens. Therefore, F/(2R_{1}) = 4 in the case of Database 1 and F/(2R_{2}) = 10 in the case of Database 2, which implies R_{1} = (10/4)R_{2} = 2.5R_{2}. As shown in Figure 5, ρ is proportional to R. Therefore, ρ_{1} = 2.5ρ_{2}, which results in s_{1} = 2.5^{2}s_{2} = 6.25s_{2} because s = πρ^{2}, as shown in Figure 5. In our experiments, the power of the illuminators in Database 2 is twice that in Database 1, which implies I_{I1} = 0.5I_{I2}. Assuming that the NIR energy of solar light is constant:

I_{1} = I_{S} + I_{I1} = (I_{S} + I_{I2}) − 0.5I_{I2} = I_{2} − 0.5I_{I2} (4)

Substituting R_{1} = 2.5R_{2} and s_{1} = 6.25s_{2} into A_{1} = R_{1}^{2}I_{1}M(s_{p}/s_{1}) gives A_{1} = R_{2}^{2}I_{1}M(s_{p}/s_{2}). Therefore, the relationship between A_{1} and A_{2} is shown as follows:

A_{2} − A_{1} = R_{2}^{2}(I_{2} − I_{1})M(s_{p}/s_{2}) = 0.5I_{I2}hνs_{p} (5)

where the last equality holds because R_{2}^{2}M/s_{2} = R_{2}^{2}(πF^{2}hν/D^{2})/(πρ_{2}^{2}) = hν by ρ_{2} = R_{2}(F/D). Since this difference is positive, the high-frequency components of Database 2 (A_{2}) are larger than those of Database 1 (A_{1}). The difference between these two high-frequency components is ΔA = 0.5I_{I2}hνs_{p}, which depends on the power of the illuminators (I_{I2}).
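A quick numerical check of the derivation above can be done by evaluating the focus level with the full formula A = R^{2}IM(s_{p}/s) for both databases and comparing the difference to 0.5I_{I2}hνs_{p}. All numbers below are made up for illustration and are not values from our experiments:

```python
import math

# Made-up values (arbitrary units); none of these come from the experiments.
F, D = 0.05, 1.0                 # focal length, lens-to-entrance-pupil distance
R2 = F / (2 * 10)                # aperture radius for f-number 10 (Database 2)
R1 = 2.5 * R2                    # aperture radius for f-number 4 (Database 1)
h_nu, s_p = 2.5e-19, 1e-11       # photon energy, area of one pixel
I_S, I_I2 = 40.0, 20.0           # solar NIR intensity, Database 2 illuminator
I_I1 = 0.5 * I_I2                # Database 1 illuminator (half the power)

M = math.pi * F**2 * h_nu / D**2

def focus_level(R, I):
    """A = R^2 * I * M * (s_p / s), with s = pi * rho^2 and rho = R * F / D."""
    s = math.pi * (R * F / D) ** 2
    return R**2 * I * M * (s_p / s)

A1 = focus_level(R1, I_S + I_I1)
A2 = focus_level(R2, I_S + I_I2)

# The difference collapses to 0.5 * I_I2 * h_nu * s_p, independent of R.
print(A2 - A1, 0.5 * I_I2 * h_nu * s_p)
```

Note that the aperture radius and blur area cancel exactly, which is why the difference depends only on the illuminator intensity, the photon energy and the pixel area.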

Based on Equation (3), the difference between the focus scores of daytime and nighttime images can be reduced by decreasing ΔI_{S} (= I_{dS} − I_{nS}) or increasing I_{I}. In order to decrease ΔI_{S}, we apply linear normalization by compensating the brightness of all pixels of the daytime and nighttime images so that their average grey levels become the same value. The result of decreasing ΔI_{S} can be seen in Table 2 by comparing the results of “before linear normalization” and “after linear normalization”. On the other hand, in order to increase I_{I}, we increase the power of the illuminators in the gaze-tracking system. The result of this work is Database 2, in which we set the power of the illuminator to be two times larger than that of the illuminator in Database 1. Figure 3 and Figure 4 show that the difference between daytime and nighttime images in Database 2 is smaller than that in Database 1. This result can also be observed in Table 2. The rightmost column of Table 2 (“Database 2”, “after linear normalization”) describes the result of decreasing ΔI_{S} and increasing I_{I} simultaneously.
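The linear normalization step can be sketched as follows. This is a minimal illustration of matching the average grey level of day and night images to a common target; the target value of 128 and the sample patches are assumptions for the sketch, not parameters from our system:

```python
def normalize_brightness(image, target_mean=128.0):
    """Linearly shift pixel grey levels so the image mean equals target_mean.

    `image` is a 2-D list of grey levels in [0, 255]; the offset is clipped
    so that the result stays a valid grey level.
    """
    pixels = [p for row in image for p in row]
    offset = target_mean - sum(pixels) / len(pixels)
    return [[min(255.0, max(0.0, p + offset)) for p in row] for row in image]

# A dark "nighttime" patch and a bright "daytime" patch (made-up values):
night = [[30, 40], [50, 40]]
day = [[190, 200], [210, 200]]

night_n = normalize_brightness(night)
day_n = normalize_brightness(day)

# Both patches now share the same average grey level, shrinking dI_S.
mean = lambda img: sum(p for r in img for p in r) / 4
print(mean(night_n), mean(day_n))  # 128.0 128.0
```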

In addition, the high-frequency components (A_{d}) of daytime images are theoretically larger than those (A_{n}) of nighttime images when other factors are not considered. For this derivation, all of the mentioned factors, such as object distance, image distance and point spread function, are actually set to be the same in both the daytime and nighttime images of our experiments. Through this simplified derivation, we found that, when all of these factors are the same, the variation of the high-frequency components in the captured images, even at the same Z distance, can be reduced by decreasing the brightness change of the captured images (ΔI_{S} of Equation (3)), and this is accomplished by our linear normalization method. Because a derivation considering all of these factors together is complicated, we leave it for future work.

#### 2.3. Four Focus Measurements

where w_{i} represents the weight at the i-th level index, and HH_{i} and LL_{i} are the high-frequency (in both horizontal and vertical directions) and low-frequency (in both horizontal and vertical directions) sub-bands, respectively, at the i-th level index. The weight is required because the values of the high-high (HH) components are very small and decrease with the level of transformation. The focus score is passed through nonlinear normalization to obtain a value in the range of 0–1.
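As a rough illustration of a wavelet-domain focus measure, the sketch below computes the energy of the HH sub-band of a one-level Haar transform on a small patch. It does not reproduce the exact weighted multi-level score described above, and the patch values are made up:

```python
def haar_hh_energy(image):
    """Energy of the one-level Haar HH sub-band of a 2-D grey-level patch.

    For each 2x2 block with pixels [[a, b], [c, d]], the HH coefficient is
    (a - b - c + d) / 2; the energy is the sum of squared coefficients.
    """
    energy = 0.0
    for i in range(0, len(image), 2):
        for j in range(0, len(image[0]), 2):
            a, b = image[i][j], image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            hh = (a - b - c + d) / 2.0
            energy += hh * hh
    return energy

sharp = [[0, 255, 0, 255],        # high-contrast (in-focus-like) patch
         [255, 0, 255, 0],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]
blurred = [[120, 130, 120, 130],  # low-contrast (blurred-like) patch
           [130, 120, 130, 120],
           [120, 130, 120, 130],
           [130, 120, 130, 120]]

# The sharp patch has far more high-frequency (HH) energy.
print(haar_hh_energy(sharp) > haar_hh_energy(blurred))  # True
```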

#### 2.4. ε-Support Vector Regression with a Symmetrical Gaussian Radial Basis Function Kernel for Combining Four Focus Scores

The ε-insensitive loss function is defined as L(y, f(**x**, α)) = |y − f(**x**, α)|_{ε}, where y is the output, **x** is the input pattern, α is a dual variable and f(**x**, α) is the estimation function. |y − f(**x**, α)|_{ε} is zero if |y − f(**x**, α)| ≤ ε, and the value (|y − f(**x**, α)| − ε) is obtained otherwise [26]. In order to estimate the regression function using SVR, we adjust two parameters, the value of the ε-insensitivity and the regularization parameter C, together with the kernel type [27]. There are many types of kernel functions used in ε-SVR, such as the linear, polynomial, sigmoid and RBF kernels. In our research, we compared the accuracies of various kernels (see Section 3), and we used the RBF kernel in ε-SVR. The RBF kernel is described as a symmetrical Gaussian function, $k\left(\mathit{u},\text{}\mathit{v}\right)=\text{}{e}^{-\gamma {\left|\mathit{u}-\mathit{v}\right|}^{2}}$. With the training data, the parameter γ is optimized to the value of 0.025. By changing the value of ε, we can control the sparseness of the SVR solution [27]. In our research, we set ε to the optimal value of 0.001 with the training data, and the regularization parameter C is set to 10,000.
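The ε-insensitive loss described above can be written as a small helper; the sample prediction values are arbitrary:

```python
def eps_insensitive_loss(y, f_x, eps=0.001):
    """ε-insensitive loss: 0 inside the ε-tube, linear outside it."""
    return max(0.0, abs(y - f_x) - eps)

# Errors smaller than eps cost nothing; larger errors are penalized linearly.
print(eps_insensitive_loss(0.500, 0.5005))  # 0.0 (inside the tube)
print(eps_insensitive_loss(0.500, 0.510))   # ~0.009
```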

The input vectors, **x**, consist of four elements, which are the focus scores obtained by the four individual methods: Daugman’s kernel, Kang’s kernel, DWT and HWT. The input vectors (**x**) are mapped through the mapping function Φ(**x**) onto the feature space, where the RBF kernel function can be computed. Φ(**x**) maps the input vector (**x**) from a low-dimensional space into a vector in a high-dimensional space. For example, an input vector in two dimensions is transformed into one in three dimensions by Φ(**x**). This is because vectors are more likely to be separable in higher dimensions than in lower dimensions [26,27,28,29]. The function Φ(**x**) is not restricted to one type, such as the sigmoid function; any kind of non-linear function can be used. The mapped vectors are sent to the symmetrical Gaussian RBF kernel: k(**x**, **x**_{i}) = RBF(Φ(**x**), Φ(**x**_{i})). Subsequently, the kernel function values are weighted with w_{i} to calculate the output focus score (fs_{out}). In the last step, the weighted kernel function values are summed into the value P (= Σ_{i}(w_{i}k(**x**, **x**_{i}))), and subsequently, P is passed to the linear combination for the regression estimation fs_{out} = σ(P) = P + b, where b is a scalar real value.
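The prediction path of Figure 7 (kernel evaluations, weighting and the final linear combination) can be sketched as follows. The support vectors, weights and bias below are invented placeholders, not trained values from our system; only γ = 0.025 matches the text:

```python
import math

GAMMA = 0.025  # RBF width used in the text

def rbf(u, v, gamma=GAMMA):
    """Symmetrical Gaussian RBF kernel k(u, v) = exp(-gamma * |u - v|^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq_dist)

def predict(x, support_vectors, weights, bias):
    """fs_out = sum_i w_i * k(x, x_i) + b, as in the ε-SVR architecture."""
    P = sum(w * rbf(x, sv) for w, sv in zip(weights, support_vectors))
    return P + bias

# Hypothetical support vectors of four focus scores
# (Daugman, Kang, DWT, HWT) and hypothetical weights/bias:
svs = [(0.9, 0.8, 0.7, 0.9), (0.2, 0.3, 0.1, 0.2)]
w = [0.6, -0.4]
b = 0.3

x = (0.85, 0.75, 0.65, 0.9)    # focus scores of a new image (made up)
print(predict(x, svs, w, b))   # combined focus score
```

In a real system, the support vectors, weights and bias come from training the ε-SVR on focus-score/Z-distance pairs.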

#### 2.5. Criterion for Comparing Performances of Different Focus Measurement Methods

The accurate amount (A_{CC}) is the range from the lower terminal of Point 1 to the upper terminal of Point 2, restricting the ROA, as shown in Figure 8b. The performance evaluation ratio (PER) is then obtained as follows:

PER = ROA/(A_{CC} + ROA)

A smaller PER indicates that the focus scores at different Z distances are better separated.
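Under the reading that PER is the ratio of the relative overlapping amount to the sum of the accurate amount and the overlapping amount (an assumption based on the definitions above), the criterion can be sketched as:

```python
def per(roa, acc):
    """Performance evaluation ratio: overlap relative to (A_CC + ROA).

    A smaller (or negative) ROA between the 2STD error bars of adjacent
    Z distances yields a smaller PER, i.e., better separability.
    Hypothetical definition for illustration.
    """
    return roa / (acc + roa)

# Made-up overlap/accurate amounts for two methods:
print(per(roa=2.0, acc=8.0))    # 0.2 (more overlap, worse)
print(per(roa=-1.0, acc=9.0))   # -0.125 (no overlap, better)
```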

## 3. Experimental Results

#### 3.1. Two Experimental Databases

#### 3.2. Performance Comparison of Various Focus Measurement Methods

#### 3.3. Performance Comparison of Various Regressions with Four Focus Measurements

#### 3.4. Performance Comparison of Proposed Method and Four Individual Methods of Focus Measurement

Because the focus scores (the input vector **x** of Section 2.4) obtained by Daugman’s mask, Kang’s mask and the Daubechies and Haar wavelet transforms do not change linearly with the Z distance, as shown in Figure 12, and because these vectors pass through the nonlinear mapping function (Φ(**x**) of Section 2.4) and the nonlinear RBF kernel (k(**x**, **x**_{i}) of Section 2.4), the graphs of Figure 12 can be linear or nonlinear.

#### 3.5. Auto-Focusing Based on Our Focus Measurement Method

## 4. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

1. Hansen, D.W.; Ji, Q. In the Eye of the Beholder: A Survey of Models for Eyes and Gaze. IEEE Trans. Pattern Anal. Mach. Intell. **2010**, 32, 478–500.
2. Duchowski, A.T. A Breadth-First Survey of Eye-Tracking Applications. Behav. Res. Methods Instrum. Comput. **2002**, 34, 455–470.
3. Morimoto, C.H.; Mimica, M.R.M. Eye Gaze Tracking Techniques for Interactive Applications. Comput. Vis. Image Underst. **2005**, 98, 4–24.
4. Zhu, Z.; Ji, Q. Novel Eye Gaze Tracking Techniques under Natural Head Movement. IEEE Trans. Biomed. Eng. **2007**, 54, 2246–2260.
5. Cho, D.-C.; Kim, W.-Y. Long-range Gaze Tracking System for Large Movements. IEEE Trans. Biomed. Eng. **2013**, 60, 3432–3440.
6. Hennessey, C.; Noureddin, B.; Lawrence, P. A Single Camera Eye-Gaze Tracking System with Free Head Motion. In Proceedings of the Symposium on Eye Tracking Research & Applications, San Diego, CA, USA, 27–29 March 2006; pp. 87–94.
7. Shih, S.-W.; Liu, J. A Novel Approach to 3-D Gaze Tracking Using Stereo Cameras. IEEE Trans. Syst. Man Cybern. Part B Cybern. **2004**, 34, 234–245.
8. F-Number. Available online: https://en.wikipedia.org/wiki/F-number (accessed on 7 June 2016).
9. Daugman, J. How Iris Recognition Works. IEEE Trans. Circuits Syst. Video Technol. **2004**, 14, 21–30.
10. Kang, B.J.; Park, K.R. A Robust Eyelash Detection Based on Iris Focus Assessment. Pattern Recognit. Lett. **2007**, 28, 1630–1639.
11. Jang, J.; Park, K.R.; Kim, J.; Lee, Y. New Focus Assessment Method for Iris Recognition Systems. Pattern Recognit. Lett. **2008**, 29, 1759–1767.
12. Wan, J.; He, X.; Shi, P. An Iris Image Quality Assessment Method Based on Laplacian of Gaussian Operation. In Proceedings of the IAPR Conference on Machine Vision Applications, Tokyo, Japan, 16–18 May 2007; pp. 248–251.
13. Grabowski, K.; Sankowski, W.; Zubert, M.; Napieralska, M. Focus Assessment Issues in Iris Image Acquisition System. In Proceedings of the International Conference on Mixed Design of Integrated Circuits and Systems, Ciechocinek, Poland, 21–23 June 2007; pp. 628–631.
14. Zhang, J.; Feng, X.; Song, B.; Li, M.; Lu, Y. Multi-Focus Image Fusion Using Quality Assessment of Spatial Domain and Genetic Algorithm. In Proceedings of the Conference on Human System Interactions, Krakow, Poland, 25–27 May 2008; pp. 71–75.
15. Wei, Z.; Tan, T.; Sun, Z.; Cui, J. Robust and Fast Assessment of Iris Image Quality. In Proceedings of the International Conference on Biometrics, Hong Kong, China, 5–7 January 2006; pp. 464–471.
16. Kautsky, J.; Flusser, J.; Zitová, B.; Šimberová, S. A New Wavelet-based Measure of Image Focus. Pattern Recognit. Lett. **2002**, 23, 1785–1794.
17. Bachoo, A. Blind Assessment of Image Blur Using the Haar Wavelet. In Proceedings of the Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists, Bela, South Africa, 11–13 October 2010; pp. 341–345.
18. Tong, H.; Li, M.; Zhang, H.; Zhang, C. Blur Detection for Digital Images Using Wavelet Transform. In Proceedings of the IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 27–30 June 2004; pp. 17–20.
19. Daubechies Wavelet. Available online: https://en.wikipedia.org/wiki/Daubechies_wavelet (accessed on 25 May 2017).
20. Daubechies, I. Ten Lectures on Wavelets, 1st ed.; SIAM: Philadelphia, PA, USA, 1992.
21. Haar Wavelet. Available online: https://en.wikipedia.org/wiki/Haar_wavelet (accessed on 25 May 2017).
22. Ultraviolet. Available online: https://en.wikipedia.org/wiki/Ultraviolet (accessed on 13 January 2017).
23. Visible Spectrum. Available online: https://en.wikipedia.org/wiki/Visible_spectrum#cite_note-1 (accessed on 13 January 2017).
24. Infrared. Available online: https://en.wikipedia.org/wiki/Infrared (accessed on 13 January 2017).
25. Angus, A.A. A New Physical Constant and Its Application to Chemical Energy Production. Fuel Chem. Div. Prepr. **2003**, 48, 469–473.
26. Vapnik, V.N. The Nature of Statistical Learning Theory, 1st ed.; Springer: Berlin, Germany, 1995.
27. Schölkopf, B.; Smola, A.J.; Williamson, R.C.; Bartlett, P.L. New Support Vector Algorithms. Neural Comput. **2000**, 12, 1207–1245.
28. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, 1st ed.; The MIT Press: Cambridge, MA, USA, 2001.
29. Support Vector Machines. Available online: http://www.stanford.edu/class/cs229/notes/cs229-notes3.pdf (accessed on 7 June 2016).
30. Bishop, C. Pattern Recognition and Machine Learning; Springer: Berlin, Germany, 2006.
31. Haykin, S. Neural Networks: A Comprehensive Foundation, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
32. Multilayer Perceptron. Available online: http://en.wikipedia.org/wiki/Multilayer_perceptron (accessed on 7 June 2016).
33. Areerachakul, S.; Sanguansintukul, S. Classification and Regression Trees and MLP Neural Network to Classify Water Quality of Canals in Bangkok, Thailand. Int. J. Intell. Comput. Res. **2010**, 1, 43–50.
34. Wefky, A.M.; Espinosa, F.; Jiménez, J.A.; Santiso, E.; Rodriguez, J.M.; Fernández, A.J. Alternative Sensor System and MLP Neural Network for Vehicle Pedal Activity Estimation. Sensors **2010**, 10, 3798–3814.
35. Vehtari, A.; Lampinen, J. Bayesian MLP Neural Networks for Image Analysis. Pattern Recognit. Lett. **2000**, 21, 1183–1191.
36. Patino-Escarcina, R.E.; Costa, J.A.F. An Evaluation of MLP Neural Network Efficiency for Image Filtering. In Proceedings of the International Conference on Intelligent Systems Design and Applications, Rio de Janeiro, Brazil, 20–24 October 2007; pp. 335–340.
37. Laser Rangefinder DLE70 Professional. Available online: http://www.bosch-pt.com/productspecials/professional/dle70/uk/en/start/index.htm (accessed on 19 January 2017).
38. “Patriot”, Polhemus. Available online: http://www.polhemus.com/?page=Motion_Patriot (accessed on 24 March 2017).
39. Viola, P.; Jones, M.J. Robust Real-time Face Detection. Int. J. Comput. Vis. **2004**, 57, 137–154.

**Figure 1.** Proposed gaze-tracking system for intelligent television (TV) interface. (**a**) Gaze-tracking system; (**b**) detection results of the pupil center and corneal specular reflection (SR) center in the image captured by the narrow-view camera (displayed on the left monitor), and the detected face area in the image captured by the wide-view camera (displayed on the right monitor).

**Figure 3.** The effect of brightness variation on the spatial domain methods with (**a**) Database 1 and (**b**) Database 2. “Daugman D-1-B” and “Daugman N-1-B” are the graphs by Daugman’s mask using the daytime and nighttime images in Database 1, respectively. “Kang D-1-B” and “Kang N-1-B” are the graphs by Kang’s mask using the daytime and nighttime images in Database 1, respectively. Following the same notation, “Daugman D-2-B”, “Daugman N-2-B”, “Kang D-2-B” and “Kang N-2-B” represent the results obtained using Database 2.

**Figure 4.** The effect of brightness variation on the wavelet domain methods with (**a**) Database 1 and (**b**) Database 2.

**Figure 6.** Symmetrical kernels for measuring the focus score in the spatial domain-based methods. (**a**) Daugman’s convolution kernel; (**b**) Kang’s convolution kernel.

**Figure 7.** ε-SVR with symmetrical Gaussian radial basis function (RBF) kernel architecture in the proposed method with four-element input vectors **x** and a single output of the focus score.

**Figure 8.** An example graph of the average focus score according to the Z distance. (**a**) Graph with the error bar of 2 standard deviations (2STD); (**b**) graph with the relative overlapping amount (ROA) and the accurate amount (A_{CC}) + ROA. As shown in Figure 3 and Figure 4, the capturing points are 125, 130, 135, 140, 145, 150 and 155 cm.

**Figure 10.** Graph of the average focus score according to the Z distance of the spatial domain methods and wavelet domain methods. (**a**) Database 1; (**b**) Database 2.

**Figure 11.** The focus score graphs of ε-SVR, ν-SVR, LR and MLP using different kernels. (**a**) Database 1; (**b**) Database 2.

**Figure 12.** The focus score graphs of the proposed method and the four individual methods of focus measurement. (**a**) Database 1; (**b**) Database 2.

Category | Focus Measurements in Spatial Domain | Focus Measurements in Wavelet Domain | Hybrid Method (Proposed Method)
---|---|---|---
Method | | The blur amount is computed as the sum of the energies of the low-high (LH) and high-low (HL) sub-bands [17] | A combination of the spatial and wavelet methods using ε-SVR
Advantages | The passing band in the frequency domain can be easily determined by changing the kernel coefficients. The focus score is less affected by a change in the brightness of the input image than in the wavelet domain | Various frequency bands of the image can be examined for the focus value by a wavelet transformation. Smaller processing time | Higher accuracy of focus assessment compared to the spatial or wavelet domain methods
Disadvantages | Less accurate focus assessment, since both low and high frequency components are not considered | It is not easy to determine the passing band in the frequency domain by selecting the kind of wavelet kernel and decomposition level. The focus score is more affected by a change in the brightness of the input image than in the spatial domain | A training procedure is required

**Table 2.**The average difference between focus scores of daytime and nighttime images: before and after using linear normalization.

Methods | Database 1 (Before Linear Normalization) | Database 1 (After Linear Normalization) | Database 2 (Before Linear Normalization) | Database 2 (After Linear Normalization)
---|---|---|---|---
Daugman [9] | 4.08 | 3.43 | 2.99 | 2.51
Kang [10] | 4.81 | 3.93 | 3.78 | 1.95
DWT [11] | 1.97 | 1.93 | 0.32 | 0.18
HWT [18] | 5.98 | 4.78 | 2.16 | 0.86

**Table 3.**Average performance evaluation ratio (PER) (standard deviation) of four spatial domain methods and two wavelet domain methods.

Method | Database 1 | Database 2
---|---|---
Daugman [9] | 0.25 (0.031) | −0.0007 (0.029)
Entropy [13] | 0.37 (0.027) | 0.16 (0.03)
Kang [10] | 0.18 (0.025) | 0.13 (0.026)
LoG [12] | 0.35 (0.028) | 0.24 (0.029)
DWT [11] | 0.05 (0.025) | −0.09 (0.03)
HWT [18] | 0.08 (0.026) | −0.15 (0.025)

**Table 4.**Average PER (standard deviation) values of ε-SVR, ν-SVR, LR and MLP using different kernels with the two databases.

Method | Kernel | Database 1 | Database 2
---|---|---|---
ν-SVR | Linear | 0.0399 (0.029) | −0.2492 (0.03)
ν-SVR | Polynomial | 0.0434 (0.028) | −0.3036 (0.031)
ν-SVR | Sigmoid | 0.0971 (0.026) | −0.0649 (0.029)
ν-SVR | RBF | 0.0382 (0.027) | −0.2667 (0.032)
LR | | 0.0186 (0.031) | −0.2381 (0.028)
ε-SVR | Linear | 0.0654 (0.03) | −0.2568 (0.027)
ε-SVR | Polynomial | 0.0382 (0.025) | −0.1798 (0.029)
ε-SVR | Sigmoid | 0.1462 (0.028) | −0.2095 (0.027)
ε-SVR | RBF | 0.0002 (0.025) | −0.3531 (0.026)
MLP | Linear | 0.0185 (0.028) | −0.2392 (0.029)
MLP | Sigmoid | 0.0788 (0.029) | −0.3528 (0.03)
MLP | Tanh | 0.2286 (0.03) | −0.1530 (0.031)

**Table 5.**Average PER (standard deviation) values of the proposed method and the four individual methods of the focus measurement with the two databases.

Method | Database 1 | Database 2 | Average
---|---|---|---
Daugman [9] | 0.25 (0.031) | −0.0007 (0.029) | 0.12465 (0.03)
Kang [10] | 0.18 (0.025) | 0.13 (0.026) | 0.155 (0.026)
DWT [11] | 0.05 (0.025) | −0.09 (0.03) | −0.02 (0.028)
HWT [18] | 0.08 (0.026) | −0.15 (0.025) | −0.035 (0.026)
Proposed method | 0.0002 (0.025) | −0.3531 (0.026) | −0.17645 (0.026)

**Table 6.**Comparisons on MAEs of Z distance estimation by our method and previous methods (unit: cm).

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Luong, D.T.; Kang, J.S.; Nguyen, P.H.; Lee, M.B.; Park, K.R.
Focus Assessment Method of Gaze Tracking Camera Based on ε-Support Vector Regression. *Symmetry* **2017**, *9*, 86.
https://doi.org/10.3390/sym9060086
