Search Results (13)

Search Parameters:
Keywords = pupil center localization

14 pages, 4060 KiB  
Article
Real-Time Pupil Localization Algorithm for Blurred Images Based on Double Constraints
by Shufang Qiu, Yi Wang, Zeyuan Liu, Huaiyu Cai and Xiaodong Chen
Sensors 2025, 25(6), 1749; https://doi.org/10.3390/s25061749 - 12 Mar 2025
Viewed by 518
Abstract
Accurate pupil localization is crucial for the eye-tracking technology used in monitoring driver fatigue. However, factors such as poor road conditions may result in blurred eye images being captured by eye-tracking devices, affecting the accuracy of pupil localization. To address these problems, we propose a real-time pupil localization algorithm for blurred images based on double constraints. The algorithm is divided into three stages: extracting the rough pupil area based on grayscale constraints, refining the pupil region based on geometric constraints, and determining the pupil center from geometric moments. First, the rough pupil area is adaptively extracted from the input image based on grayscale constraints. Then, the designed pupil shape index is used to refine the pupil area based on geometric constraints. Finally, the geometric moments are calculated to quickly locate the pupil center. The experimental results demonstrate that the algorithm exhibits superior localization performance on both blurred and clear images, with a localization error within 6 pixels, an accuracy exceeding 97%, and real-time performance of up to 85 fps. The proposed algorithm provides an efficient and precise solution for pupil localization, with practical applicability to real-world driver fatigue monitoring.
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
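
As a rough illustration of the three-stage pipeline sketched in this abstract, the following is a minimal OpenCV version. The percentile threshold and the circularity score are stand-in assumptions for the paper's adaptive grayscale constraint and pupil shape index, which the abstract does not specify.

```python
import cv2
import numpy as np

def locate_pupil(gray):
    """Sketch: threshold the darkest pixels, keep the most circular blob,
    take its centroid via geometric moments. Not the authors' implementation."""
    # Stage 1 (grayscale constraint): assume the pupil lies among the darkest pixels.
    thresh = np.percentile(gray, 5)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    # Stage 2 (geometric constraint): keep the most circular dark blob.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, 0.0
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if area < 50 or perim == 0:
            continue
        circularity = 4 * np.pi * area / perim ** 2  # 1.0 for a perfect circle
        if circularity > best_score:
            best, best_score = c, circularity
    if best is None:
        return None
    # Stage 3: pupil center from geometric moments of the chosen contour.
    m = cv2.moments(best)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]
```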

14 pages, 3665 KiB  
Article
An Irregular Pupil Localization Network Driven by ResNet Architecture
by Genjian Yang, Wenbai Chen, Peiliang Wu, Jianping Gou and Xintong Meng
Mathematics 2024, 12(17), 2703; https://doi.org/10.3390/math12172703 - 30 Aug 2024
Cited by 1 | Viewed by 944
Abstract
The precise and robust localization of pupils is crucial for advancing medical diagnostics and enhancing user experience. Currently, the predominant method for determining the center of the pupil relies on the principles of multi-view geometry, necessitating the simultaneous operation of multiple sensors at different angles. This study introduces a single-stage pupil localization network named ResDenseDilateNet, which is aimed at utilizing a single sensor for pupil localization and ensuring accuracy and stability across various application environments. Our network utilizes near-infrared (NIR) imaging to ensure high-quality image output, meeting the demands of most current applications. A unique technical highlight is the seamless integration of the efficient characteristics of the Deep Residual Network (ResNet) with the Dense Dilated Convolutions Merging Module (DDCM), which substantially enhances the network’s performance in precisely capturing pupil features, providing a deep and accurate understanding and extraction of pupil details. This innovative combination strategy greatly improves the system’s ability to handle the complexity and subtleties of pupil detection, as well as its adaptability to dynamic pupil changes and environmental factors. Furthermore, we have proposed an innovative loss function, the Contour Centering Loss, which is specifically designed for irregular or partially occluded pupil scenarios. This method innovatively calculates the pupil center point, significantly enhancing the accuracy of pupil localization and robustness of the model in dealing with varied pupil morphologies and partial occlusions. The technology presented in this study not only significantly improves the precision of pupil localization but also exhibits exceptional adaptability and robustness in dealing with complex scenarios, diverse pupil shapes, and occlusions, laying a solid foundation for the future development and application of pupil localization technology.
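
The abstract gives only a high-level picture of ResDenseDilateNet; a hypothetical PyTorch sketch of its central idea (merging parallel dilated convolutions over backbone features, then regressing a pupil center) might look as follows. Channel counts, dilation rates, and the head design are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DilatedMergeBlock(nn.Module):
    """Hypothetical stand-in for a dense dilated convolutions merging module:
    parallel dilated 3x3 convolutions, concatenated and fused with a 1x1 conv."""
    def __init__(self, ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(ch * len(dilations), ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class PupilCenterHead(nn.Module):
    """Regress a normalized (x, y) pupil center from backbone feature maps."""
    def __init__(self, ch):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(ch, 2)

    def forward(self, feats):
        return torch.sigmoid(self.fc(self.pool(feats).flatten(1)))
```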

27 pages, 4723 KiB  
Review
Methods for Detecting the Patient’s Pupils’ Coordinates and Head Rotation Angle for the Video Head Impulse Test (vHIT), Applicable for the Diagnosis of Vestibular Neuritis and Pre-Stroke Conditions
by G. D. Mamykin, A. A. Kulesh, Fedor L. Barkov, Y. A. Konstantinov, D. P. Sokol’chik and Vladimir Pervadchuk
Computation 2024, 12(8), 167; https://doi.org/10.3390/computation12080167 - 18 Aug 2024
Viewed by 4854
Abstract
In the contemporary era, dizziness is a prevalent ailment among patients. It can be caused by either vestibular neuritis or a stroke. Given the lack of diagnostic utility of instrumental methods in acute isolated vertigo, the differentiation of vestibular neuritis and stroke is primarily clinical. As part of the initial differential diagnosis, the physician focuses on the characteristics of nystagmus and the results of the video head impulse test (vHIT). Instruments for accurate vHIT are costly and are often utilized exclusively in healthcare settings. The objective of this paper is to review contemporary methodologies for accurately detecting the position of pupil centers in both eyes of a patient and for precisely extracting their coordinates. Additionally, the paper describes methods for accurately determining the head rotation angle under diverse imaging and lighting conditions. Furthermore, the suitability of these methods for vHIT is evaluated. We assume a maximum allowable error of 0.005 radians per frame for detecting pupil coordinates, or 0.3 degrees per frame for detecting the head position. We found that, under such conditions, the most suitable approaches for head posture detection are deep learning (including LSTM networks), search by template matching, linear regression of EMG sensor data, and optical fiber sensor usage. The most relevant approaches for pupil localization for our medical tasks are deep learning, geometric transformations, decision trees, and RANSAC. This study might assist in identifying a number of approaches that could be employed in the future to construct a high-accuracy system for vHIT based on a smartphone or a home computer, with subsequent signal processing and initial diagnosis.
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)
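
To make the error budget above concrete: with a calibrated camera, a pixel-level pupil localization error maps to an angular error via the focal length in pixels. A small worked example (the focal length is an assumed value, not one from the paper):

```python
import math

def pixel_error_to_radians(err_px, focal_px):
    """Small-angle conversion of a pupil localization error (pixels) to an
    angular error (radians), given the camera focal length in pixels."""
    return math.atan(err_px / focal_px)

# e.g., a 2-pixel error with an assumed 640-pixel focal length:
# atan(2 / 640) ~= 0.0031 rad per frame, inside the 0.005 rad budget above.
```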

22 pages, 18174 KiB  
Article
Research on Pupil Center Localization Detection Algorithm with Improved YOLOv8
by Kejuan Xue, Jinsong Wang and Hao Wang
Appl. Sci. 2024, 14(15), 6661; https://doi.org/10.3390/app14156661 - 30 Jul 2024
Cited by 2 | Viewed by 1650
Abstract
Addressing issues such as low localization accuracy, poor robustness, and long average localization time in pupil center localization algorithms, an improved YOLOv8 network-based pupil center localization algorithm is proposed. This algorithm incorporates a dual attention mechanism into the YOLOv8n backbone network, which attends to the global contextual information of the input while reducing dependence on specific regions. This mitigates the difficulty of pupil localization under occlusions such as eyelashes and eyelids, enhancing the model’s robustness. Additionally, atrous convolutions are introduced in the encoding section, which shrink the network model while improving detection speed. The Focaler-IoU loss function, by focusing on different regression samples, improves detector performance across detection tasks. The improved YOLOv8n algorithm achieved 0.99971 precision, 1 recall, 0.99611 mAP50, and 0.96495 mAP50-95. Moreover, it reduced the model parameters by 7.18% and the computational complexity by 10.06%, while enhancing environmental anti-interference ability and robustness and shortening the localization time, improving real-time detection.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
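
For readers unfamiliar with Focaler-IoU: as we understand the formulation, it linearly remaps the IoU within a band [d, u] so the regression loss concentrates on a chosen difficulty range. A hedged sketch, with illustrative threshold values:

```python
import torch

def focaler_iou_loss(iou, d=0.0, u=0.95):
    """Sketch of a Focaler-IoU-style loss: remap IoU linearly on [d, u],
    clamp to [0, 1], and use 1 - remapped IoU as the loss. The exact
    thresholds used in the paper are not given in this abstract."""
    remapped = ((iou - d) / (u - d)).clamp(0.0, 1.0)
    return 1.0 - remapped
```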

13 pages, 5561 KiB  
Article
Double-Center-Based Iris Localization and Segmentation in Cooperative Environment with Visible Illumination
by Jiangang Li and Xin Feng
Sensors 2023, 23(4), 2238; https://doi.org/10.3390/s23042238 - 16 Feb 2023
Cited by 4 | Viewed by 2675
Abstract
Iris recognition is considered one of the most accurate and reliable biometric technologies, and it is widely used in security applications. Iris segmentation and iris localization, as important preprocessing tasks for iris biometrics, jointly determine the valid iris part of the input eye image; however, iris images captured in non-cooperative, visible-illumination environments often suffer from adverse noise (e.g., light reflection, blurring, and glasses occlusion), which challenges many existing segmentation-based parameter-fitting localization methods. To address this problem, we propose a novel double-center-based end-to-end iris localization and segmentation network. Unlike many previous iris localization methods, which apply heavy post-processing (e.g., integro-differential operators or circular Hough transforms) to the iris or contour mask to fit the inner and outer circles, our method directly predicts the inner and outer circles of the iris on the feature map. In our method, an anchor-free center-based double-circle iris-localization network and an iris mask segmentation module are designed to directly detect the circle boundaries of the pupil and iris and to segment the iris region in an end-to-end framework. To facilitate efficient training, we propose a concentric sampling strategy according to the center distribution of the inner and outer iris circles. Extensive experiments on four challenging iris datasets show that our method achieves excellent iris-localization performance; in particular, it achieves 84.02% box IoU and 89.15% mask IoU on NICE-II. On the three sub-datasets of MICHE, our method achieves 74.06% average box IoU, surpassing existing methods by 4.64%.
(This article belongs to the Special Issue Sensors for Biometric Recognition and Authentication)
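
A possible decoding step for such an anchor-free double-circle detector is sketched below; the tensor layout (a shared center heatmap plus a two-channel radius map) is an assumption for illustration, not the paper's actual head design.

```python
import numpy as np

def decode_double_circle(center_heatmap, radii_map):
    """Assumed layout: center_heatmap is (H, W); radii_map is (2, H, W) holding
    pupil and iris radii regressed at each location. Take the heatmap peak as
    the shared center and read both radii there."""
    y, x = np.unravel_index(np.argmax(center_heatmap), center_heatmap.shape)
    r_pupil, r_iris = radii_map[:, y, x]
    return (x, y), float(r_pupil), float(r_iris)
```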

19 pages, 4997 KiB  
Article
A New Species of Proctoporus (Reptilia, Gymnophthalmidae, Cercosaurinae) from the Puna of the Otishi National Park in Peru
by Edgar Lehr, Juan C. Cusi, Maura I. Fernandez, Ricardo J. Vera and Alessandro Catenazzi
Taxonomy 2023, 3(1), 10-28; https://doi.org/10.3390/taxonomy3010002 - 31 Dec 2022
Cited by 1 | Viewed by 9035
Abstract
We describe a new species of Proctoporus from the scientifically unexplored southern sector of the Otishi National Park (Region Cusco) in Peru, on the basis of molecular and morphological characters. Seven type specimens were obtained from six localities between 3241 and 3269 m a.s.l. within a radius of ca. 1.5 km in a Puna valley. Nine adult specimens (four males, five females) from Chiquintirca (Region Ayacucho, ca. 85 km airline from the type locality) are considered referred specimens. Males of the new species have a snout–vent length of 41.3–53.9 mm (x̄ = 46.7, n = 6); females have a snout–vent length of 43.6–52.6 mm (x̄ = 48.1, n = 8). The new species has striated dorsal scales, four supraoculars, four anterior supralabials, loreal and prefrontal scales absent, two pairs of genials (rarely one or three), three rows of pregulars, and five to seven femoral pores in males (absent in females). Sexual dimorphism is evident in the ventral coloration: males have the neck, chest, and belly dark gray to black, whereas females have them pale gray with a diffuse dark gray fleck in the center of each scale; both sexes have an orange iris with a fringed pupil.

13 pages, 4830 KiB  
Article
Robust Iris-Localization Algorithm in Non-Cooperative Environments Based on the Improved YOLO v4 Model
by Qi Xiong, Xinman Zhang, Xingzhu Wang, Naosheng Qiao and Jun Shen
Sensors 2022, 22(24), 9913; https://doi.org/10.3390/s22249913 - 16 Dec 2022
Cited by 9 | Viewed by 2836
Abstract
Iris localization in non-cooperative environments is challenging and essential for accurate iris recognition. Motivated by the traditional iris-localization algorithm and the robustness of the YOLO model, we propose a novel iris-localization algorithm. First, we design a novel iris detector with a modified you only look once v4 (YOLO v4) model, from which we approximate the position of the pupil center. Then, we use a modified integro-differential operator to precisely locate the inner and outer iris boundaries. Experimental results show that iris-detection accuracy reaches 99.83% with the modified YOLO v4 model, higher than that of a traditional YOLO v4 model. The accuracy in locating the inner and outer boundaries of the iris without glasses reaches 97.72% at a short distance and 98.32% at a long distance; with glasses, the corresponding accuracies are 93.91% and 84%, much higher than those of the traditional Daugman algorithm. Extensive experiments conducted on multiple datasets demonstrate the effectiveness and robustness of our method for iris localization in non-cooperative environments.
(This article belongs to the Collection Deep Learning in Biomedical Informatics and Healthcare)
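
The integro-differential step can be pictured as searching, around the detected pupil center, for the radius at which the average intensity along a circle changes fastest. A simplified sketch under that reading (fixed center, no Gaussian smoothing):

```python
import numpy as np

def integro_differential_radius(gray, cx, cy, r_min, r_max, n_angles=64):
    """Simplified Daugman-style operator: scan radii around (cx, cy) and
    return the radius with the sharpest change in mean circular intensity."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.arange(r_min, r_max)
    means = []
    for r in radii:
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        means.append(gray[ys, xs].mean())
    grad = np.abs(np.diff(means))
    return int(radii[np.argmax(grad) + 1])  # radius at the strongest boundary
```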

15 pages, 7878 KiB  
Article
A Human-Computer Control System Based on Intelligent Recognition of Eye Movements and Its Application in Wheelchair Driving
by Wenping Luo, Jianting Cao, Kousuke Ishikawa and Dongying Ju
Multimodal Technol. Interact. 2021, 5(9), 50; https://doi.org/10.3390/mti5090050 - 28 Aug 2021
Cited by 27 | Viewed by 6060
Abstract
This paper presents a practical human-computer interaction system for wheelchair motion through eye tracking and eye-blink detection. In this system, the pupil is extracted from the eye image after binarization, and its center is localized to capture the trajectory of eye movement and determine the direction of eye gaze. Meanwhile, convolutional neural networks are built for feature extraction and classification of open-eye and closed-eye images, with features extracted from multiple individual images of open- and closed-eye states serving as training input to the system. As an application of this human-computer interaction control system, experimental validation was carried out on a modified wheelchair, and the experimental results show the proposed method to be effective and reliable.
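
The abstract does not give the network layout, so the following toy open/closed-eye classifier is only a sketch of the kind of CNN described; the layer sizes and the 32x32 crop size are assumptions.

```python
import torch
import torch.nn as nn

class BlinkNet(nn.Module):
    """Toy open/closed-eye classifier in the spirit described above."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # open vs. closed

    def forward(self, x):  # x: (N, 1, 32, 32) grayscale eye crops
        return self.classifier(self.features(x).flatten(1))
```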

17 pages, 6050 KiB  
Article
A Pupil Segmentation Algorithm Based on Fuzzy Clustering of Distributed Information
by Kemeng Bai, Jianzhong Wang and Hongfeng Wang
Sensors 2021, 21(12), 4209; https://doi.org/10.3390/s21124209 - 19 Jun 2021
Cited by 7 | Viewed by 3843
Abstract
Pupil segmentation is critical for line-of-sight estimation based on the pupil center method. Due to noise and individual differences in human eyes, the quality of eye images often varies, making pupil segmentation difficult. In this paper, we propose a pupil segmentation method based on fuzzy clustering of distributed information. The method first preprocesses the original eye image to remove features such as eyebrows and shadows and to highlight the pupil area. A Gaussian model is then introduced into the global distribution information to enhance the fuzzy classification membership of the local neighborhood, and an adaptive local window filter that fuses local spatial and intensity information is proposed to suppress image noise while preserving the edge details of the pupil. Finally, the intensity histogram of the filtered image is used for fast clustering to obtain the cluster center of the pupil, and this binarization step segments the pupil for subsequent pupil localization. Experimental results show that the method has high segmentation accuracy, sensitivity, and specificity. It can accurately segment the pupil in the presence of interference factors such as light spots, light reflection, and contrast differences at the pupil edge, an important contribution to improving the stability and accuracy of line-of-sight tracking.
(This article belongs to the Section Intelligent Sensors)
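
The final fast-clustering step can be approximated by two-class clustering on the intensity histogram of the filtered image. The sketch below uses a plain k-means-style update rather than the paper's fuzzy memberships:

```python
import numpy as np

def pupil_mask_by_histogram_clustering(gray, iters=20):
    """Two-class clustering on the intensity histogram, then binarization at
    the midpoint of the cluster centers to isolate the dark pupil class."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    levels = np.arange(256)
    c_dark, c_bright = 30.0, 200.0  # arbitrary initial centers
    for _ in range(iters):
        to_dark = np.abs(levels - c_dark) <= np.abs(levels - c_bright)
        if hist[to_dark].sum() > 0:
            c_dark = np.average(levels[to_dark], weights=hist[to_dark])
        if hist[~to_dark].sum() > 0:
            c_bright = np.average(levels[~to_dark], weights=hist[~to_dark])
    return gray <= (c_dark + c_bright) / 2  # boolean pupil mask
```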

13 pages, 3314 KiB  
Article
Low-Complexity Pupil Tracking for Sunglasses-Wearing Faces for Glasses-Free 3D HUDs
by Dongwoo Kang and Hyun Sung Chang
Appl. Sci. 2021, 11(10), 4366; https://doi.org/10.3390/app11104366 - 11 May 2021
Cited by 7 | Viewed by 3385
Abstract
This study proposes a pupil-tracking method applicable to drivers both with and without sunglasses on, which has greater compatibility with augmented reality (AR) three-dimensional (3D) head-up displays (HUDs). Performing real-time pupil localization and tracking is complicated by drivers wearing facial accessories such as masks, caps, or sunglasses. The proposed method fulfills two key requirements: low complexity and algorithm performance. Our system assesses both bare and sunglasses-wearing faces by first classifying images according to these modes and then assigning the appropriate eye tracker. For bare faces with unobstructed eyes, we applied our previous regression-algorithm-based method that uses scale-invariant feature transform features. For eyes occluded by sunglasses, we propose an eye position estimation method: our eye tracker uses nonoccluded face area tracking and a supervised regression-based pupil position estimation method to locate pupil centers. Experiments showed that the proposed method achieved high accuracy and speed, with a precision error of <10 mm in <5 ms for bare and sunglasses-wearing faces for both a 2.5 GHz CPU and a commercial 2.0 GHz CPU vehicle-embedded system. Coupled with its performance, the low CPU consumption (10%) demonstrated by the proposed algorithm highlights its promise for implementation in AR 3D HUD systems.
(This article belongs to the Special Issue Machine Perception in Intelligent Systems)
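
The two-mode design reduces to a classify-then-dispatch loop; a minimal sketch, with all four callables standing in for the paper's components:

```python
def track_pupils(frame, classify_mode, bare_tracker, sunglasses_tracker):
    """Classify the face image as 'bare' or 'sunglasses', then hand it to the
    matching eye tracker. Placeholder callables, not the published modules."""
    mode = classify_mode(frame)
    tracker = bare_tracker if mode == "bare" else sunglasses_tracker
    return tracker(frame)  # -> estimated pupil centers
```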

18 pages, 22745 KiB  
Article
Motion Tracking of Iris Features to Detect Small Eye Movements
by Aayush K. Chaudhary and Jeff B. Pelz
J. Eye Mov. Res. 2019, 12(6), 1-18; https://doi.org/10.16910/jemr.12.6.4 - 5 Apr 2019
Cited by 14 | Viewed by 180
Abstract
The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence or even the existence of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). As current methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye tracking methodology which can reliably detect small eye movements over 0.2 degrees (12 arcmins) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. We provide a more robust, detailed record of miniature eye movements by relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as pupil center and corneal reflection), which are sensitive to noise and drift.
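
One way to realize velocity-from-iris-features tracking is sparse optical flow on iris texture points; the sketch below uses pyramidal Lucas-Kanade as a stand-in for the authors' feature tracker, with an assumed pixels-per-degree calibration.

```python
import cv2
import numpy as np

def iris_feature_velocity(prev_gray, cur_gray, iris_points, fps, px_per_deg):
    """Track iris texture features between frames and report the median
    angular velocity. iris_points: (N, 1, 2) float32 feature locations."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  iris_points, None)
    ok = status.ravel() == 1
    disp = np.linalg.norm((new_pts - iris_points)[ok], axis=2)  # pixels
    return float(np.median(disp)) / px_per_deg * fps  # degrees per second
```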

20 pages, 1416 KiB  
Article
In the Eye of the Deceiver: Analyzing Eye Movements as a Cue to Deception
by Diana Borza, Razvan Itu and Radu Danescu
J. Imaging 2018, 4(10), 120; https://doi.org/10.3390/jimaging4100120 - 16 Oct 2018
Cited by 22 | Viewed by 10124
Abstract
Deceit occurs in daily life and, even from an early age, children can successfully deceive their parents. Therefore, numerous books and psychological studies have been published to help people decipher the facial cues to deceit. In this study, we tackle the problem of deceit detection by analyzing eye movements: blinks, saccades, and gaze direction. Recent psychological studies have shown that the non-visual saccadic eye movement rate is higher when people lie. We propose a fast and accurate framework for eye tracking and eye movement recognition and analysis. The proposed system tracks the position of the iris, as well as the eye corners (the outer shape of the eye). Next, in an offline analysis stage, the trajectory of these eye features is analyzed in order to recognize and measure various cues that can be used as indicators of deception: the blink rate, the gaze direction, and the saccadic eye movement rate. On the task of iris center localization, the method achieves within-pupil localization in 91.47% of cases. For blink localization, we obtained an accuracy of 99.3% on the difficult EyeBlink8 dataset. In addition, we propose a novel metric, the normalized blink rate deviation, to spot deceitful behavior based on blink rate. Using this metric and a simple decision stump, the deceitful answers from the Silesian Face database were recognized with an accuracy of 96.15%.
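
The abstract names but does not define the normalized blink rate deviation; one plausible form, offered purely as an illustration, is the baseline-normalized deviation below, thresholded by a decision stump.

```python
def normalized_blink_rate_deviation(blink_rate, baseline_rate):
    """Hypothetical form of the metric: deviation of the observed blink rate
    from the subject's baseline, normalized by that baseline."""
    return (blink_rate - baseline_rate) / baseline_rate

# A decision stump then flags an answer as deceitful when the
# deviation exceeds a threshold learned from labeled recordings.
```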

24 pages, 7785 KiB  
Article
Real-Time Detection and Measurement of Eye Features from Color Images
by Diana Borza, Adrian Sergiu Darabant and Radu Danescu
Sensors 2016, 16(7), 1105; https://doi.org/10.3390/s16071105 - 16 Jul 2016
Cited by 19 | Viewed by 11043
Abstract
The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage, the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage, the external shape of the eye (the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly available database.
(This article belongs to the Special Issue Non-Contact Sensing)
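
The radial-gradient-projection idea for the iris radius can be sketched as accumulating the radial component of the image gradient along circles of growing radius around the detected pupil center; this is an interpretation of the abstract, not the authors' code.

```python
import numpy as np

def iris_radius_by_radial_projection(gray, cx, cy, r_min, r_max, n_angles=90):
    """Pick the radius whose circle has the strongest radial gradient response."""
    gy, gx = np.gradient(gray.astype(float))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    best_r, best_score = r_min, -np.inf
    for r in range(r_min, r_max):
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        # project the gradient onto the outward radial direction
        score = abs(np.sum(gx[ys, xs] * np.cos(thetas) + gy[ys, xs] * np.sin(thetas)))
        if score > best_score:
            best_r, best_score = r, score
    return best_r
```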
