Search Results (117)

Search Parameters:
Keywords = meanings of gaze

43 pages, 190510 KiB  
Article
From Viewing to Structure: A Computational Framework for Modeling and Visualizing Visual Exploration
by Kuan-Chen Chen, Chang-Franw Lee, Teng-Wen Chang, Cheng-Gang Wang and Jia-Rong Li
Appl. Sci. 2025, 15(14), 7900; https://doi.org/10.3390/app15147900 - 15 Jul 2025
Viewed by 277
Abstract
This study proposes a computational framework that transforms eye-tracking analysis from statistical description to cognitive structure modeling, aiming to reveal the organizational features embedded in the viewing process. Using the designers’ observation of a traditional Chinese landscape painting as an example, the study draws on the goal-oriented nature of design thinking to suggest that such visual exploration may exhibit latent structural tendencies, reflected in patterns of fixation and transition. Rather than focusing on traditional fixation hotspots, our four-dimensional framework (Region, Relation, Weight, Time) treats viewing behavior as structured cognitive networks. To operationalize this framework, we developed a data-driven computational approach that integrates fixation coordinate transformation, K-means clustering, extremum point detection, and linear interpolation. These techniques identify regions of concentrated visual attention and define their spatial boundaries, allowing for the modeling of inter-regional relationships and cognitive organization among visual areas. An adaptive buffer zone method is further employed to quantify the strength of connections between regions and to delineate potential visual nodes and transition pathways. Three design-trained participants were invited to observe the same painting while performing a think-aloud task, with one participant selected for the detailed demonstration of the analytical process. The framework’s applicability across different viewers was validated through consistent structural patterns observed across all three participants, while simultaneously revealing individual differences in their visual exploration strategies. These findings demonstrate that the proposed framework provides a replicable and generalizable method for systematically analyzing viewing behavior across individuals, enabling rapid identification of both common patterns and individual differences in visual exploration. This approach opens new possibilities for discovering structural organization within visual exploration data and analyzing goal-directed viewing behaviors. Although this study focuses on method demonstration, it proposes a preliminary hypothesis that designers’ gaze structures are significantly more clustered and hierarchically organized than those of novices, providing a foundation for future confirmatory testing. Full article
(This article belongs to the Special Issue New Insights into Computer Vision and Graphics)
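
The fixation-clustering step this framework describes can be illustrated with a minimal sketch: K-means over normalized fixation coordinates, then a per-region attention weight from the share of fixations assigned to each cluster. The synthetic data and the fixed k below are illustrative assumptions, not the paper's data-driven region extraction.

```python
# Hedged sketch: group fixation coordinates into candidate attention regions with
# K-means, as one step of the kind the framework describes. Synthetic data, fixed k.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
fixations = rng.uniform(0.0, 1.0, size=(300, 2))    # normalized (x, y) fixation points

k = 5                                                # illustrative region count
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(fixations)

region_centres = km.cluster_centers_                       # candidate region centres
region_weight = np.bincount(km.labels_) / len(fixations)   # share of fixations per region
print(region_centres)
print(region_weight)
```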

19 pages, 1779 KiB  
Article
Through the Eyes of the Viewer: The Cognitive Load of LLM-Generated vs. Professional Arabic Subtitles
by Hussein Abu-Rayyash and Isabel Lacruz
J. Eye Mov. Res. 2025, 18(4), 29; https://doi.org/10.3390/jemr18040029 - 14 Jul 2025
Viewed by 474
Abstract
As streaming platforms adopt artificial intelligence (AI)-powered subtitle systems to satisfy global demand for instant localization, the cognitive impact of these automated translations on viewers remains largely unexplored. This study used a web-based eye-tracking protocol to compare the cognitive load that GPT-4o-generated Arabic subtitles impose with that of professional human translations among 82 native Arabic speakers who viewed a 10 min episode (“Syria”) from the BBC comedy drama series State of the Union. Participants were randomly assigned to view the same episode with either professionally produced Arabic subtitles (Amazon Prime’s human translations) or machine-generated GPT-4o Arabic subtitles. In a between-subjects design, with English proficiency entered as a moderator, we collected fixation count, mean fixation duration, gaze distribution, and attention concentration (K-coefficient) as indices of cognitive processing. GPT-4o subtitles raised cognitive load on every metric; viewers produced 48% more fixations in the subtitle area, recorded 56% longer fixation durations, and spent 81.5% more time reading the automated subtitles than the professional subtitles. The subtitle area K-coefficient tripled (0.10 to 0.30), a shift from ambient scanning to focal processing. Viewers with advanced English proficiency showed the largest disruptions, which indicates that higher linguistic competence increases sensitivity to subtle translation shortcomings. These results challenge claims that large language models (LLMs) lighten viewer burden; despite fluent surface quality, GPT-4o subtitles demand far more cognitive resources than expert human subtitles and therefore reinforce the need for human oversight in audiovisual translation (AVT) and media accessibility. Full article
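
The attention-concentration K-coefficient reported here is commonly computed (Krejtz et al.) as the mean difference between the standardized duration of each fixation and the standardized amplitude of the saccade that follows it. The sketch below assumes that formulation; the numbers are made up for illustration.

```python
# Hedged sketch of the ambient/focal K coefficient used as a cognitive-load index,
# assuming the common formulation: K = mean_i [ z(dur_i) - z(next_saccade_amp_i) ].
# K > 0 suggests focal processing, K < 0 ambient scanning.
import numpy as np

def k_coefficient(durations_ms, saccade_amplitudes_deg):
    d = (durations_ms - durations_ms.mean()) / durations_ms.std()
    a = (saccade_amplitudes_deg - saccade_amplitudes_deg.mean()) / saccade_amplitudes_deg.std()
    return float(np.mean(d - a))

durations = np.array([180.0, 220.0, 260.0, 300.0, 240.0])   # illustrative values, ms
amplitudes = np.array([4.0, 2.5, 1.8, 1.2, 2.0])            # illustrative values, deg
print(round(k_coefficient(durations, amplitudes), 3))
```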

22 pages, 792 KiB  
Article
Childhood Heritage Languages: A Tangier Case Study
by Ariadna Saiz Mingo
Languages 2025, 10(7), 168; https://doi.org/10.3390/languages10070168 - 9 Jul 2025
Viewed by 388
Abstract
Through the testimony of a Tangier female citizen who grew up in the “prolific multilingual Spanish-French-Darija context of international Tangier”, this article analyzes the web of beliefs projected onto both the inherited and local languages within her linguistic repertoire. Starting from the daily realities in which she was immersed and the social networks that she formed, we focus on the representations of communication and her affective relationship with the host societies. The analysis starts from the most immediate domestic context in which Spanish, in its variant Jaquetía (a dialect of Judeo-Spanish language spoken by the Sephardic Jews of northern Morocco) was displaced by French as the language of instruction. After an initial episode of reversible attrition, we witnessed various phenomena of translanguaging within the host society. Following the binomial “emotion-interrelational space”, we seek to discern the affective contexts associated with the languages of a multilingual childhood, and which emotional links are vital for maintaining inherited ones. This shift towards the valuation of the affective culture implies a reorientation of the gaze towards everyday experiences as a means of research in contexts of language contact. Full article

18 pages, 4185 KiB  
Article
An Empirical Study on Pointing Gestures Used in Communication in Household Settings
by Tymon Kukier, Alicja Wróbel, Barbara Sienkiewicz, Julia Klimecka, Antonio Galiza Cerdeira Gonzalez, Paweł Gajewski and Bipin Indurkhya
Electronics 2025, 14(12), 2346; https://doi.org/10.3390/electronics14122346 - 8 Jun 2025
Viewed by 489
Abstract
Gestures play an integral role in human communication. Our research aims to develop a gesture understanding system that allows for better interpretation of human instructions in household robotics settings. We conducted an experiment with 34 participants who used pointing gestures to teach concepts to an assistant. Gesture data were analyzed using manual annotations (MAXQDA) and the computational methods of pose estimation and k-means clustering. The study revealed that participants tend to maintain consistent pointing styles, with one-handed pointing and index finger gestures being the most common. Gaze and pointing often co-occur, as do leaning forward and pointing. Using our gesture categorization algorithm, we analyzed gesture information values. As the experiment progressed, the information value of gestures remained stable, although the trends varied between participants and were associated with factors such as age and gender. These findings underscore the need for gesture recognition systems to balance generalization with personalization for more effective human–robot interaction. Full article
(This article belongs to the Special Issue Applications of Computer Vision, 3rd Edition)
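
As a rough illustration of how the reported co-occurrence of gaze and pointing might be quantified from pose-estimation output, the sketch below builds a pointing ray from two arm keypoints and compares it with a gaze direction via cosine similarity. The keypoint values and the agreement threshold are assumptions for illustration, not the study's annotation scheme.

```python
# Hedged sketch: derive a pointing direction from two arm keypoints (elbow -> wrist)
# and test whether it roughly agrees with the gaze direction. Values are illustrative.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

elbow = np.array([0.10, 1.20, 0.40])       # 3D keypoints from a pose estimator (assumed)
wrist = np.array([0.35, 1.15, 0.80])
gaze_dir = unit(np.array([0.55, -0.10, 0.85]))

pointing_dir = unit(wrist - elbow)
cosine = float(np.dot(pointing_dir, gaze_dir))
co_occurs = cosine > 0.8                    # illustrative agreement threshold
print(pointing_dir, round(cosine, 3), co_occurs)
```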

21 pages, 1696 KiB  
Article
Cognitive Insights into Museum Engagement: A Mobile Eye-Tracking Study on Visual Attention Distribution and Learning Experience
by Wenjia Shi, Kenta Ono and Liang Li
Electronics 2025, 14(11), 2208; https://doi.org/10.3390/electronics14112208 - 29 May 2025
Viewed by 873
Abstract
Recent advancements in Mobile Eye-Tracking (MET) technology have enabled the detailed examination of visitors’ embodied visual behaviors as they navigate exhibition spaces. This study employs MET to investigate visual attention patterns in an archeological museum, with a particular focus on identifying “hotspots” of attention. Through a multi-phase research design, we explore the relationship between visitor gaze behavior and museum learning experiences in a real-world setting. Using three key eye movement metrics—Time to First Fixation (TFF), Average Fixation Duration (AFD), and Total Fixation Duration (TFD), we analyze the distribution of visual attention across predefined Areas of Interest (AOIs). Time to First Fixation varied substantially by element, occurring most rapidly for artifacts and most slowly for labels, while video screens showed the shortest mean latency but greatest inter-individual variability, reflecting sequential exploration and heterogeneous strategies toward dynamic versus static media. Total Fixation Duration was highest for video screens and picture panels, intermediate yet variable for artifacts and text panels, and lowest for labels, indicating that dynamic and pictorial content most effectively sustain attention. Finally, Average Fixation Duration peaked on artifacts and labels, suggesting in-depth processing of descriptive elements, and it was shortest on video screens, consistent with rapid, distributed fixations in response to dynamic media. The results provide novel insights into the spatial and contextual factors that influence visitor engagement and knowledge acquisition in museum environments. Based on these findings, we discuss strategic implications for museum research and propose practical recommendations for optimizing exhibition design to enhance visitor experience and learning outcomes. Full article
(This article belongs to the Special Issue New Advances in Human-Robot Interaction)
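
The three metrics used here are straightforward to compute from a list of fixations tagged with an AOI label. A minimal sketch, assuming each fixation carries a start time, a duration, and an AOI name; the field names and values are invented for illustration.

```python
# Hedged sketch: Time to First Fixation (TFF), Average Fixation Duration (AFD) and
# Total Fixation Duration (TFD) per Area of Interest, from a flat fixation table.
from collections import defaultdict

fixations = [  # (start_time_s, duration_s, aoi) -- illustrative records
    (1.2, 0.30, "artifact"), (1.8, 0.25, "label"),
    (2.4, 0.40, "artifact"), (3.1, 0.20, "video"),
]

per_aoi = defaultdict(list)
for start, dur, aoi in fixations:
    per_aoi[aoi].append((start, dur))

for aoi, items in per_aoi.items():
    tff = min(start for start, _ in items)   # time to first fixation
    tfd = sum(dur for _, dur in items)       # total fixation duration
    afd = tfd / len(items)                   # average fixation duration
    print(f"{aoi}: TFF={tff:.2f}s  TFD={tfd:.2f}s  AFD={afd:.2f}s")
```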

17 pages, 18945 KiB  
Article
Collaborative Robot Control Based on Human Gaze Tracking
by Francesco Di Stefano, Alice Giambertone, Laura Salamina, Matteo Melchiorre and Stefano Mauro
Sensors 2025, 25(10), 3103; https://doi.org/10.3390/s25103103 - 14 May 2025
Viewed by 595
Abstract
Gaze tracking is gaining relevance in collaborative robotics as a means to enhance human–machine interaction by enabling intuitive and non-verbal communication. This study explores the integration of human gaze into collaborative robotics by demonstrating the possibility of controlling a robotic manipulator with a practical and non-intrusive setup made up of a vision system and gaze-tracking software. After presenting a comparison between the major available systems on the market, OpenFace 2.0 was selected as the primary gaze-tracking software and integrated with a UR5 collaborative robot through a MATLAB-based control framework. Validation was conducted through real-world experiments, analyzing the effects of raw and filtered gaze data on system accuracy and responsiveness. The results indicate that gaze tracking can effectively guide robot motion, though signal processing significantly impacts responsiveness and control precision. This work establishes a foundation for future research on gaze-assisted robotic control, highlighting its potential benefits and challenges in enhancing human–robot collaboration. Full article
(This article belongs to the Special Issue Advanced Robotic Manipulators and Control Applications)
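
The contrast between raw and filtered gaze data can be illustrated with a simple exponential smoothing stage applied to the gaze target before it is sent to the robot. The filter choice and the gain value are assumptions for illustration, not the paper's processing pipeline.

```python
# Hedged sketch: smooth a noisy 2D gaze target with an exponential moving average
# before using it as a robot motion reference. Alpha trades responsiveness for stability.
import numpy as np

def ema_filter(samples, alpha=0.2):
    filtered = [np.asarray(samples[0], dtype=float)]
    for s in samples[1:]:
        filtered.append(alpha * np.asarray(s, dtype=float) + (1 - alpha) * filtered[-1])
    return np.array(filtered)

raw_gaze = [(0.50, 0.20), (0.62, 0.18), (0.48, 0.25), (0.55, 0.22)]  # illustrative samples
print(ema_filter(raw_gaze))
```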

42 pages, 47882 KiB  
Article
Product Engagement Detection Using Multi-Camera 3D Skeleton Reconstruction and Gaze Estimation
by Matus Tanonwong, Yu Zhu, Naoya Chiba and Koichi Hashimoto
Sensors 2025, 25(10), 3031; https://doi.org/10.3390/s25103031 - 11 May 2025
Viewed by 821
Abstract
Product engagement detection in retail environments is critical for understanding customer preferences through nonverbal cues such as gaze and hand movements. This study presents a system leveraging a 360-degree top-view fisheye camera combined with two perspective cameras, the only sensors required for deployment, effectively capturing subtle interactions even under occlusion or distant camera setups. Unlike conventional image-based gaze estimation methods that are sensitive to background variations and require capturing a person’s full appearance, raising privacy concerns, our approach utilizes a novel Transformer-based encoder operating directly on 3D skeletal keypoints. This innovation significantly reduces privacy risks by avoiding personal appearance data and benefits from ongoing advancements in accurate skeleton estimation techniques. Experimental evaluation in a simulated retail environment demonstrates that our method effectively identifies critical gaze-object and hand-object interactions, reliably detecting customer engagement prior to product selection. Despite yielding slightly higher mean angular errors in gaze estimation compared to a recent image-based method, the Transformer-based model achieves comparable performance in gaze-object detection. Its robustness, generalizability, and inherent privacy preservation make it particularly suitable for deployment in practical retail scenarios such as convenience stores, supermarkets, and shopping malls, highlighting its superiority in real-world applicability. Full article
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2025)
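
A hedged sketch of the general shape of a Transformer encoder that operates on 3D skeletal keypoints rather than images: each joint becomes a token and a small head regresses a gaze direction. The joint count, layer sizes, and yaw/pitch parameterization below are assumptions, not the paper's architecture.

```python
# Hedged sketch: a small Transformer encoder over 3D skeletal keypoints that regresses
# a gaze direction (yaw, pitch). Dimensions and output head are illustrative only.
import torch
import torch.nn as nn

class SkeletonGazeEncoder(nn.Module):
    def __init__(self, n_joints=17, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(3, d_model)            # per-joint (x, y, z) -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)             # yaw, pitch in radians

    def forward(self, joints):                        # joints: (batch, n_joints, 3)
        tokens = self.embed(joints)
        encoded = self.encoder(tokens)
        return self.head(encoded.mean(dim=1))         # pool over joints

model = SkeletonGazeEncoder()
print(model(torch.randn(4, 17, 3)).shape)             # -> torch.Size([4, 2])
```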

25 pages, 1517 KiB  
Article
Towards Structured Gaze Data Classification: The Gaze Data Clustering Taxonomy (GCT)
by Yahdi Siradj, Kiki Maulana Adhinugraha and Eric Pardede
Multimodal Technol. Interact. 2025, 9(5), 42; https://doi.org/10.3390/mti9050042 - 3 May 2025
Viewed by 654
Abstract
Gaze data analysis plays a crucial role in understanding human visual attention and behaviour. However, raw gaze data is often noisy and lacks inherent structure, making interpretation challenging. Therefore, preprocessing techniques such as classification are essential to extract meaningful patterns and improve the reliability of gaze-based analysis. This study introduces the Gaze Data Clustering Taxonomy (GCT), a novel approach that categorises gaze data into structured clusters to improve its reliability and interpretability. GCT classifies gaze data based on cluster count, target presence, and spatial–temporal relationships, allowing for more precise gaze-to-target association. We utilise several machine learning techniques, such as k-NN, k-Means, and DBScan, to apply the taxonomy to a Random Saccade Task dataset, demonstrating its effectiveness in gaze classification. Our findings highlight how clustering provides a structured approach to gaze data preprocessing by distinguishing meaningful patterns from unreliable data. Full article
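
Of the three algorithms named, DBSCAN is the one that can also flag low-density samples as noise, which matches the taxonomy's concern with separating meaningful patterns from unreliable data. A minimal sketch under that reading; the eps/min_samples values and synthetic data are illustrative assumptions.

```python
# Hedged sketch: density-based clustering of gaze samples with DBSCAN, which labels
# low-density points as noise (-1) instead of forcing them into a cluster.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
on_target = rng.normal(loc=(0.3, 0.7), scale=0.02, size=(100, 2))   # dense gaze cluster
stray = rng.uniform(0.0, 1.0, size=(20, 2))                         # scattered samples
gaze = np.vstack([on_target, stray])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(gaze)
print("clusters:", set(labels) - {-1}, "noise points:", int((labels == -1).sum()))
```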

30 pages, 52057 KiB  
Article
A Study on Correlation of Depth Fixation with Distance Between Dual Purkinje Images and Pupil Size
by Jinyeong Ahn and Eui Chul Lee
Electronics 2025, 14(9), 1799; https://doi.org/10.3390/electronics14091799 - 28 Apr 2025
Viewed by 427
Abstract
In recent times, 3D eye tracking methods have been actively studied to utilize gaze information in various applications. As a result, there is growing interest in gaze depth estimation techniques. This study introduces a monocular method for estimating gaze depth using DPI distance and pupil size. We acquired right eye images from eleven subjects and at ten gaze depth levels ranging from 15 cm to 60 cm at intervals of 5 cm. We used a camera equipped with an infrared LED to capture the images. We applied a contour-based algorithm to detect the first Purkinje image and pupil, then used a template matching algorithm for the fourth Purkinje image. Using the detected features, we calculated the pupil size and DPI distance. We trained a multiple linear regression model on data from eight subjects, achieving an R2 value of 0.71 and a root mean squared error (RMSE) of 7.69 cm. This result indicates an approximate 3.15% reduction in error rate compared to the general linear regression model. Based on the results, we derived the following equation: depth fixation = 20.746 × DPI distance + 5.223 × pupil size + 16.495 × (DPI distance × pupil size) + 13.880. Our experiments confirmed that gaze depth can be effectively estimated from monocular images using DPI distance and pupil size. Full article
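
Plugging illustrative numbers into the reported regression shows how the interaction term enters the prediction. The input values below are arbitrary illustrations; the paper defines the units of the DPI-distance and pupil-size features, and only the predicted depth is in centimetres.

```python
# Hedged worked example of the regression reported in the abstract:
# depth_fixation = 20.746*DPI + 5.223*pupil + 16.495*(DPI*pupil) + 13.880
def depth_fixation(dpi_distance, pupil_size):
    return (20.746 * dpi_distance
            + 5.223 * pupil_size
            + 16.495 * (dpi_distance * pupil_size)
            + 13.880)

print(depth_fixation(0.8, 1.2))   # illustrative inputs -> predicted gaze depth (cm)
```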

19 pages, 865 KiB  
Systematic Review
A Comparative Study on the Integration of Eye-Tracking in Recommender Systems
by Osamah M. Al-Omair
Sensors 2025, 25(9), 2692; https://doi.org/10.3390/s25092692 - 24 Apr 2025
Viewed by 805
Abstract
This study investigated the integration of eye tracking technologies in recommender systems, focusing on their potential to enhance personalization, accuracy, and user engagement. Eye tracking metrics, including fixation duration and gaze patterns, provide a non-intrusive means of capturing real-time user preferences, which can lead to more effective recommendations. Through a comprehensive comparison of current studies, this paper synthesizes findings on the impact of eye tracking across application domains such as e-commerce and media. The results indicate notable improvements in recommendation accuracy with the use of gaze-based feedback. However, limitations persist, including reliance on controlled environments, limited sample diversity, and the high cost of specialized eye tracking equipment. To address these challenges, this paper proposes a structured framework that systematically integrates eye tracking data into real-time recommendation generation. The framework consists of an Eye Tracking Module, a Preferences Module, and a Recommender Module, creating an adaptive recommendation process that continuously refines user preferences based on implicit gaze-based interactions. This novel approach enhances the adaptability of recommender systems by minimizing reliance on static user profiles. Future research directions include the integration of additional behavioral indicators and the development of accessible eye tracking tools to broaden real-world impact. Eye tracking shows substantial promise in advancing recommender systems but requires further refinement to achieve practical, scalable applications across diverse contexts. Full article
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)
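
One simple way a Preferences Module of the kind proposed could fold gaze signals into item scores is to re-weight a base relevance score by normalized dwell time. A sketch under that assumption; the weighting scheme, item names, and values are illustrative, not the paper's framework.

```python
# Hedged sketch: adjust base recommendation scores by implicit gaze feedback,
# here the normalized total fixation duration spent on each item.
def gaze_adjusted_scores(base_scores, dwell_ms, gaze_weight=0.5):
    max_dwell = max(dwell_ms.values()) or 1
    return {
        item: base_scores[item] * (1 + gaze_weight * dwell_ms.get(item, 0) / max_dwell)
        for item in base_scores
    }

base = {"item_a": 0.62, "item_b": 0.58, "item_c": 0.55}    # scores from the recommender
dwell = {"item_a": 120, "item_b": 2400, "item_c": 300}     # ms of gaze on each item
print(gaze_adjusted_scores(base, dwell))
```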

20 pages, 4055 KiB  
Article
An Efficient Gaze Control System for Kiosk-Based Embodied Conversational Agents in Multi-Party Conversations
by Sunghun Jung, Junyeong Kum and Myungho Lee
Electronics 2025, 14(8), 1592; https://doi.org/10.3390/electronics14081592 - 15 Apr 2025
Viewed by 697
Abstract
The adoption of kiosks in public spaces is steadily increasing, with a trend toward providing more natural user experiences through embodied conversational agents (ECAs). To achieve human-like interactions, ECAs should be able to appropriately gaze at the speaker. However, kiosks in public spaces often face challenges, such as ambient noise and overlapping speech from multiple people, making it difficult to accurately identify the speaker and direct the ECA’s gaze accordingly. In this paper, we propose a lightweight gaze control system that is designed to operate effectively within the resource constraints of kiosks and the noisy conditions common in public spaces. We first developed a speaker detection model that identifies the active speaker in challenging noise conditions using only a single camera and microphone. The proposed model achieved a 91.6% mean Average Precision (mAP) in active speaker detection and a 0.6% improvement over the state-of-the-art lightweight model (Light ASD) (as evaluated on the noise-augmented AVA-Speaker Detection dataset), while maintaining real-time performance. Building on this, we developed a gaze control system for ECAs that detects the dominant speaker in a group and directs the ECA’s gaze toward them using an algorithm inspired by real human turn-taking behavior. To evaluate the system’s performance, we conducted a user study with 30 participants, comparing the system to a baseline condition (i.e., a fixed forward gaze) and a human-controlled gaze. The results showed statistically significant improvements in social/co-presence and gaze naturalness compared to the baseline, with no significant difference between the system and human-controlled gazes. This suggests that our system achieves a level of social presence and gaze naturalness comparable to a human-controlled gaze. The participants’ feedback, which indicated no clear distinction between human- and model-controlled conditions, further supports the effectiveness of our approach. Full article
(This article belongs to the Special Issue AI Synergy: Vision, Language, and Modality)
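
The turn-taking-inspired gaze behaviour described here can be approximated with a simple rule: switch the agent's gaze target only after a new dominant speaker has held the floor for a minimum number of frames. The hold threshold and frame encoding below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: pick the ECA's gaze target from per-frame active-speaker detections,
# switching only after a new dominant speaker persists for `hold_frames` frames.
def gaze_targets(speaker_per_frame, hold_frames=15):
    current, candidate, streak, targets = None, None, 0, []
    for speaker in speaker_per_frame:
        if speaker == candidate:
            streak += 1
        else:
            candidate, streak = speaker, 1
        if candidate is not None and candidate != current and streak >= hold_frames:
            current = candidate
        targets.append(current)
    return targets

frames = ["A"] * 20 + ["B"] * 5 + ["A"] * 5 + ["B"] * 30   # detected speaker per frame
print(gaze_targets(frames)[-1])                             # -> "B"
```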

19 pages, 3977 KiB  
Article
To Stitch or Not to Stitch, That Is the Question: Multi-Gaze Eye Topography Stitching Versus Single-Shot Profilometry
by Wen-Pin Lin, Lo-Yu Wu, Wei-Ren Lin, Lynn White, Richard Wu, Arwa Fathy, Rami Alanazi, Jay Davies and Ahmed Abass
Photonics 2025, 12(4), 318; https://doi.org/10.3390/photonics12040318 - 28 Mar 2025
Viewed by 529
Abstract
Purpose: To evaluate whether corneal topography map stitching can fully substitute the traditional single-shot capture methods in clinical settings. Methods: This record review study involved the measurement of corneal surfaces from 38 healthy subjects using two instruments: the Medmont Meridia, which employs a stitching composite topography method, and the Eye Surface Profiler (ESP), a single-shot measurement device. Data were processed separately for right and left eyes, with multiple gaze directions captured by the Medmont device. Surface registration and geometric transformation estimation, including neighbouring cubic interpolation, were applied to assess the accuracy of stitched maps compared to single-shot measurements. Results: The study evaluated eye rotation angles and surface alignment between Medmont topography across various gaze directions and ESP scans. Close eye rotations were found in the right-gaze, left-gaze and up-gaze directions, with rotation angles of around 8°; however, the down-gaze angle was around 15°, almost twice other gaze rotation angles. Root mean squared error (RMSE) analysis revealed notable discrepancies, particularly in the right-, left-, and up-gaze directions, with errors reaching up to 98 µm compared to ESP scans. Additionally, significance analyses showed that surface area ratios highlighted considerable differences, especially in the up-gaze direction, where discrepancies reached 70% for both right and left eyes. Conclusions: Despite potential benefits, the findings highlight a significant mismatch between stitched and single-shot measured surfaces due to digital processing artefacts. Findings suggest that stitching techniques, in their current form, are not yet ready to substitute single-shot topography measurements fully. Although stitching helps fit large-diameter contact lenses, care should be taken regarding the central area, especially if utilising the stitched data for optimising optics or wavefront analysis. Full article
(This article belongs to the Special Issue Recent Advances in Biomedical Optics and Biophotonics)
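
The RMSE figures quoted (errors up to 98 µm) come down to a pointwise comparison between the stitched and single-shot elevation maps after registration. A minimal sketch of that comparison, with synthetic height maps standing in for the instrument data.

```python
# Hedged sketch: root mean squared error between two registered corneal elevation maps
# sampled on the same grid. The synthetic surfaces are stand-ins for instrument data.
import numpy as np

rng = np.random.default_rng(2)
single_shot = rng.normal(0.0, 0.010, size=(64, 64))             # elevation residuals, mm
stitched = single_shot + rng.normal(0.0, 0.050, size=(64, 64))  # plus stitching artefacts

rmse_mm = float(np.sqrt(np.mean((stitched - single_shot) ** 2)))
print(f"RMSE = {rmse_mm * 1000:.1f} µm")
```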

15 pages, 531 KiB  
Article
Differences in Gaze Behavior Between Male and Female Elite Handball Goalkeepers During Penalty Throws
by Wojciech Jedziniak, Krystian Panek, Piotr Lesiakowski, Beata Florkiewicz and Teresa Zwierko
Brain Sci. 2025, 15(3), 312; https://doi.org/10.3390/brainsci15030312 - 15 Mar 2025
Viewed by 937
Abstract
Background: Recent research suggests that an athlete’s gaze behavior plays a significant role in expert sport performance. However, there is a lack of studies investigating sex differences in gaze behavior during technical and tactical actions. Objectives: Therefore, the purpose of this study was to analyze the eye movements of elite female and male handball goalkeepers during penalty throws. Methods: In total, 40 handball goalkeepers participated in the study (female: n = 20; male: n = 20). Eye movements were recorded during a series of five penalty throws in real-time conditions. The number of fixations and dwell time, including quiet eye, for selected areas of interest were recorded using a mobile eye-tracking system. Results: Significant differences were found in quiet-eye duration between effective and ineffective goalkeeper interventions (females: mean difference (MD) = 92.26; p = 0.005; males: MD = 122.83; p < 0.001). Significant differences in gaze behavior between female and male handball goalkeepers were observed, specifically in the number of fixations and fixation duration on the selected areas of interest (AOIs). Male goalkeepers primarily observed the throwing upper arm AOI, the throwing forearm (MD = 15.522; p < 0.001), the throwing arm AOI (MD = 6.83; p < 0.001), and the ball (MD = 7.459; z = 3.47; p < 0.001), whereas female goalkeepers mainly observed the torso AOI (MD = 14.264; p < 0.001) and the head AOI (MD = 11.91; p < 0.001) of the throwing player. Conclusions: The results suggest that female goalkeepers’ gaze behavior is based on a relatively constant observation of body areas to recall task-specific information from memory, whilst male goalkeepers mainly observe moving objects in spatio-temporal areas. From a practical perspective, these results can be used to develop perceptual training programs tailored to athletes’ sex. Full article
(This article belongs to the Special Issue Advances in Assessment and Training of Perceptual-Motor Performance)

18 pages, 2455 KiB  
Article
Depth and Embodiment Being Present in Architectural Space as an Experience of Meaning
by Yael Canetti Yaffe and Edna Langenthal
Philosophies 2025, 10(2), 33; https://doi.org/10.3390/philosophies10020033 - 14 Mar 2025
Viewed by 1487
Abstract
Following philosopher Maurice Merleau-Ponty’s unique phenomenology of embodiment and his understanding of three-dimensional space as existential rather than geometric, the article claims that despite sophisticated algorithmic imaging tools, architectural space as a space of meaningful experience does not subject itself to both two-dimensional and three-dimensional representations and simulations. Merleau-Ponty’s phenomenology is instrumental in helping identify a “blind spot” in contemporary architecture design process. Our experience of built space is always far more saturated (with regard both to the input of the senses and our cultural and personal background) than any sophisticated tool of representation. This paper draws a direct link between the invention of linear perspective and the use of digital three-dimensional visualization and the popular opinion that these are reliable tools with which to create architecture. A phenomenological analysis of Beaubourg Square in Paris serves as a case study that reveals the basic difference between experiencing space from the point of view of the actual subjective body who is present in space and experiencing designed space by gazing at its representation on a two-dimensional screen. Relying more and more on computation in architectural design leads to a rational mathematical conception of architectural space, whereas the human body as the actual experiencing presence of this space is overlooked. This article claims that in cases of great architecture, such as Beaubourg Square in Paris, the lived-experience of the built space is also the experience of bodily presence, which is a unique mode of existential meaning, which cannot be simulated or represented. Full article

18 pages, 1223 KiB  
Article
GazeCapsNet: A Lightweight Gaze Estimation Framework
by Shakhnoza Muksimova, Yakhyokhuja Valikhujaev, Sabina Umirzakova, Jushkin Baltayev and Young Im Cho
Sensors 2025, 25(4), 1224; https://doi.org/10.3390/s25041224 - 17 Feb 2025
Cited by 1 | Viewed by 1591
Abstract
Gaze estimation is increasingly pivotal in applications spanning virtual reality, augmented reality, and driver monitoring systems, necessitating efficient yet accurate models for mobile deployment. Current methodologies often fall short, particularly in mobile settings, due to their extensive computational requirements or reliance on intricate pre-processing. Addressing these limitations, we present Mobile-GazeCapsNet, an innovative gaze estimation framework that harnesses the strengths of capsule networks and integrates them with lightweight architectures such as MobileNet v2, MobileOne, and ResNet-18. This framework not only eliminates the need for facial landmark detection but also significantly enhances real-time operability on mobile devices. Through the innovative use of Self-Attention Routing, GazeCapsNet dynamically allocates computational resources, thereby improving both accuracy and efficiency. Our results demonstrate that GazeCapsNet achieves competitive performance by optimizing capsule networks for gaze estimation through Self-Attention Routing (SAR), which replaces iterative routing with a lightweight attention-based mechanism, improving computational efficiency. Our results show that GazeCapsNet achieves state-of-the-art (SOTA) performance on several benchmark datasets, including ETH-XGaze and Gaze360, achieving a mean angular error (MAE) reduction of up to 15% compared to existing models. Furthermore, the model maintains a real-time processing capability of 20 milliseconds per frame while requiring only 11.7 million parameters, making it exceptionally suitable for real-time applications in resource-constrained environments. These findings not only underscore the efficacy and practicality of GazeCapsNet but also establish a new standard for mobile gaze estimation technologies. Full article
(This article belongs to the Section Sensor Networks)