Spatial Guidance Overrides Dynamic Saliency in VR: An Eye-Tracking Study on Gestalt Grouping Mechanisms and Visual Attention Patterns
Highlights
- Spatial nodes override dynamic saliency in guiding visual attention in VR.
- Continuous structures organize gaze paths more effectively than isolated dynamic elements.
- Proximity to the viewer’s body anchors initial attention in immersive environments.
- Similarity and common fate principles are ineffective without spatial relevance in VR.
- Eye tracking confirms that spatial cognitive intent dominates perceptual organization in VR.
Abstract
1. Introduction
2. Literature Review
2.1. Research Progress on Visual Perception of VR Films
2.2. Adaptive Correction of Gestalt Principles in VR
2.3. Empirical Eye-Tracking Evidence on Gestalt Principles
2.4. Research Hypotheses
3. Materials and Methods
3.1. Experimental Design
3.2. Participants
3.3. Experimental Procedure
- (1) Participants were briefed on the experimental procedure and instructed to terminate the experiment immediately if they experienced dizziness or discomfort. They were also informed of the purpose of the experiment and guided to pay particular attention to the narrative of the VR film.
- (2) Participants underwent visual calibration to ensure the precision of the eye-tracking data (see the validation sketch after this list).
- (3) Participants then viewed the film while eye-tracking data were collected concurrently.
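Calibration of the kind used in step (2) is commonly verified by comparing recorded gaze directions against known calibration-target directions and recalibrating if the angular error is too large. The sketch below is a minimal illustration of such a check under assumed data layouts (unit direction vectors, one row per validation sample); it is not the authors' pipeline, and the sample data are hypothetical.

```python
import numpy as np

def mean_angular_error(gaze_dirs: np.ndarray, target_dirs: np.ndarray) -> float:
    """Mean angular error (degrees) between gaze vectors and
    calibration-target direction vectors, one row per sample."""
    # Normalize defensively in case vectors are not exactly unit length.
    gaze = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    target = target_dirs / np.linalg.norm(target_dirs, axis=1, keepdims=True)
    # Angle between each gaze/target pair via the dot product.
    cosines = np.clip(np.sum(gaze * target, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cosines)).mean())

# Toy validation data: gaze nearly aligned with two calibration targets.
targets = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
gaze = np.array([[0.005, 0.0, 1.0], [0.105, 0.0, 1.0]])
print(f"mean angular error: {mean_angular_error(gaze, targets):.2f} deg")
# A study would recalibrate if this exceeded its chosen threshold.
```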
3.4. Experimental Equipment
3.5. Data Collection and Pre-Processing
3.6. Operationalization of Objectives and Hypotheses
4. Results
4.1. Applicability of Gestalt Principles
4.2. VR’s Impact on Grouping Efficiency
4.3. Correlating Subjective Experience with Grouping Effects
5. Discussion
5.1. Spatial Nodes as Target-Oriented Predictive Cognitive Anchor Points
5.2. Continuity: A Narrative Guidance Mechanism Based on Body Movements
5.3. Attention-Shifting Mechanism: From Near-Object Anchoring to Target Control
5.4. Practical Implications for VR Design
5.5. Research Limitations and Future Prospects
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Objective | Hypotheses | Gestalt Principle | Purpose |
|---|---|---|---|
| O1: Assess the applicability of Gestalt principles | H1 | Closure | Validate whether static spatial structures override dynamic saliency in guiding attention. |
| | H4 | Similarity | Test whether the similarity principle is ineffective in VR due to spatial discreteness. |
| | H5 | Common Fate | Investigate whether dynamic grouping requires cognitive intent matching in VR. |
| O2: Examine VR’s impact on grouping efficiency | H2 | Continuity | Quantify how spatial continuity guides gaze paths and suppresses irrelevant dynamics. |
| | H3 | Proximity | Confirm the proximity law’s role in initial environmental orientation. |
| O3: Correlate subjective experience with grouping effects | Implicit in all hypotheses | N/A | Link eye-tracking metrics to psychological closure (H1) and narrative certainty (H2). |
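For readers unfamiliar with the eye-tracking indicators these hypotheses rely on, the sketch below shows one conventional way to derive them from fixation records: total fixation duration (TFD) sums fixation durations within an area of interest (AOI), fixation count (FC) counts them, and time to first fixation (TTFF) is the onset of the earliest fixation on the AOI. The data structure and function names are illustrative assumptions, not the authors' analysis code.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float      # fixation start relative to stimulus onset
    duration_ms: float   # fixation duration
    aoi: str             # label of the AOI the fixation landed on

def aoi_metrics(fixations: list[Fixation], aoi: str) -> dict[str, float]:
    """TFD, FC, and TTFF for one area of interest."""
    hits = [f for f in fixations if f.aoi == aoi]
    return {
        "TFD_ms": sum(f.duration_ms for f in hits),   # total fixation duration
        "FC": len(hits),                              # fixation count
        # time to first fixation; NaN if the AOI was never fixated
        "TTFF_ms": min((f.onset_ms for f in hits), default=float("nan")),
    }

# Hypothetical fixation stream over two AOIs.
stream = [Fixation(120, 310, "AOI1"), Fixation(450, 180, "AOI2"),
          Fixation(700, 260, "AOI1")]
print(aoi_metrics(stream, "AOI1"))  # {'TFD_ms': 570, 'FC': 2, 'TTFF_ms': 120}
```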
| Category | Type | Sample Count | Percentage |
|---|---|---|---|
| Gender | Male | 25 | 59.5% |
| | Female | 17 | 40.5% |
| Age Range | 18–24 years old | 21 | 50.0% |
| | 24–30 years old | 15 | 35.7% |
| | 30–36 years old | 6 | 14.3% |
| Familiarity with VR Technology | Never did | 3 | 7.1% |
| | A little | 32 | 76.2% |
| | Very well | 7 | 16.7% |
| Familiarity with VR Films | Never did | 13 | 31.0% |
| | Occasionally | 26 | 61.9% |
| | Often | 3 | 7.1% |
| Eye-Tracking Indicators | AOI Comparison | Mean Difference | Std. Error | Significance | Cohen’s d | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|---|
| TFD | AOI1 vs. AOI2 | 5085.200 * | 1044.999 | <0.001 | 3.12 | 2316.89 | 7853.51 |
| | AOI1 vs. AOI4 | 5176.525 * | 985.241 | <0.001 | 3.45 | 2548.95 | 7804.10 |
| | AOI2 vs. AOI4 | 91.325 | 517.865 | 0.998 | 0.12 | −1274.30 | 1456.95 |
| FC | AOI1 vs. AOI4 | 16.675 * | 3.178 | <0.001 | 2.78 | 8.27 | 25.08 |
| | AOI2 vs. AOI4 | −1.625 | 2.568 | 0.921 | −0.34 | −8.39 | 5.14 |
| SAC | AOI1 vs. AOI4 | 9.850 * | 2.141 | <0.001 | 2.45 | 4.15 | 15.55 |
| | AOI4 vs. AOI3 | −3.350 * | 0.705 | <0.001 | −2.52 | −5.22 | −1.48 |
| Eye-Tracking Indicators | AOI Comparison | Mean Difference | Std. Error | Significance | Cohen’s d | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|---|
| SAC | AOI3 vs. AOI4 | 3.350 * | 0.705 | <0.001 | 2.52 | 1.48 | 5.22 |
| TTFF | AOI1 vs. AOI4 | −5053.100 * | 1143.031 | <0.001 | −3.45 | −8066.40 | −2039.80 |
| | AOI3 vs. AOI4 | −3990.175 | 1604.706 | 0.071 | −1.73 | −8209.74 | 229.39 |
| | AOI1 vs. AOI3 | −1062.925 | 1415.748 | 0.876 | −0.51 | −4812.30 | 2686.45 |
| Eye-Tracking Indicators | AOI Comparison | Mean Difference | Std. Error | Significance | Cohen’s d | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|---|---|---|
| TFD | AOI3 vs. AOI4 | 1962.500 * | 288.938 | <0.001 | 3.65 | 1194.35 | 2730.65 |
| FC | AOI3 vs. AOI4 | 11.225 * | 1.545 | <0.001 | 3.88 | 7.13 | 15.32 |
| SAC | AOI1 vs. AOI4 | 2275.450 * | 720.501 | <0.001 | 1.72 | 346.588 | 4204.312 |
| | AOI3 vs. AOI4 | −310.825 * | 185.236 | <0.001 | −0.92 | −797.444 | 175.794 |
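The columns of the post hoc tables above (mean difference, standard error, significance, Cohen’s d, 95% CI) can be reproduced in form, though not in the reported values, by a standard pairwise comparison. The sketch below uses a Welch t-test with a pooled-SD Cohen’s d on simulated per-participant TFD values; it is an illustration of the computation under assumed data, not the authors’ analysis script, and the paper’s tables presumably come from an ANOVA post hoc procedure with adjusted p-values.

```python
import numpy as np
from scipy import stats

def pairwise_comparison(a, b, alpha=0.05):
    """Mean difference, standard error, Welch p-value, pooled-SD Cohen's d,
    and a (1 - alpha) confidence interval for two independent samples."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    diff = a.mean() - b.mean()
    se = np.sqrt(va / na + vb / nb)  # standard error of the mean difference
    # Welch-Satterthwaite degrees of freedom for the CI.
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    p = stats.ttest_ind(a, b, equal_var=False).pvalue
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    pooled_sd = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return {"mean_diff": diff, "std_error": se, "p": p,
            "cohens_d": diff / pooled_sd,  # pooled-SD effect size
            "ci_low": diff - t_crit * se, "ci_high": diff + t_crit * se}

# Simulated total fixation durations (ms) for two AOIs, 42 participants each.
rng = np.random.default_rng(42)
tfd_aoi1 = rng.normal(7000, 1600, 42)
tfd_aoi4 = rng.normal(1900, 1600, 42)
print(pairwise_comparison(tfd_aoi1, tfd_aoi4))
```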