From Viewing to Structure: A Computational Framework for Modeling and Visualizing Visual Exploration
Abstract
1. Introduction
1.1. Research Background
1.2. Visual Mechanisms and Task Characteristics of Viewing Behavior
Theoretical Rationale for Focusing on Designers
1.3. The Conceptual Framework of the Viewing Process
1.3.1. Fixation Regions
1.3.2. Regional Associations
1.3.3. Weights
1.3.4. Temporal Dynamics
1.3.5. Summary and Hypothesis
2. Literature Review
2.1. Research Progress in Fixation Point Coordinate Transformation
2.2. Comparison of Clustering Analysis Methods
2.3. Extremum Point Methods and Their Applications
2.4. Boundary Construction and Linear Interpolation
2.5. Relationship Strength Quantification Methods
3. Research Design
3.1. Proposed Algorithm
3.1.1. Fixation Point Coordinate Transformation Method
3.1.2. Calculation of the Shortest Distance from a Point to a Line Segment
- The distance between point P and endpoint A, |AP|;
- The distance between point P and endpoint B, |BP|;
- The distance between point P and its projection H onto line AB, |PH| (a computational sketch follows this list).
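To make the case analysis concrete, the following is a minimal Python sketch of the point-to-segment distance, assuming 2D NumPy coordinates; the function name is illustrative and not taken from the paper's implementation.

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Shortest distance from point P to segment AB.

    Sketch: if the projection H of P onto line AB falls between A and B,
    the distance is |PH|; otherwise it is the smaller of |AP| and |BP|.
    """
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    ap = p - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:                      # A and B coincide: segment degenerates to a point
        return float(np.linalg.norm(ap))
    t = float(np.dot(ap, ab)) / denom     # parameter of the projection H = A + t * AB
    if t <= 0.0:
        return float(np.linalg.norm(p - a))   # H falls before A, so use |AP|
    if t >= 1.0:
        return float(np.linalg.norm(p - b))   # H falls beyond B, so use |BP|
    h = a + t * ab                            # projection lies inside the segment
    return float(np.linalg.norm(p - h))       # |PH|
```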
3.1.3. Coordinate Transformation Steps
3.1.4. Projection Point Calculation and Error Control
3.2. Fixation Point Coordinate Transformation
3.3. Spatial Analysis Framework for Visual Attention
3.3.1. K-Means Clustering Implementation
3.3.2. Extreme Point Algorithm
3.3.3. Boundary Construction and Linear Interpolation
3.3.4. Boundary Method Validation
Area Over-Coverage Quantification
Relationship Strength Comparison
- The correlation between EP- and CH-based relationship strengths is .
- The top three strongest inter-AOI connections are identical across both methods.
- The average strength values using the EP method are approximately 18.5% higher, reflecting consistent, controlled inflation.
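The three checks above could be reproduced with a short comparison routine. The sketch below assumes the EP- and CH-based strengths are stored as dictionaries keyed by AOI pairs; the function name and data layout are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import pearsonr

def compare_strength_methods(ep_strengths, ch_strengths):
    """Compare EP- and CH-based relationship strengths.

    Both arguments map an AOI pair (i, j) to a strength value. Returns the
    Pearson correlation, whether the top-3 strongest pairs agree, and the
    mean percentage inflation of EP over CH.
    """
    pairs = sorted(set(ep_strengths) & set(ch_strengths))
    ep = np.array([ep_strengths[p] for p in pairs])
    ch = np.array([ch_strengths[p] for p in pairs])

    r, _ = pearsonr(ep, ch)                              # correlation between the two methods
    top3_ep = {pairs[i] for i in np.argsort(ep)[-3:]}    # three strongest connections (EP)
    top3_ch = {pairs[i] for i in np.argsort(ch)[-3:]}    # three strongest connections (CH)
    inflation = (ep.mean() / ch.mean() - 1.0) * 100.0    # average inflation in percent

    return r, top3_ep == top3_ch, inflation
```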
Methodological Justification
- It accommodates transitional fixations that cross AOI boundaries.
- It captures exploratory gaze patterns typical in aesthetic viewing.
- It maintains spatial coherence for visual network interpretation.
3.4. Relationship Strength Quantification Methods
3.4.1. Relationship Strength Function
- $A_i$, $A_j$ are areas of interest (AOIs) with centroids $c_i$, $c_j$;
- $F$ is the set of fixation points;
- $w$ is the buffer zone width parameter;
- $\overline{c_i c_j}$ denotes the line segment connecting centroids $c_i$ and $c_j$;
- $B(\overline{c_i c_j}, w)$ is the expanded buffer region centered on $\overline{c_i c_j}$ with width $w$ (a computational sketch follows this list).
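Under the notation above, a minimal sketch of a buffer-based strength computation follows. It simply counts fixation points within distance $w$ of the centroid-connecting segment; the paper's actual strength function may weight points rather than count them, so treat this as an assumption-laden illustration rather than the published formula.

```python
import numpy as np

def relationship_strength(fixations, c_i, c_j, w):
    """Count fixation points inside the buffer of half-width w around the
    segment connecting AOI centroids c_i and c_j (illustrative only)."""
    F = np.asarray(fixations, float)                  # (n, 2) fixation coordinates
    a, b = np.asarray(c_i, float), np.asarray(c_j, float)
    ab = b - a
    denom = float(np.dot(ab, ab))
    if denom == 0.0:                                  # coincident centroids: distance to a point
        d = np.linalg.norm(F - a, axis=1)
    else:
        t = np.clip((F - a) @ ab / denom, 0.0, 1.0)   # clamp projections onto the segment
        nearest = a + t[:, None] * ab                 # nearest point on the segment for each fixation
        d = np.linalg.norm(F - nearest, axis=1)
    return int(np.sum(d <= w))
```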
3.4.2. Visual Representation
4. Experimental Design
4.1. Experimental Process
4.2. Participants and Equipment
Experimental Setting
4.3. Research Results Analysis
4.3.1. Fixation Regions
4.3.2. Relationships
4.3.3. Weight
4.3.4. Time: Integrative Analysis of Visual Structure and Verbal Utterances in the Temporal Dimension
4.3.5. Quantitative Validation of Temporal Dynamics
- Pre-speech convergence (−1000 ms to −200 ms): Saccade rate decreases, fixation stability increases, and gaze dispersion narrows—indicating attentional focusing before speech.
- Speech initiation (−200 ms to +200 ms): All parameters show sharp transitions—saccade rate peaks, fixation stability dips, and dispersion expands—suggesting strong visual–cognitive engagement.
- Post-speech decay (+200 ms to +1000 ms): A clear recovery pattern occurs, with exponential normalization of all gaze metrics over approximately 800 ms, reflecting cognitive reset and re-stabilization.
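The windowed metrics above can be reproduced by aligning gaze samples to each speech onset and computing per-window statistics. The sketch below shows only gaze dispersion; saccade rate and fixation stability would follow the same windowing pattern. Variable names and the dispersion definition (RMS distance from the window centroid) are assumptions, not the paper's exact formulas.

```python
import numpy as np

def windowed_gaze_dispersion(times_ms, xs, ys, onset_ms, window):
    """Gaze dispersion within one time window relative to speech onset.

    times_ms, xs, ys: gaze sample timestamps and coordinates.
    window: (start, end) offsets in ms relative to onset_ms, e.g. (-1000, -200).
    Returns the RMS distance of samples from the window centroid.
    """
    t = np.asarray(times_ms, float) - onset_ms
    mask = (t >= window[0]) & (t < window[1])
    if not mask.any():
        return float("nan")
    pts = np.column_stack([np.asarray(xs, float)[mask], np.asarray(ys, float)[mask]])
    centroid = pts.mean(axis=0)
    return float(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))

# Illustrative usage over the three windows described above:
# for win in [(-1000, -200), (-200, 200), (200, 1000)]:
#     print(win, windowed_gaze_dispersion(times, xs, ys, onset, win))
```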
4.3.6. Phase 1 (00:00:07:31–00:00:10:12)
4.3.7. Phase 2 (00:00:21:12–00:00:27:26)
4.3.8. Phase 3 (00:00:36:26–00:00:40:09)
4.3.9. Phase 4 (00:00:41:10–00:00:47:25)
4.3.10. Phase 5 (00:00:50:02–00:00:57:10)
4.3.11. Phase 6 (00:01:10:26–00:01:13:06)
4.3.12. Phase 7 (00:01:17:04–00:01:19:06)
4.3.13. Phase 8 (00:01:21:04–00:01:22:01)
4.4. Comprehensive Analysis: Unveiling the Expression of Structured Visual Behavior
4.5. Framework Validation Through Cross-Participant Analysis
4.5.1. System Stability and Visual Structure Consistency
Cross-Participant Computational Stability
Consistency of Visual Translation Results
4.6. Observations of Differences
4.6.1. Systematic Identification of Three-Dimensional Differences
4.6.2. Differences in Spatial Center-of-Gravity Distribution
4.6.3. Differences in Inter-Regional Connection Strength
4.6.4. Divergent Structures in Weaker Connections
4.6.5. Differences in Cognitive Organizational Centers
5. Conclusions
Limitations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Symbol | Definition |
---|---|
LT | Coordinate of the top-left corner of the stimulus image |
RT | Coordinate of the top-right corner of the stimulus image |
LB | Coordinate of the bottom-left corner of the stimulus image |
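Given the corner coordinates defined above, a fixation point recorded in the scene-camera frame can be mapped into the stimulus frame. The sketch below assumes a purely affine mapping built from LT, RT, and LB; the paper's transformation may instead use a full homography with a fourth corner, so this is an illustrative simplification.

```python
import numpy as np

def to_stimulus_coords(p, LT, RT, LB):
    """Express fixation point p in the stimulus image frame.

    Assumes an affine mapping: LT is the origin, LT->RT spans the x-axis and
    LT->LB spans the y-axis. Returns (u, v) in [0, 1] x [0, 1] when p lies
    inside the stimulus.
    """
    LT, RT, LB, p = (np.asarray(v, float) for v in (LT, RT, LB, p))
    basis = np.column_stack([RT - LT, LB - LT])   # 2x2 matrix of edge vectors
    u, v = np.linalg.solve(basis, p - LT)         # express p - LT in that basis
    return float(u), float(v)
```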
k (number of clusters) | Silhouette Score | WCSS (Within-Cluster Sum of Squares) |
---|---|---|
2 | 0.497 | 5334.41 |
3 | 0.463 | 3719.29 |
4 | 0.455 | 2855.24 |
5 | 0.432 | 2280.45 |
6 | 0.418 | 1890.55 |
7 | 0.402 | 1609.02 |
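The k-selection comparison summarized above can be reproduced with scikit-learn. The function name, random seed, and data layout below are assumptions, so the resulting values will not match the table exactly.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate_k_range(points, k_values=range(2, 8), seed=0):
    """Silhouette score and WCSS (inertia) for each candidate k.

    points: (n, 2) array of fixation coordinates.
    """
    results = {}
    pts = np.asarray(points, float)
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pts)
        results[k] = {
            "silhouette": silhouette_score(pts, km.labels_),  # cluster separation quality
            "wcss": km.inertia_,                              # within-cluster sum of squares
        }
    return results
```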
AOI Region | Fixation Points | EP Area (px²) | CH Area (px²) |
---|---|---|---|
Region 0 | 50 | 7441.0 | 5580.8 |
Region 1 | 35 | 13,137.5 | 9853.1 |
Region 2 | 25 | 3775.1 | 2831.3 |
Region 3 | 60 | 10,434.8 | 7826.1 |
Region 4 | 40 | 5616.5 | 4212.4 |
Average | 42 | 8080.8 | 6060.7 |
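For reference, the CH column corresponds to the standard convex-hull area of each region's fixation points, sketched below with SciPy; the EP boundary construction of Section 3.3.2 is specific to the proposed algorithm and is not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_area(points):
    """Area of the convex hull (CH) of a set of 2D fixation points.

    For 2D input, scipy's ConvexHull stores the enclosed area in `.volume`
    (`.area` would return the perimeter).
    """
    return float(ConvexHull(np.asarray(points, float)).volume)

# Illustrative usage with hypothetical fixation coordinates for one AOI:
# area_px2 = convex_hull_area(region0_fixations)
```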
Phase | Utterance(s) | Semantic Category | Time Window (hh:mm:ss) | Gaze Summary |
---|---|---|---|---|
1 | 1 | Affective | 00:00:07–00:00:10 | evaluative |
2 | 5, 6 | Object ID + Spatial | 00:00:21–00:00:27 | boat focus |
3 | 10 | Object ID + Spatial | 00:00:36–00:00:40 | bamboo area |
4 | 11, 12 | Object ID + Spatial | 00:00:41–00:00:47 | tree area |
5 | 15 | Object ID + Spatial | 00:00:54–00:00:57 | water |
6 | 17 | Affective | 00:01:10–00:01:13 | scene impression |
7 | 18 | Affective | 00:01:17–00:01:19 | calmness |
8 | 19 | Object ID | 00:01:21–00:01:22 | riverside |