Analyzing Visual Attention in Virtual Crime Scene Investigations Using Eye-Tracking and VR: Insights for Cognitive Modeling
Abstract
1. Introduction
1.1. Human-Robot Interaction
1.2. Robot Training
1.3. Eye-Tracking Technology and Applications
1.4. Machine Learning Applications in Eye-Tracking Research
1.5. Research Objectives and Hypotheses
2. Methods
2.1. Simulated Crime Scene Setup and Data Collection
2.2. Dataset Description and Preprocessing
2.2.1. Source of the Dataset
2.2.2. Composition of the Dataset
2.2.3. Preprocessing Steps and Data Cleaning
2.3. Analytical Framework
2.3.1. Regional Differences Analysis
2.3.2. Proficiency Prediction
2.3.3. Evaluation Metrics
3. Results
3.1. Regional Difference Analysis Results
3.1.1. Clustering Outcome and Group Separation
3.1.2. Feature Importance Analysis and Key Findings
3.2. Proficiency Prediction Results
3.2.1. Machine Learning Models Applied for Proficiency Prediction
3.2.2. Evaluation Metrics and Performance Comparison
4. Conclusions
4.1. Statistical Results
4.2. Implications of the Results
4.3. Limitations and Potential Improvements
4.4. Implications for Robotic Perception and Design
4.5. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Regions            | Predicted 1 | Predicted 2 |
|--------------------|-------------|-------------|
| True 1 (Singapore) | 14          | 9           |
| True 2 (Taiwan)    | 1           | 22          |
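For readers who want to recompute the separation metrics implied by this Singapore/Taiwan confusion matrix, the minimal sketch below (plain NumPy, not the authors' analysis code) derives overall accuracy together with per-region recall and per-cluster precision from the four cell counts.

```python
import numpy as np

# Confusion matrix from the regional-difference analysis
# (rows: true region, columns: predicted cluster).
cm = np.array([
    [14, 9],   # True 1 (Singapore): 14 assigned to cluster 1, 9 to cluster 2
    [1, 22],   # True 2 (Taiwan):     1 assigned to cluster 1, 22 to cluster 2
])

# Overall accuracy: correctly assigned participants on the diagonal.
accuracy = np.trace(cm) / cm.sum()         # (14 + 22) / 46 ≈ 0.783

# Per-region recall (sensitivity): diagonal / row total.
recall = np.diag(cm) / cm.sum(axis=1)      # [14/23, 22/23] ≈ [0.609, 0.957]

# Per-cluster precision: diagonal / column total.
precision = np.diag(cm) / cm.sum(axis=0)   # [14/15, 22/31] ≈ [0.933, 0.710]

print(f"accuracy = {accuracy:.3f}")
print(f"recall (Singapore, Taiwan) = {recall.round(3)}")
print(f"precision (cluster 1, cluster 2) = {precision.round(3)}")
```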
| ML Models             | MSE    | R²      |
|-----------------------|--------|---------|
| Linear Regression     | 0.0066 | 0.1757  |
| Polynomial Regression | 0.0161 | −1.0068 |
| Random Forest         | 0.0024 | 0.7034  |
| Decision Tree         | 0.0029 | 0.6379  |
| SVR                   | 0.0057 | 0.2909  |
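A comparison like the one tabulated above can be reproduced with standard scikit-learn regressors. The sketch below is illustrative only: `X` and `y` are hypothetical placeholders standing in for the gaze-feature matrix and the continuous proficiency score, and the model settings are library defaults rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: rows = participants, columns = eye-tracking features
# (e.g., fixation counts/durations per region of interest); y = proficiency score.
rng = np.random.default_rng(0)
X = rng.normal(size=(46, 8))
y = rng.uniform(0.0, 1.0, size=46)

models = {
    "Linear Regression": LinearRegression(),
    "Polynomial Regression": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "SVR": SVR(kernel="rbf"),
}

# Hold out 30% of participants and report MSE and R² for each model.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name:22s} MSE={mean_squared_error(y_te, pred):.4f} "
          f"R2={r2_score(y_te, pred):.4f}")
```

Note that a negative R², as reported for polynomial regression, simply means the model predicts the held-out scores worse than a constant equal to their mean.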
| Classes            | Average Accuracy | C  | Gamma | Kernel | Factors (2, 3, or 4 Tested) |
|--------------------|------------------|----|-------|--------|-----------------------------|
| 3                  | 0.7250           | 10 | scale | RBF    | 4                           |
| 3 w/ squared terms | 0.7250           | 10 | scale | RBF    | 4                           |
| 4                  | 0.5550           | 10 | auto  | RBF    | 4                           |
| 4 w/ squared terms | 0.5900           | 1  | auto  | RBF    | 4                           |
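The reported best settings (RBF kernel, C of 1 or 10, gamma of "scale" or "auto") are the kind of combination an exhaustive grid search under 10-fold cross-validation would return. The sketch below shows such a search with scikit-learn's GridSearchCV; the placeholder features and three-class proficiency labels are hypothetical, so it illustrates the search procedure rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Placeholder gaze features and balanced 3-class proficiency labels
# (0 = low, 1 = medium, 2 = high).
rng = np.random.default_rng(0)
X = rng.normal(size=(46, 8))
y_class = rng.permutation(np.repeat(np.arange(3), [16, 15, 15]))

# Candidate SVC hyperparameters; C, gamma, and kernel mirror the table columns.
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", "auto"],
    "kernel": ["rbf", "linear", "poly"],
}

search = GridSearchCV(
    SVC(),
    param_grid,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    scoring="accuracy",
)
search.fit(X, y_class)

print("best params:", search.best_params_)            # e.g., RBF kernel, C=10, gamma="scale"
print(f"mean 10-fold accuracy: {search.best_score_:.4f}")
```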
Detailed 10-Fold Scores for Best Combination (3-Class)

| Fold #   | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | Average |
|----------|------|------|------|------|------|------|------|------|------|------|---------|
| Accuracy | 0.60 | 0.80 | 0.60 | 0.40 | 0.60 | 1.00 | 0.50 | 0.75 | 1.00 | 1.00 | 0.7250  |

Detailed 10-Fold Scores for Best Combination (4-Class)

| Fold #   | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | 10   | Average |
|----------|------|------|------|------|------|------|------|------|------|------|---------|
| Accuracy | 0.40 | 0.60 | 0.60 | 0.20 | 0.60 | 1.00 | 0.75 | 0.75 | 0.75 | 0.25 | 0.5900  |
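As a quick arithmetic check, each reported average is simply the mean of the ten per-fold accuracies; the short snippet below (plain Python) reproduces the 0.7250 and 0.5900 figures.

```python
from statistics import mean

# Per-fold accuracies copied from the two tables above.
folds_3class = [0.60, 0.80, 0.60, 0.40, 0.60, 1.00, 0.50, 0.75, 1.00, 1.00]
folds_4class = [0.40, 0.60, 0.60, 0.20, 0.60, 1.00, 0.75, 0.75, 0.75, 0.25]

print(f"3-class average accuracy: {mean(folds_3class):.4f}")  # 0.7250
print(f"4-class average accuracy: {mean(folds_4class):.4f}")  # 0.5900
```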
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).