Information 2019, 10(5), 170; https://doi.org/10.3390/info10050170

Article
Research on the Quantitative Method of Cognitive Loading in a Virtual Reality System
Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang 550003, China
* Author to whom correspondence should be addressed.
Received: 20 March 2019 / Accepted: 4 May 2019 / Published: 8 May 2019

Abstract
To address the problem of objectively obtaining the threshold of a user’s cognitive load in a virtual reality interactive system, a method for quantifying user cognitive load based on an eye movement experiment is proposed. Eye movement data were collected during the virtual reality interaction process using an eye tracker. Taking the number of fixation points, the average fixation duration, the average saccade length, and the number of fixation points before the first mouse click as the independent variables, and the number of backward-looking times and the value of user cognitive load as the dependent variables, a cognitive load evaluation model was established based on the probabilistic neural network. The model was validated using eye movement data and subjective cognitive load data. The results show that the absolute error and relative mean square error were 6.52%–16.01% and 6.64%–23.21%, respectively. Therefore, the model is feasible.
Keywords:
cognitive load; eye movement experiment; virtual reality

1. Introduction

Cognitive load is the ratio of task complexity to the cognitive ability the user requires to complete the task, and can be described in terms of the limited capacity of working memory and attention [1]. Cognitive load has a tremendous impact on the user’s ability to execute tasks and is an important human factor directly related to system operation efficiency, job safety, and production efficiency in different fields [2]. In an in-vehicle information system (IVIS), the complex and indiscriminate presentation of multiple large sets of data may increase drivers’ cognitive load, resulting in operational errors and traffic accidents [3]. Therefore, researchers have conducted quantitative research on cognitive load, mainly measuring changes in working memory capacity and in the selective attention mechanism [1,2,4,5]. Physiological signals (such as heart rate and respiratory rate), brain activity, blood pressure, galvanic skin response, pupil diameter, blinking, and gaze are considered biomarkers for quantifying cognitive load [6,7]. An information structure has been proposed that can effectively quantify cognitive load in Web browsing and Web shopping, minimize the user’s information browsing time, or define the optimal point in time to guide a purchase [8].
Differences in individual cognitive ability, and in how cognitive load is induced, affect human cognitive control, so different physiological changes have been reported as indicators of cognitive load [9]; eye movement technology can objectively measure users’ cognition [10]. One pupil-based measure is the Index of Cognitive Activity (ICA), which assesses the association between expected eye movements and immediate cognitive load [11,12]. The analysis of eye tracking data provides quantitative evidence for how changes in interface layout affect the user’s understanding and cognitive load [13]. Many researchers use eye movement behavior data [14,15,16] to obtain users’ behavioral habits and differences in interest, and thereby judge the user’s cognitive load. Among them, Asan et al. [17] studied physiological indices associated with eye movement tracking technology and cognitive load. These studies have focused on physiological methods for assessing users’ cognitive load but have not yet resolved how to construct a quantitative relationship between physiological indicators and cognitive load.
In addition to analyzing the impact of users’ physiological indicators on cognitive load, some researchers have also used machine learning to predict cognitive load quantitatively. The k-NN (k-nearest neighbor) algorithm has been used to calculate the user’s cognitive load based on, for example, a change in the blood oxygen content of the prefrontal lobe [18,19]. Other studies have shown that both artificial neural networks [20] and classifiers based on linear discriminant analysis [21] can monitor workload from the EEG (electroencephalogram) power spectrum in real time. In addition, artificial neural networks [22], ensemble methods [23], and similar approaches have been applied to collected psychophysiological indicators to predict users’ cognitive load.
The main purpose of this paper was to obtain objective and accurate user cognitive load values in a virtual reality (VR) interactive system. An eye movement experiment was used, in which the number of fixation points was obtained with an eye tracker. Additionally, the average fixation duration, the average saccade length, the number of fixation points before the first mouse click, and the number of backward-looking times were used as evaluation indexes. A cognitive load evaluation model was then constructed based on the probabilistic neural network, which quantifies the cognitive load and provides a theoretical basis for the design and development of subsequent virtual reality interactive systems.

2. Related Work

2.1. Multi-Channel Interactive Information Integration in the VR System

It is difficult to quantify users’ cognitive load in a virtual reality interactive system. To reduce the difficulty of interactive cognitive analysis, some researchers have constructed a multi-modal cognitive processing model that integrates touch, hearing, and vision [24]. To improve the naturalness and efficiency of interaction, others have established a multi-modal conceptual model and a system model of human–computer interaction based on the elements of human–computer interaction in command and control [25]. By simulating the process of human brain cognition, this paper studies the interactive behavior of a virtual reality system from cognitive and computational perspectives and then constructs an interactive information integration model of virtual reality whose final output is the user’s cognitive load value, such that the cognitive load can be quantified. As shown in Figure 1, to realize the functions of the interactive system, users employ visual, auditory, and other cognitive channels to analyze the task; an eye movement experiment is used to collect users’ eye movement behaviors under single-channel, double-channel, and triple-channel conditions. The user’s cognitive load in the virtual reality system can then be quantified.

2.2. Construction of Cognitive Load Quality Evaluation Model

The evaluation model is generally composed of three layers: the first is the basic layer, i.e., the quality characteristics to be evaluated; the second is the middle layer, a further refinement of the first layer, i.e., the quality sub-characteristics; and the third layer contains the measurement indexes. Based on the hierarchical partition theory of quality evaluation models, this paper analyzes the attributes of a virtual reality interactive system, takes the magnitude of user cognitive load as the quality characteristic of the system’s quality evaluation model, derives the quality sub-characteristics, and finally establishes the cognitive load quality evaluation model of the virtual reality interactive system with eye movement technical indexes as the measurement indexes, as shown in Figure 2.

2.3. Physiological Index of Cognitive Load Based on an Eye Movement Experiment

An eye movement experiment is a method of implicitly obtaining cognitive load: the visual behavior recorded by the eye tracker reflects users’ cognition more directly than their operating behavior does. As the most widely used cognitive load assessment method, eye movement technology is mainly based on the number of fixation points, average fixation duration, average saccade length, number of fixation points before the first mouse click, number of backward-looking times, and other experimental data [26] in order to objectively and scientifically evaluate the cognitive load of a virtual reality interactive system. Therefore, this paper chooses eye movement technology as the experimental approach to establish the cognitive load evaluation model based on the probabilistic neural network.
1. Number of Fixations
The number of fixation points is proportional to the cognitive load in the virtual reality interaction system: the greater the number of fixation points, the larger the cognitive load, and vice versa [27,28]. Therefore, the number of fixation points is introduced as a physiological index to measure users’ cognitive load.
2. Mean Fixation Duration
The more information an element carries, the longer the eyes remain fixed on it and the greater the cognitive load. To some extent, this index can intuitively reflect users’ cognitive load [28,29,30]. For this reason, the average fixation duration is used as a physiological index to evaluate users’ cognitive load.
3. Average Saccade Length
Saccade length is the straight-line distance computed from the coordinates of successive fixation points; it is mainly used to analyze the scan path [31,32] and thus the magnitude of the user’s cognitive load.
4. The Number of Fixation Points at the First Mouse Click
Before the first mouse click, the greater the number of the user’s fixation points, the higher the user’s recognition degree, and the smaller the user’s cognitive load [33,34]. This index is inversely proportional to the cognitive load.
5. Number of Backward-Looking Times
The number of backward-looking times represents the user’s cognitive impairment [35]. The causes of backward-looking include: (1) cognitive bias of the subjects and (2) a large discrepancy between the cognitive object and the subjects’ mental image symbols. Users need to recognize the object repeatedly in order to establish and construct new mental image symbols.
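The first four indices above can be computed directly from raw fixation records. The following is a minimal Python sketch, assuming a hypothetical `(x, y, start_time, duration)` fixation format (real eye trackers export richer records); backward-looking counts are omitted because they require area-of-interest annotations:

```python
import math

def eye_movement_indices(fixations, first_click_time):
    """Compute basic eye movement indices from a fixation list.

    `fixations` is a list of (x, y, start_time, duration) tuples; the
    format and function name are illustrative assumptions.
    """
    n_fixations = len(fixations)
    mean_fixation_duration = (
        sum(f[3] for f in fixations) / n_fixations if n_fixations else 0.0
    )
    # Saccade length: Euclidean distance between successive fixation points.
    saccades = [
        math.hypot(b[0] - a[0], b[1] - a[1])
        for a, b in zip(fixations, fixations[1:])
    ]
    mean_saccade_length = sum(saccades) / len(saccades) if saccades else 0.0
    # Fixations that started before the first mouse click.
    n_before_first_click = sum(1 for f in fixations if f[2] < first_click_time)
    return {
        "n_fixations": n_fixations,
        "mean_fixation_duration": mean_fixation_duration,
        "mean_saccade_length": mean_saccade_length,
        "n_before_first_click": n_before_first_click,
    }
```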

3. Methods

3.1. Cognitive Load Evaluation Model Based on the Probabilistic Neural Network

Theorem 1.
The user’s cognitive domain is represented by $U$, and the cognitive domain is composed of cognitive channels $C$, expressed as:

$$U = [\, C_\alpha \;\; C_\beta \;\; C_\lambda \;\; \cdots \,]$$

where $C_\alpha$, $C_\beta$, $C_\lambda$ each represent a kind of cognitive channel, and the set of the user’s cognitive behaviors under the comprehensive effect of all cognitive channels is represented as $B$. Then, the set of cognitive behaviors of the user is:

$$B = [\, b_1 \;\; b_2 \;\; b_3 \;\; \cdots \;\; b_s \,]$$

where $b_i$ is the $i$-th index of the user’s cognitive behavior, $0 < i \le s$.
Taking the eye movement characteristic parameters in the virtual reality interactive system as the input layer and the cognitive load as the output layer, a cognitive load quantification model is constructed, as shown in Figure 3.
  • Input layer: This refers to the eye movement data from the entire virtual reality tunnel rescue mission under the single visual channel, dual visual-auditory channel, dual visual-tactile channel, and triple visual-auditory-tactile channel conditions: the number of fixation points, average fixation duration, average saccade length, number of fixation points before the first mouse click, number of backward-looking times, etc.
  • Fusion layer: This refers to feeding the acquired data into the cognitive load quantification model based on the probabilistic neural network for data collation.
  • Output layer: This refers to the final value output after the data fusion processing, i.e., the tester’s quantified cognitive load under a given channel condition.
There are $y$ scheme values and $s$ eye movement indicators. The matrix of the eye movement indicator data of each scheme is as follows:

$$B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1s} \\ b_{21} & b_{22} & \cdots & b_{2s} \\ \vdots & \vdots & & \vdots \\ b_{y1} & b_{y2} & \cdots & b_{ys} \end{bmatrix}$$

The eye movement index matrix is $B = (b_{ij})_{y \times s}$. Each column of the matrix represents the data of one eye movement indicator, and each row represents one test value. As the units of the indicators differ, the data cannot be compared directly, so it is necessary to normalize each column by a linear transformation of the original data that maps the values to $[0, 1]$. If the cognitive load value increases with the indicator data, the transfer function is:

$$b'_{ip} = \frac{b_{ip} - \min\{b_{ip} \mid i = 1, 2, \ldots, s\}}{\max\{b_{ip} \mid i = 1, 2, \ldots, s\} - \min\{b_{ip} \mid i = 1, 2, \ldots, s\}}$$

Conversely, the transfer function is:

$$b'_{ip} = \frac{\max\{b_{ip} \mid i = 1, 2, \ldots, s\} - b_{ip}}{\max\{b_{ip} \mid i = 1, 2, \ldots, s\} - \min\{b_{ip} \mid i = 1, 2, \ldots, s\}}$$

where $\max$ is the maximum value of the indicator data, $\min$ is the minimum value of the indicator data, and $p = y \cdot s$. The transformed matrix $B' = (b'_{ij})_{y \times s}$ is:

$$B' = \begin{bmatrix} b'_{11} & b'_{12} & \cdots & b'_{1s} \\ b'_{21} & b'_{22} & \cdots & b'_{2s} \\ \vdots & \vdots & & \vdots \\ b'_{y1} & b'_{y2} & \cdots & b'_{ys} \end{bmatrix}$$
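As a concrete sketch of the column-wise normalization above, the following minimal Python illustration applies the forward transfer function to indicators that grow with cognitive load and the reversed one otherwise (the function name and the matrix-as-list-of-lists representation are assumptions, not the authors’ implementation):

```python
def minmax_normalize_columns(B, increasing):
    """Column-wise min-max normalization of a y-by-s index matrix.

    `increasing[j]` is True when cognitive load grows with indicator j
    (forward transfer function); False applies the reversed one.
    """
    y, s = len(B), len(B[0])
    Bp = [[0.0] * s for _ in range(y)]
    for j in range(s):
        col = [B[i][j] for i in range(y)]
        lo, hi = min(col), max(col)
        span = hi - lo or 1.0  # avoid division by zero on constant columns
        for i in range(y):
            v = (B[i][j] - lo) / span
            Bp[i][j] = v if increasing[j] else 1.0 - v
    return Bp
```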
Let $Z_j = [\, b'_{1j} \;\; b'_{2j} \;\; \cdots \;\; b'_{yj} \,]^T$, so that $Z_j$ is a $y$-dimensional column vector. The goal of this paper is to find an estimation function $\hat{Z} = \hat{Z}(b)$ such that the mean square error

$$error = \sum_{j=1}^{s} \left( \hat{Z}_j - Z_j \right)^2$$

is minimized. For a given column vector $B = B_i^T = [\, b_{i1} \;\; b_{i2} \;\; \cdots \;\; b_{is} \,]^T$ and $Z = Z_i = [\, b'_{1j} \;\; b'_{2j} \;\; \cdots \;\; b'_{yj} \,]^T$, according to the conditional expectation, the estimation function is:

$$\hat{Z}(B) = \frac{\int Z\, f(B, Z)\, dZ}{\int f(B, Z)\, dZ}$$

where $f(B, Z)$ is the joint probability distribution function of $(B, Z)$. The estimate of $f(B, Z)$ is:

$$\hat{f}(B, Z) = \frac{1}{(2\pi)^{\frac{s+1}{2}} \sigma^{s+1}} \cdot \frac{1}{y} \sum_{i=1}^{y} \exp\!\left[ -\frac{(B - B_i^T)^T (B - B_i^T)}{2\sigma^2} \right] \exp\!\left[ -\frac{(Z - Z_i)^2}{2\sigma^2} \right]$$

where $\sigma$ is the smoothing parameter; $s$ is the dimension of $B$, that is, $s$ kinds of eye movement index parameters are selected; and $y$ is the number of samples, that is, the number of schemes. Let:

$$D_i^2 = (B - B_i^T)^T (B - B_i^T)$$

where $D_i$ is the distance from each input eye movement index vector to sample point $i$, i.e., the Euclidean distance. Here, $\sigma = \max\{D_i \mid i = 1, 2, \ldots, y\} / y$. Substituting $\hat{f}(B, Z)$ for $f(B, Z)$ in the conditional-expectation formula, substituting in $D_i^2$, and exchanging the order of summation and integration, this simplifies to:

$$\hat{Z} = \frac{\sum_{i=1}^{y} Z_i \exp\!\left( -\frac{D_i^2}{2\sigma^2} \right)}{\sum_{i=1}^{y} \exp\!\left( -\frac{D_i^2}{2\sigma^2} \right)}$$

$$CI = E \cdot \hat{Z}$$

The data are then normalized so that the cognitive load value lies in $[0, 1]$; the normalization function is:

$$CI'_l = \frac{CI_l - \min\{CI_l \mid l = 1, \ldots, p\}}{\max\{CI_l \mid l = 1, \ldots, p\} - \min\{CI_l \mid l = 1, \ldots, p\}}$$

where $CI'$ is the final output cognitive load value, $E = [\, 1 \;\; 1 \;\; 1 \;\; 1 \;\; 1 \,]$, and $p = y \cdot s$.
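The estimator $\hat{Z}$ above is a Gaussian-kernel weighted average of the training targets, i.e., a general-regression-style network. The following is a minimal Python sketch; the function names are hypothetical, and the smoothing rule is our reading of the paper’s heuristic $\sigma = \max_i D_i / y$:

```python
import math

def grnn_predict(B_train, Z_train, b_new, sigma):
    """Weighted average of training targets Z_i with Gaussian kernel
    weights exp(-D_i^2 / (2 sigma^2)), as in the Z-hat estimator."""
    weights = [
        math.exp(-sum((x - xi) ** 2 for x, xi in zip(b_new, Bi))
                 / (2.0 * sigma ** 2))
        for Bi in B_train
    ]
    return sum(w * z for w, z in zip(weights, Z_train)) / sum(weights)

def heuristic_sigma(B_train, b_new):
    """One reading of the paper's smoothing rule: sigma = max_i D_i / y."""
    y = len(B_train)
    d_max = max(
        math.sqrt(sum((x - xi) ** 2 for x, xi in zip(b_new, Bi)))
        for Bi in B_train
    )
    return d_max / y
```

A query equidistant from two samples yields the mean of their targets, and moving the query toward one sample pulls the prediction toward that sample’s target.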

3.2. Evaluation Index

The experimental output error is defined as:
$$E_k = \frac{1}{2} \left( CI_k - CI'_k \right)^2$$

where $k$ denotes the number of cognitive channels, $CI_k$ denotes the subjective score for the cognitive load of the virtual reality interactive system under $k$ cognitive channels, and $CI'_k$ denotes the value calculated by the user cognitive load evaluation model under $k$ cognitive channels.
In this paper, the maximum absolute error $E_{R1}$ and the relative mean square error $E_{R2}$ are used to evaluate the model, calculated as follows:

$$E_{R1} = \max_k \left| \frac{CI_k - CI'_k}{CI_k} \right| \times 100\%$$

$$E_{R2} = \sqrt{ \frac{1}{H} \sum_{k=1}^{H} \left( \frac{CI_k - CI'_k}{CI_k} \right)^2 } \times 100\%$$
where H is the total number of channel classes.
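These two metrics can be sketched in a few lines of Python (a minimal illustration; the square root in $E_{R2}$ reflects our reading of the reported error magnitudes and is an assumption):

```python
import math

def max_absolute_error(ci_subj, ci_model):
    """E_R1: largest relative deviation between subjective scores CI_k
    and model outputs CI'_k, as a percentage."""
    return max(abs((s - m) / s) for s, m in zip(ci_subj, ci_model)) * 100.0

def relative_mean_square_error(ci_subj, ci_model):
    """E_R2: root of the mean squared relative deviation over the
    H channel classes, as a percentage (square root assumed)."""
    h = len(ci_subj)
    return 100.0 * math.sqrt(
        sum(((s - m) / s) ** 2 for s, m in zip(ci_subj, ci_model)) / h
    )
```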

4. Application Instance

4.1. Experimental Design

The VR tunnel emergency rescue system provides rescue information through three channels: vision (reading rescue information), hearing (tunnel sounds such as wind and dripping water), and touch (pressing the handle to select rescue information). Wearing virtual reality equipment and an eye tracker, the tester completed rescue operations through the visual, auditory, and tactile channels, such as selecting vehicles, selecting rescue teams, detecting life, opening life channels, and providing rescue channels. Because the tasks could not be completed without the visual channel, this paper only studied the cognitive load under the visual $Q_v$, visual-auditory $Q_v Q_h$, visual-tactile $Q_v Q_t$, and visual-auditory-tactile $Q_v Q_h Q_t$ channel conditions. The experimental task was carried out in the Key Laboratory of Modern Manufacturing Technology of the Ministry of Education at Guizhou University, China, in a quiet environment with stable lighting, eliminating interfering experimental factors. The study included tasks at four levels of cognitive load, from a single channel to three channels. Specifically, the four tasks were as follows:
  • Visual channel: The sound equipment and handle of the emergency rescue system of the VR tunnel were switched off, and the tester obtained the rescue mission information only through the visual channel to complete the rescue mission.
  • Visual-auditory: The handle of the VR tunnel emergency rescue system was turned off, and the tester obtained rescue mission information through visual and auditory functions to complete the rescue mission.
  • Visual-tactile: The sound equipment of the VR tunnel emergency rescue system was turned off. The tester obtained rescue mission information through visual and tactile sensation and completed the rescue mission.
  • Visual-auditory-tactile: The tester obtained rescue information through visual reading; acquired tunnel rescue information, such as tunnel wind and dripping-water sounds, through the auditory channel; and touched the handle to select rescue information, thereby completing the rescue task.
Each tester was numbered randomly and given 1 min of preparation time. The testers’ task schedule is shown in Table 1. Each tester completed the tunnel emergency rescue task through the virtual reality equipment, and eye movement data (the number of fixations, mean fixation duration, average saccade length, number of fixation points before the first mouse click, and number of backward-looking times) were acquired during the task using the wearable eye tracker of Xintuo Inki Technology Company. Subjective measurement and self-assessment are widely used as measures of cognitive load [9,36,37,38] and can detect small changes in cognitive load with relatively good sensitivity [39]. Therefore, at the end of the experiment, in order to verify the usability of the cognitive load evaluation model based on the probabilistic neural network and reduce the subjective measurement error of cognitive load, all subjects were required to complete a cognitive load questionnaire immediately after completing the task.

4.2. Select Subjects

Twenty virtual reality game enthusiasts from Guizhou University, aged between 24 and 30, were selected as subjects. The subjects were in good health; had no unhealthy habits (smoking, drinking, etc.); had no color weakness or color blindness; and had normal or corrected-to-normal visual acuity of 1.0. Before the experiment, it was confirmed that the participants had not consumed alcohol, coffee, or other stimulant drinks on the day of the experiment, and they voluntarily signed the agreement after becoming familiar with the informed consent form.

4.3. Experimental Device

The experiment was conducted in the Key Laboratory of Modern Manufacturing Technology of Guizhou University using a 29-inch LED screen and a Lenovo computer. The tunnel emergency rescue mission was completed using an HTC VIVE virtual reality device, and eye movement data were acquired through Xintuo Inki’s EyeSo Ee60 telemetry eye tracker.

4.4. Experimental Variables

4.4.1. Independent Variable

As shown in Table 2, the cognitive channel was an independent variable, and the participants completed the emergency rescue task of the VR tunnel with different cognitive channels.

4.4.2. Dependent Variable

In order to verify the rationality of the cognitive load evaluation model, the subjective cognitive load scores of the different subjects were analyzed. Cognitive load was scored on a $[0, 1]$ scale, with 0 denoting a low subjective load and 1 a high subjective load, as shown in Table 3. The result is a subjective evaluation of the cognitive load of the virtual reality interactive system. The participants’ questionnaire is shown in Table 4.
As the number of cognitive channels changed, so did the eye movement index data, as shown in Table 5.

4.4.3. Control Disturbance Variable

To avoid learning effects, in which repeated exposure to the VR tunnel emergency rescue system environment and tasks would influence the subjective cognitive load scores, each participant completed only one modal cognitive experiment (for example, only the visual cognitive experiment), arranged as shown in Table 1.

4.5. Experimental Results

The cognitive load of the emergency rescue system in the VR tunnel in different cognitive channel environments was objectively evaluated. The results are shown in Table 6.
Table 7 shows the data of eye movement indices during the emergency rescue of the VR tunnel under different cognitive channels, which have been normalized.

5. Discussion

5.1. Correlation Analysis of Eye Movement Parameters and Cognitive Load of Users

The user cognitive load obtained from a single type of eye movement data is limited and one-sided and cannot accurately reflect users’ interests and needs. Therefore, it is necessary to fuse the data and establish a model of users’ cognitive load based on the eye movement experiment, and also to analyze the correlation between the eye movement data and the cognitive load.
In this paper, the Pearson correlation test was used to test the relationship between eye movement parameters and cognitive load, so as to improve the theoretical premise of the cognitive load evaluation. The results of the correlation analysis were obtained and can be viewed in Table 8.
As can be seen from Table 8, the characteristic parameters of each eye movement index were significantly correlated with the cognitive load of users to varying degrees, and the high correlation between the eye movement index and the cognitive load is demonstrated once again. Average saccade length was more highly correlated with cognitive load than other parameters.
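The Pearson test used here can be reproduced in a few lines of Python (a plain illustration of the statistic itself; in practice a library such as `scipy.stats.pearsonr` would also report significance):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between an eye movement index series and
    subjective cognitive load scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)
```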

5.2. Model Output Analysis

A comparison of the cognitive load evaluated by the probabilistic neural network model and the actual cognitive load is shown in Figure 4; the fit is close. As can be seen from Figure 4, the output of the cognitive load evaluation model is close to the actual result, which indicates that the model evaluates cognitive load well.
At the same time, in order to understand the accuracy of the model used, the maximum absolute error and relative mean square error were used to evaluate the model, and the evaluation results are shown in Table 9.
In general, the mean absolute error was 10.7575% and the mean relative mean square error was 12.7675%. From the cognitive load evaluation results of each cognitive channel, the maximum absolute error was 16.01%, the minimum absolute error was 6.52%, the maximum relative mean square error was 23.21%, and the minimum relative mean square error was 6.64%. This shows that the cognitive load evaluation model based on the probabilistic neural network has high precision and good reliability and can accurately evaluate users’ cognitive load under different cognitive channels, thereby effectively improving the design efficiency of virtual reality interactive systems and the user experience.

6. Conclusions

In this paper, the eye movement behavior of the experimenters in a virtual reality interactive environment was studied, and the cognitive load was calculated using the eye movement index such that the cognitive load could be quantified. Eye movement data were recorded using an eye movement instrument, and the subjective cognitive load of the current interactive system was investigated using a questionnaire. The conclusions are as follows.
Based on the experimenters’ eye movement data, the number of fixation points, the average fixation duration, the average saccade length, the number of fixation points before the first mouse click, the number of backward-looking times, and other eye movement indices were extracted, and the user cognitive load quantification model for the virtual reality interactive system was constructed by combining them with the probabilistic neural network.
From the results of the study, it can be seen that there was a significant correlation between each eye movement characteristic parameter and the cognitive load, which indicates that the eye movement index can directly reflect the cognitive load under the interaction of users, thus providing a basis for the study of cognitive load quantification.
The results show that the absolute error between the user cognitive load based on the probabilistic neural network and the testers’ subjective cognitive load values was 6.52%–16.01%, and the relative mean square error was 6.64%–23.21%, indicating that the method has high precision.

Author Contributions

Conceptualization, X.X. and J.L.; methodology, X.X.; validation, X.X., J.L., and N.D.; formal analysis, X.X.; investigation, X.X.; resources, X.X.; data curation, X.X.; writing—original draft preparation, X.X.; writing—review and editing, X.X.; visualization, N.D.; supervision, J.L.; project administration, J.L.

Funding

This research was supported by the Natural Science Foundation of China (Nos. 51865004, 2014BAH05F01) and the Provincial Project Foundation of Guizhou, China (Nos. [2018]1049, [2016]7467).

Acknowledgments

The authors would like to express their heartfelt gratitude to the reviewers and the editor for the valuable suggestions and important comments, which greatly helped improve the presentation of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mun, S.; Whang, M.; Park, S.; Park, M.C. Effects of mental workload on involuntary attention: A somatosensory ERP study. Neuropsychologia 2017, 106, 7–20.
  2. Puma, S.; Matton, N.; Paubel, P.V.; Raufaste, É.; El-Yagoubi, R. Using theta and alpha band power to assess cognitive workload in multitasking environments. Int. J. Psychophysiol. 2018, 123, 111–120.
  3. Yae, J.H.; Shin, J.G.; Woo, J.H.; Kim, S.H. A Review of Ergonomic Researches for Designing In-Vehicle Information Systems. J. Ergon. Soc. Korea 2017, 36, 499–523.
  4. Mun, S.; Park, M.; Park, S.; Whang, M. SSVEP and ERP measurement of cognitive fatigue caused by stereoscopic 3D. Neurosci. Lett. 2012, 525, 89–94.
  5. Mun, S.; Kim, E.; Park, M. Effect of mental fatigue caused by mobile 3D viewing on selective attention: An ERP study. Int. J. Psychophysiol. 2014, 94, 373–381.
  6. Yu, C.; Wang, E.M.; Li, W.C.; Braithwaite, G. Pilots’ Visual Scan Patterns and Situation Awareness in Flight Operations. Aviat. Space Environ. Med. 2014, 85, 708–714.
  7. Hogervorst, M.A.; Brouwer, A.; van Erp, J.B.E. Combining and comparing EEG, peripheral physiology and eye-related measures for the assessment of mental workload. Front. Neurosci. 2014, 8, 322.
  8. Jimenez-Molina, A.; Retamal, C.; Lira, H. Using Psychophysiological Sensors to Assess Mental Workload during Web Browsing. Sensors 2018, 18, 458.
  9. Mun, S. Overview of Understanding and Quantifying Cognitive Load. J. Ergon. Soc. Korea 2018, 37, 337–346.
  10. Sargezeh, B.A.; Ayatollahi, A.; Daliri, M.R. Investigation of eye movement pattern parameters of individuals with different fluid intelligence. Exp. Brain Res. 2019, 237, 15–28.
  11. Sekicki, M.; Staudte, M. Eye’ll Help You Out! How the Gaze Cue Reduces the Cognitive Load Required for Reference Processing. Cogn. Sci. 2018, 42, 2418–2458.
  12. Demberg, V.; Sayeed, A. The Frequency of Rapid Pupil Dilations as a Measure of Linguistic Processing Difficulty. PLoS ONE 2016, 11, e0146194.
  13. Majooni, A.; Masood, M.; Akhavan, A. An eye-tracking study on the effect of infographic structures on viewer’s comprehension and cognitive load. Inf. Vis. 2018, 17, 257–266.
  14. Ooms, K.; Coltekin, A.; De Maeyer, P.; Dupont, L.; Fabrikant, S.; Incoul, A.; Kuhn, M.; Slabbinck, H.; Vansteenkiste, P.; Van der Haegen, L. Combining user logging with eye tracking for interactive and dynamic applications. Behav. Res. Methods 2015, 47, 977–993.
  15. Hua, L.; Dong, W.; Chen, P.; Liu, H. Exploring differences of visual attention in pedestrian navigation when using 2D maps and 3D geo-browsers. Cartogr. Geogr. Inf. Sci. 2016, 44, 1–17.
  16. Anagnostopoulos, V.; Havlena, M.; Kiefer, P.; Giannopoulos, I.; Schindler, K.; Raubal, M. Gaze-Informed location-based services. Int. J. Geogr. Inf. Sci. 2017, 31, 1770–1797.
  17. Asan, O.; Yang, Y. Using Eye Trackers for Usability Evaluation of Health Information Technology: A Systematic Literature Review. JMIR Hum. Factors 2015, 2, e5.
  18. Sassaroli, A.; Zheng, F.; Hirshfield, L.M.; Girouard, A.; Solovey, E.T.; Jacob, R.J.K.; Fantini, S. Discrimination of Mental Workload Levels in Human Subjects with Functional Near-infrared Spectroscopy. J. Innov. Opt. Health Sci. 2008, 1, 227–237.
  19. Herff, C.; Heger, D.; Fortmann, O.; Hennrich, J.; Putze, F.; Schultz, T. Mental workload during n-back task-quantified in the prefrontal cortex using fNIRS. Front. Hum. Neurosci. 2014, 7, 935.
  20. Wilson, G.F.; Russell, C.A. Real-time assessment of mental workload using psychophysiological measures and artificial neural networks. Hum. Factors 2003, 45, 635–643.
  21. Mueller, K.; Tangermann, M.; Dornhege, G.; Krauledat, M.; Curio, G.; Blankertz, B. Machine learning for real-time single-trial EEG-analysis: From brain-computer interfacing to mental state monitoring. J. Neurosci. Methods 2008, 167, 82–90.
  22. Noel, J.B.; Bauer, K.W., Jr.; Lanning, J.W. Improving pilot mental workload classification through feature exploitation and combination: A feasibility study. Comput. Oper. Res. 2005, 32, 2713–2730.
  23. Oh, H.; Hatfield, B.D.; Jaquess, K.J.; Lo, L.-C.; Tan, Y.Y.; Prevost, M.C.; Mohler, J.M.; Postlethwaite, H.; Rietschel, J.C.; Miller, M.W.; et al. A Composite Cognitive Workload Assessment System in Pilots Under Various Task Demands Using Ensemble Learning. In Proceedings of the AC 2015: Foundations of Augmented Cognition, Los Angeles, CA, USA, 2–7 August 2015.
  24. Lu, L.; Tian, F.; Dai, G.; Wang, H. A Study of the Multimodal Cognition and Interaction Based on Touch, Audition and Vision. J. Comput.-Aided Des. Comput. Graph. 2014, 26, 654–661.
  25. Zhang, G.H.; Lao, S.Y.; Ling, Y.X.; Ye, T. Research on Multiple and Multimodal Interaction in C2. J. Natl. Univ. Def. Technol. 2010, 32, 153–159.
  26. Wei, L.; Yufen, C. Cartography Eye Movements Study and the Experimental Parameters Analysis. Bull. Surv. Mapp. 2012, 10, 16–20.
  27. Chen, X.; Xue, C.; Chen, M.; Tian, J.; Shao, J.; Zhang, J. Quality assessment model of digital interface based on eye-tracking experiments. J. Southeast Univ. (Nat. Sci. Ed.) 2017, 47, 38–42.
  28. Smerecnik, C.M.R.; Mesters, I.; Kessels, L.T.E.; Ruiter, R.A.; De Vries, N.K.; De Vries, H. Understanding the Positive Effects of Graphical Risk Information on Comprehension: Measuring Attention Directed to Written, Tabular, and Graphical Risk Information. Risk Anal. 2010, 30, 1387–1398.
  29. Henderson, J.M.; Choi, W. Neural Correlates of Fixation Duration during Real-world Scene Viewing: Evidence from Fixation-related (FIRE) fMRI. J. Cogn. Neurosci. 2014, 27, 1137–1145.
  30. Lin, J.H.; Lin, S.S.J. Cognitive Load for Configuration Comprehension in Computer-Supported Geometry Problem Solving: An Eye Movement Perspective. Int. J. Sci. Math. Educ. 2014, 12, 605–627.
  31. Wu, X.; Xue, C.; Gedeon, T.; Hu, H.; Li, J. Visual search on information features on digital task monitoring interface. J. Southeast Univ. (Nat. Sci. Ed.) 2018, 48, 807–814.
  32. Allsop, J.; Gray, R.; Bulthoff, H.H.; Chuang, L. Effects of anxiety and cognitive load on instrument scanning behavior in a flight simulation. In Proceedings of the 2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS), Baltimore, MD, USA, 23 October 2016.
  33. Nayyar, A.; Dwivedi, U.; Ahuja, K.; Rajput, N. OptiDwell: Intelligent Adjustment of Dwell Click Time. In Proceedings of the 22nd International Conference, Hong Kong, China, 8–9 December 2017.
  34. Lutteroth, C.; Penkar, M.; Weber, G. Gaze, vs. Mouse: A Fast and Accurate Gaze-Only Click Alternative. In Proceedings of the 28th Annual ACM Symposium, Charlotte, NC, USA, 8–11 November 2015. [Google Scholar]
  35. Chengshun, W.; Yufen, C.; Shulei, Z. User interest analysis method for dot symbols of web map considering eye movement data. Geomat. Inf. Sci. Wuhan Univ. 2018, 43, 1429–1437. [Google Scholar]
  36. Paas, F.G.; Van Merri, J.J.; Adam, J.J. Measurement of cognitive load in instructional research. Percept Mot Skills 1994, 79, 419–430. [Google Scholar] [CrossRef] [PubMed]
  37. Meshkati, N.; Hancock, P.A.; Rahimi, M. Techniques in Mental Workload Assessment. In Evaluation of Human Work: A Practical Ergonomics Methodology; Taylor & Francis: Philadelphia, PA, USA, 1995. [Google Scholar]
  38. Zarjam, P.; Epps, J.; Lovell, N.H. Beyond Subjective Self-Rating: EEG Signal Classification of Cognitive Workload. IEEE Trans. Auton. Ment. Dev. 2015, 7, 301–310. [Google Scholar] [CrossRef]
  39. Paas, F.; Tuovinen, J.E.; Tabbers, H. Cognitive Load Measurement as a Means to Advance Cognitive Load Theory. Educ. Psychol. 2003, 38, 63–71. [Google Scholar] [CrossRef][Green Version]
Figure 1. Multi-modal interactive information integration model in a virtual reality system.
Figure 2. Eye movement assessment model of cognitive load in a virtual reality system.
Figure 3. Probabilistic neural network model.
Figure 4. The cognitive load evaluated by the probabilistic neural network model is compared with the actual cognitive load.
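The probabilistic neural network of Figure 3 can be sketched minimally as a Parzen-window classifier: the pattern layer places a Gaussian kernel on each training sample, the summation layer averages activations per class, and the output layer selects the strongest class. The smoothing parameter `sigma` and the two-class toy data below are illustrative assumptions, not the authors' trained model.

```python
import math

def pnn_predict(x, train, sigma=0.1):
    """Minimal probabilistic neural network (Parzen-window classifier).

    train maps a class label to a list of training feature vectors.
    """
    scores = {}
    for label, patterns in train.items():
        # Pattern layer: a Gaussian kernel centered on each training vector;
        # summation layer: average the kernel activations per class.
        total = 0.0
        for p in patterns:
            d2 = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
            total += math.exp(-d2 / (2 * sigma ** 2))
        scores[label] = total / len(patterns)
    # Output layer: the class with the highest summed activation wins.
    return max(scores, key=scores.get)

# Hypothetical normalized eye-movement vectors for two load levels.
train = {"low": [[0.1, 0.2], [0.15, 0.1]],
         "high": [[0.8, 0.9], [0.9, 0.85]]}
print(pnn_predict([0.12, 0.18], train))  # prints "low"
```

Because the pattern layer stores every training sample, a PNN needs no iterative training, which suits the small sample sizes typical of eye-movement experiments.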
Table 1. Distribution of test subjects.

| Number of Channels | Cognitive Channel | Subject Serial Numbers |
|---|---|---|
| Single-channel (k = 1) | Vision Q_v | 1, 2, 3, 4, 5 |
| Dual-channel (k = 2) | Visual-auditory Q_vQ_h | 6, 7, 8, 9, 10 |
| Dual-channel (k = 2) | Visual-tactile Q_vQ_t | 11, 12, 13, 14, 15 |
| Three-channel (k = 3) | Visual-auditory-tactile Q_vQ_hQ_t | 16, 17, 18, 19, 20 |
Table 2. Independent variable.

| Number of Cognitive Channels | Classes |
|---|---|
| Single-channel (k = 1) | Vision Q_v |
| Dual-channel (k = 2) | Visual-auditory Q_vQ_h; Visual-tactile Q_vQ_t |
| Three-channel (k = 3) | Visual-auditory-tactile Q_vQ_hQ_t |
Table 3. Cognitive load rating.

| Rating | Meaning |
|---|---|
| 0 | Extremely low cognitive load |
| 0.2 | Very low cognitive load |
| 0.4 | Moderately low cognitive load |
| 0.6 | Moderately high cognitive load |
| 0.8 | Very high cognitive load |
| 1 | Extremely high cognitive load |
| 0.1, 0.3, 0.5, 0.7, 0.9 | Intermediate values between neighboring ratings |
Table 4. Subjective cognitive load questionnaire.

| Number of Channels | Cognitive Channel | Cognitive Load |
|---|---|---|
| Single-channel (k = 1) | Vision Q_v | 0.7 |
| Dual-channel (k = 2) | Visual-auditory Q_vQ_h | 0.5 |
| Dual-channel (k = 2) | Visual-tactile Q_vQ_t | 0.2 |
| Three-channel (k = 3) | Visual-auditory-tactile Q_vQ_hQ_t | 0.1 |
Table 5. Dependent variable.

| Cognitive Channel | Number of Fixation Points b_1 | Mean Fixation Duration b_2 | Average Saccade Length b_3 | Fixation Points at First Mouse Click b_4 | Backward-Looking Times b_5 |
|---|---|---|---|---|---|
| Vision Q_v (k = 1) | 0.2812 | 0.7555 | 0.9492 | 0.5556 | 0.6000 |
| Visual-auditory Q_vQ_h (k = 2) | 0.3438 | 0.6823 | 0.5024 | 0.3333 | 0.5000 |
| Visual-tactile Q_vQ_t (k = 2) | 0.2812 | 0.5115 | 0.6378 | 0.0000 | 0.4000 |
| Visual-auditory-tactile Q_vQ_hQ_t (k = 3) | 0.2500 | 0.2537 | 0.0030 | 0.0000 | 0.2000 |
Table 6. Subjective cognitive load.

| Vision Q_v (k = 1) | Visual-Auditory Q_vQ_h (k = 2) | Visual-Tactile Q_vQ_t (k = 2) | Visual-Auditory-Tactile Q_vQ_hQ_t (k = 3) |
|---|---|---|---|
| 0.7 | 0.5 | 0.2 | 0.1 |
| 0.6 | 0.5 | 0.4 | 0.2 |
| 0.9 | 0.7 | 0.5 | 0.05 |
| 0.7 | 0.4 | 0.3 | 0 |
| 0.7 | 0.4 | 0.4 | 0.06 |
Table 7. Normalized eye movement index data.

| Cognitive Channel | Number of Fixation Points b_1 | Mean Fixation Duration b_2 | Average Saccade Length b_3 | Fixation Points at First Mouse Click b_4 | Backward-Looking Times b_5 | Cognitive Load |
|---|---|---|---|---|---|---|
| Vision Q_v (k = 1) | 0.2812 | 0.7555 | 0.9492 | 0.5556 | 0.6000 | 0.6788 |
| | 0.8438 | 0.4468 | 0.7731 | 1.0000 | 0.5000 | 0.5962 |
| | 1.0000 | 0.6051 | 1.0000 | 0.6667 | 0.6000 | 1.0000 |
| | 0.6250 | 0.5862 | 0.9176 | 0.5556 | 1.0000 | 0.7241 |
| | 0.5000 | 0.5016 | 0.9902 | 0.6667 | 0.8000 | 0.7370 |
| Visual-auditory Q_vQ_h (k = 2) | 0.3438 | 0.6823 | 0.5024 | 0.3333 | 0.5000 | 0.5043 |
| | 0.6250 | 0.5288 | 0.6810 | 0.4444 | 0.3000 | 0.4934 |
| | 0.3750 | 1.0000 | 0.6631 | 0.3333 | 0.3000 | 0.6684 |
| | 0.4375 | 0.2424 | 0.8045 | 0.1111 | 0.4000 | 0.3620 |
| | 0.7188 | 0.0000 | 0.5541 | 0.6667 | 0.7000 | 0.4344 |
| Visual-tactile Q_vQ_t (k = 2) | 0.2812 | 0.5115 | 0.6378 | 0.0000 | 0.4000 | 0.2279 |
| | 0.3125 | 0.2702 | 0.4161 | 0.3333 | 0.3000 | 0.4133 |
| | 0.0938 | 0.5154 | 0.3611 | 0.3333 | 0.3000 | 0.4516 |
| | 0.5625 | 0.4137 | 0.3623 | 0.1111 | 0.4000 | 0.2586 |
| | 0.0625 | 0.3982 | 0.4966 | 0.6667 | 0.5000 | 0.3646 |
| Visual-auditory-tactile Q_vQ_hQ_t (k = 3) | 0.2500 | 0.2537 | 0.0030 | 0.0000 | 0.2000 | 0.0751 |
| | 0.0312 | 0.0051 | 0.0000 | 0.1111 | 0.2000 | 0.2011 |
| | 0.1250 | 0.0935 | 0.2113 | 0.0000 | 0.0000 | 0.0523 |
| | 0.0000 | 0.2675 | 0.1987 | 0.1111 | 0.1000 | 0.0000 |
| | 0.2500 | 0.5775 | 0.1449 | 0.1111 | 0.1000 | 0.0621 |
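The scaling in Table 7 appears to be min-max normalization of each eye movement index to [0, 1]; a sketch under that assumption (the raw counts below are invented for illustration, not the experimental data):

```python
def min_max_normalize(values):
    """Scale raw index values to [0, 1] via (v - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw fixation counts for four trials of one condition.
print(min_max_normalize([32, 50, 32, 20]))  # [0.4, 1.0, 0.4, 0.0]
```

Normalizing each index separately keeps indices with large raw ranges (e.g. saccade length in pixels) from dominating those with small ranges (e.g. regression counts) in the later model.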
Table 8. Correlation between each eye movement characteristic parameter and cognitive load.

| Eye Movement Characteristic Parameter | Number of Fixation Points b_1 | Mean Fixation Duration b_2 | Average Saccade Length b_3 | Fixation Points at First Mouse Click b_4 | Backward-Looking Times b_5 |
|---|---|---|---|---|---|
| r | 0.679878252 | 0.559834694 | 0.863182783 | 0.754462615 | 0.754440400 |
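The r values in Table 8 are presumably Pearson correlation coefficients between each normalized index and cognitive load; a minimal sketch of the computation (the toy vectors are illustrative, not the experimental data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # perfectly linear data: r ≈ 1.0
```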
Table 9. Maximum absolute error and relative mean square error.

| Cognitive Channel | Subjective Cognitive Load | Quantified Cognitive Load | Absolute Error | Maximum Absolute Error | Relative Mean Square Error |
|---|---|---|---|---|---|
| Vision | 0.6788 | 0.7 | 0.0312 | 0.1 | 0.0989 |
| | 0.5962 | 0.6 | 0.0064 | | |
| | 1 | 0.9 | 0.1 | | |
| | 0.7241 | 0.7 | 0.0333 | | |
| | 0.737 | 0.7 | 0.0502 | | |
| Visual-auditory | 0.5043 | 0.5 | 0.0085 | 0.105 | 0.1133 |
| | 0.4934 | 0.5 | 0.0134 | | |
| | 0.6684 | 0.7 | 0.0473 | | |
| | 0.362 | 0.4 | 0.105 | | |
| | 0.4344 | 0.4 | 0.0792 | | |
| Visual-tactile | 0.2279 | 0.2 | 0.1224 | 0.1601 | 0.2321 |
| | 0.4133 | 0.4 | 0.0322 | | |
| | 0.4516 | 0.5 | 0.1072 | | |
| | 0.2586 | 0.3 | 0.1601 | | |
| | 0.3646 | 0.4 | 0.0971 | | |
| Visual-auditory-tactile | 0.0751 | 0.08 | 0.0652 | 0.0652 | 0.0664 |
| | 0.2011 | 0.2 | 0.0055 | | |
| | 0.0523 | 0.05 | 0.044 | | |
| | 0 | 0 | 0 | | |
| | 0.0621 | 0.06 | 0.0338 | | |

Mean of the per-channel maximum absolute errors: 0.107575; mean relative mean square error: 0.127675.
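Table 9's per-channel summary statistics can be reproduced in outline. The definitions below (maximum absolute error over a channel's trials, and mean square error normalized by the variance of the actual values) are assumptions about the paper's formulas, which are not restated here, and the data are toy values:

```python
def max_absolute_error(actual, predicted):
    """Largest |actual - predicted| over one channel's trials."""
    return max(abs(a - p) for a, p in zip(actual, predicted))

def relative_mean_square_error(actual, predicted):
    """MSE normalized by the variance of the actual values
    (one common definition; the paper's exact formula may differ)."""
    n = len(actual)
    mean_a = sum(actual) / n
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    var = sum((a - mean_a) ** 2 for a in actual) / n
    return mse / var

# Toy subjective vs. model-quantified loads (illustrative, not Table 9 data).
actual = [0.7, 0.5, 0.2, 0.1]
predicted = [0.68, 0.52, 0.25, 0.1]
print(max_absolute_error(actual, predicted))  # ≈ 0.05
```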

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Information EISSN 2078-2489. Published by MDPI AG, Basel, Switzerland.