Recognition of Customers’ Impulsivity from Behavioral Patterns in Virtual Reality

Featured Application: The results of this study can provide promising insights for retailers in both physical and future virtual stores. Abstract: Virtual reality (VR) in retailing (V-commerce) has been proven to enhance the consumer experience. Thus, this technology is beneficial for studying behavioral patterns, since it offers the opportunity to infer customers' personality traits from their behavior. This study aims to recognize impulsivity using behavioral patterns. To this end, 60 subjects performed three tasks (one exploration task and two planned tasks) in a virtual market. Four noninvasive signals (eye-tracking, navigation, posture, and interactions), all available in commercial VR devices, were recorded, and a set of features was extracted and categorized into zonal, general, kinematic, temporal, and spatial types. These features were input into a support vector machine classifier to recognize the impulsivity of the subjects, as measured by the I-8 questionnaire, achieving an accuracy of 87%. The results suggest that, while the exploration task can reveal general impulsivity, other subscales such as perseverance and sensation-seeking are more related to planned tasks. The results also show that posture and interaction are the most informative signals. Our findings validate the recognition of customer impulsivity using sensors incorporated into commercial VR devices. Such information can enable a personalized shopping experience in future virtual shops.


Introduction
Imagine that you can go shopping, walk down the aisles, look at the products, interact with them, and purchase them without ever leaving your house. New shops based on virtual reality (VR) [1] aim to provide this opportunity.
Virtual reality is a multi-sensory experience created by the real-time production of graphics in a multi-dimensional framework, depicted in a display technology that provides the user with the possibility of immersing themselves in a virtual environment [2]. The first virtual system was presented by Morton Heilig in the early 1960s using a recorded color film and the ambient properties of sound and scent, but no interaction with the environment [3]. In recent years, this technology has been improved in several aspects such as interactive graphics, head tracking, and interactions between individuals [2]. These technological advancements have resulted in the use of different kinds of VR apparatus in behavioral studies [4], since this technology provides researchers with the possibility of studying scenarios under controlled laboratory conditions [5], and it permits time- and cost-efficient manipulation of behavioral variables in comparison with real situations [6].
Virtual experience in marketing (VEM) for research is a relatively new concept, but the phenomenon has been used for three decades. The use of VR technologies in retailing applications started in the late 1990s [7]. Pioneering VR studies explored the use of virtual environments to simulate physical shopping experiences using low-immersion systems such as computer screens and traditional input devices, e.g., mouse and keyboard [8,9]. In the 2010s, the number of studies using VR interfaces increased, but most of them still relied on medium- or low-immersion displays, such as fish tank interfaces or large stereoscreen systems [7]. For instance, the powerwall setup (a large screen with stereoscopic vision), as a low-immersive stereoscopic display, was used to investigate consumer reactions to VR technologies [10]. Some studies also created a 3D web-based virtual supermarket using a low-immersive desktop to research consumer reactions to marketing strategies, including price and product labeling [11], emotional responses to retail ambiance [12], and reactions to empty shelf space [13]. In a more immersive approach, Van Herpen et al. [14] simulated a virtual supermarket using three 42-inch LCD screens that created a 180° field of view. They used a choice task to compare using VR versus photographs of products. In this study, participants could navigate using a keyboard and mouse. Another fully immersive system that has been used in VEM studies is the head-mounted display (HMD), which blocks stimuli from the external world. This feature, which can provide simulated immersive experiences with a high sense of presence similar to the real world, has made VR an attractive tool for scientific study [15]. The first use of HMD interfaces in VEM was in 1995, to undertake supermarket redesigns at reduced cost [7]. Recently, this technology has attracted researchers' attention. For instance, Bigné et al. [16] compared subjects' gaze patterns by presenting a 360° video in an HMD and a 3D display. This relatively new technology has been used in several studies of consumer behavior [17][18][19][20][21][22][23]; however, extensive research should still be performed to shed light on every aspect of consumer behavior in VEM.
The popularity of VR has increased in recent years in academic and commercial frameworks [24]. Additionally, several efforts have been made to take advantage of VR technologies to replicate the significant achievements of using the internet in commerce [25]. VR technology is playing a prominent role in the contemporary business landscape [26]. Hence, physical as well as online retailers are progressively considering employing VR to endure in the current competitive market and grasp new opportunities [27]. As tangible instances, Alibaba, one of the most renowned retailers, as well as Coca-Cola, McDonald's, and IKEA are currently employing VR technology [28,29].
VEM properties, such as the sense of presence and transferability to reality [30][31][32], make it a suitable tool for studying human behavior. It is known that behavioral signals can reveal subconscious processes and psychological traits [4,33,34]. Thus, VEM can be used to track such signals to study and predict the psychological traits of shoppers as they interact with the virtual environment. Not only can such prediction be applied in the real environment, but it can also help retailers to provide personalized shopping experiences [18] as well as adaptive environments and advertisements [35] in VR-based retailing.
An important personality trait for investigation in VEM is impulsivity, since previous studies indicate that 39% of purchases are the result of impulsive buying [36]. Other studies show that about 50-70% of purchase decisions in supermarkets involve impulsive buying [37]. Impulsive buying is a complicated process in which shoppers decide to purchase immediately without considering the consequences [38]. Impulsive buying is conceived as a problem from the consumers' point of view, but from the marketers' perspective, it is a tactic to enhance the purchase rate [39,40]. In the late 1960s, Kollat and Willett [41] conducted one of the earliest studies of the relationship between impulsive buying and personality traits [42]. The concept of impulsive buying was not new, however; as early as the 1950s, attempts were made to grasp the relation between impulsive buying and different product categories [43].
Impulse buying has been researched from two perspectives: the state of the mind created by the shopping environment [44] and the personality traits of consumers [45]. In this regard, several environmental cues have been studied to determine their effect on the consumer's state of mind and whether these cues urge consumers to buy impulsively [46]. Other studies have investigated the influence of consumers' impulsiveness on the intention to purchase [47,48]. To our knowledge, there has yet to be a study that investigates the relative effects of impulsivity on consumer behavior. Therefore, the goal of this paper is twofold: to examine the influence of impulsivity on consumer behavior using VR, and then, to analyze how behavioral patterns can predict impulsivity.
Impulsivity is an important psychological construct that appears in most major scales of personality [49]. In recent decades, several scales have been proposed to assess impulsivity [50][51][52][53]. Among them is the impulsivity behavior scale (I-8) [54], which is based on the conceptualization of this trait in [49]. It has been used in several studies [55][56][57]. The scale uses a questionnaire that indicates the impulsivity level of the subject in four dimensions: urgency, (lack of) premeditation, (lack of) perseverance, and sensation-seeking [57]. The trait of urgency is defined as the tendency to act rashly when distressed [58]. Premeditation means that the person tends to think and have a plan before acting [59]. Lack of perseverance is described as the inclination to quit tasks that are difficult or boring, and sensation-seeking is the need to look for novel, thrilling, or risky stimulation [60].
In behavioral studies, statistical inference and machine learning are the two most frequently used tools for analysis. However, researchers have become more inclined toward exploiting machine learning [61], which can learn from instances to predict the relationship between inputs and outputs [62]. This property can be applied to data gathered from several types of physiological or behavioral sensors such as eye-tracking, electroencephalography (EEG), heart rate variability (HRV), and functional magnetic resonance imaging (fMRI) [63]. Moreover, these methods can be applied to low-cost, noninvasive signals that are accessible through VR headgear, such as 3D eye movement, gaze information [16], eye blink rate [64], head movement and orientation, and controller (hand) movement and orientation [19]. These signals can be highly informative. For instance, head and eye movement imply macro and micro levels of attention allocation, respectively [4], and eye-tracking represents visual perception and processing, which are related to human behavior, cognitive status, and psychological activities [33]. By tracking the head position in 2D, the navigational pattern of the shopping trip can be derived. This is an important signal since, based on the literature, up to 80% of a shopper's in-store time is allocated to navigation [65].
Recent studies have been conducted in VR environments to study impulsive behavior. For instance, Ketoma et al. investigated impulsive buying by using a virtual grocery store implemented through an HMD. In this study, the analysis was performed based on post-interviews conducted after the VR experience [17]. A second study explored the impact of impulsivity on risky decisions through an immersive virtual reality game and physiological signals [66]. A third used self-assessment to investigate the impact of risky environments on the emerged level of impulsivity and sensation-seeking [67]. Although using VR in studies of consumer behavior is trending, to our knowledge, no research has investigated how impulsivity shapes shopping behavior using virtual behavior tracking: previous works either did not investigate behavioral signals or did not focus on consumer behavior.
The current study was conducted to examine the relationship between customers' behavior while shopping and their impulsivity. Moreover, it explored which kind(s) of behavior could represent different subscales of impulsivity. To achieve this, it was necessary to define tasks that could elicit the types of behavior by which customers can be distinguished based on their impulsivity levels. Hence, the research questions for this study are:
To what extent can shoppers' impulsivity be predicted by their behaviors while shopping in virtual shops?
Which types of behaviors and signals are more informative for predicting shoppers' impulsivity?
Which types of tasks elicit the discriminating impulsivity behaviors?
Which types of feature sets can distinguish subscales of impulsivity?

Participants
For this experiment, 60 healthy individuals (30 females and 30 males) were recruited from outside the European Immersive Neurotechnology's Laboratory (LENI) of the Polytechnic University of Valencia (UPV). Three subjects were excluded because their recorded data were corrupted, so the final sample had 57 subjects (27 females and 30 males, with a mean age of 25.12 years and a standard deviation of 5.06 years). The experiment was conducted over 15 consecutive working days, and each participant spent an average of 45 min from preparation to the final stage. Informed consent was obtained from all subjects involved in the study, and the methods and experimental protocols were performed in accordance with the guidelines and regulations of the local ethics committee of UPV.

Description of the Virtual Environments
To conduct this research, a virtual market measuring 6 × 6 m² with three sections was designed. The first section, which covered most of the shelves, was dedicated to ordinary daily groceries such as milk, soda, and sauces with fake brands. The second section was a shelf of snacks, and the third was a shelf of sneakers. The virtual shop comprised 6 sets of joined shelves, marked with red numbers in Figure 1a. The total height of each set of shelves was 210 cm. The bottom shelf was 60 cm high and was located 30 cm above the ground. The middle and upper shelves were each 55 cm high, and 10 cm was dedicated to the upper edge of the shelves (Figure 1b). Participants could move inside the virtual environment by natural walking, since the dimensions of the virtual store were equal to those of the special tracking zone (Figure 2). The interaction was designed to be as natural as possible, and it consisted of picking, dropping, throwing, and buying. Additionally, physics equations were used to simulate shadows and to make the movement of objects seem natural.
Several stages and tasks were designed, namely, a pre-experiment consent form and questionnaire, a training task, calibration, an exploration task, two planned tasks for buying snacks and a pair of sneakers, and a post-experiment questionnaire.

The Training Task
Before the main tasks, the participants undertook a training task to become familiar with the interfaces. In this task, they learned how to interact with the objects, how to buy the objects, and how to recognize which objects they could not buy in the tasks of the experiment. To avoid any bias towards the contents and materials of the original tasks, the training room was deliberately neutral; even the shapes of the objects did not imply anything about the virtual store. In this task, participants could walk and interact with the objects. They could pick, drop, and throw all the objects, but they could buy only green objects (Figure 3). When they tried to buy a red object, an alarm sound was played through the headphones integrated in the HMD. When they bought green objects, a sound indicating the accomplishment of the purchase played, and the object vanished.

The Exploration Task
At the beginning of this task, the HMD and headphones were readjusted on the participant's head. Then, the eye tracker integrated in the HMD was calibrated. After calibration and reading the instructions about the duties and abilities during the task, the subjects started the task. They were instructed to navigate freely in the virtual store and interact with products, with a time limit of 5 min. In this task, the participants were not allowed to buy the products. This task represents unplanned browsing behavior, in which the customer does not have a specific shopping list in mind; the goal is simply to visit the shop. The tasks always started in the blue circle shown in Figure 4, and if participants finished a task sooner than the planned time, they could go back to the blue circle. If participants ran out of time, they could no longer interact with the objects, and a message appeared in the virtual environment asking them to go back to the blue circle. The remaining time was shown on the virtual controller (Figure 4). After finishing the task, the participant answered a questionnaire embedded in the VR while standing in the blue circle.

The First Planned Task: Buying Snacks
This task simulates the behavior of shoppers who have a specific shopping goal. Participants were asked to perform a forced search task in which they were to purchase some potato chips with a limited budget of EUR 5 given to them. Participants could spend less than their maximum budget. There was only one shelf containing the target products, which is shown in Figure 5. During the task, they could pick up, read the details of, and drop all the products, but they could buy only snacks. Once purchased, the product vanished, and they could not put it back.

The Second Planned Task: Buying Sneakers
Like the first planned task, participants were instructed to purchase a pair of sneakers with a limited budget of EUR 180. In this task, shoppers were told they could choose among different colors, types, and prices. Figure 6a shows the shelf containing the sneakers and their colors, prices, and locations, and Figure 6b shows the position of the target shelf in the virtual environment. The difference between the two planned tasks lies in the type and price of the products. The products in the first planned task did not require much time to think before making a decision, but in the second planned task, it was assumed that the participant would deliberate more before selecting a product.

Apparatus and Signals
The virtual environment for the experiment was developed using the Unity 3D V2018.2.21f1 game engine [68] and presented immersively through an HTC Vive Pro Head Mounted Display (HMD, HTC Corporation, Taoyuan City, Taiwan) [69]. In addition, communication between the HMD and the four Base Stations in the corners of the special tracking zone was wireless, which facilitated movement in the virtual shop. The participants could interact with the virtual objects using HTC Vive Pro Controllers (HTC Corporation, Taoyuan City, Taiwan).
During the experiment, sensors implemented in the HMD provided eye-tracking data with a nominal 120 Hz sample rate (in practice, a mean of 76.1 Hz with a standard deviation of 8.4 Hz in this experiment) and an accuracy of 0.5°-1.1°, as well as head position data with a sampling rate equal to that of the eye tracker [70]. In addition, hand positions were provided by the sensors in the controllers. The Unity game engine recorded every event in the experiment, such as picking up, dropping, and purchasing the products.

Data Preprocessing
Data pre-processing was performed in two steps. In the first step, corrupted data were removed. In this phase, 3 participants were excluded. In the second step, features were extracted from the raw data. These analyses were done using Python 3.7 in the Jupyter environment.

Feature Extraction
Eye-tracking (ET) features were extracted using simple kinematic definitions, and data points were classified into fixations and saccades using the algorithm presented in [71]. Moreover, the following signals were defined. Navigation (NAV) is the projection of the head position onto the 2D floor plan. Posture (POS) comprises the 3D head and hand positions. All the interactions with the products, such as picking, dropping, and purchasing, were considered interactions (INT). For all the features except INT, two main categories were defined. The first category, called general features, considered the whole experiment, and the second category, called zonal features, applied to areas of interest (AOI) and zones of interest (ZOI). In both categories, three sub-categories were defined: temporal, spatial, and kinematic, which are related to time, space, and movement, respectively. Hence, each set of features comprises six sub-categories: general-temporal, general-spatial, general-kinematic, zonal-temporal, zonal-spatial, and zonal-kinematic.
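The exact fixation/saccade classification is the algorithm of [71]; purely as an illustration of the general approach, a minimal velocity-threshold (I-VT) classifier can be sketched as follows. The 30°/s threshold and the input format (unit-normalizable gaze direction vectors plus timestamps) are assumptions, not values taken from the study:

```python
import numpy as np

def classify_ivt(gaze_dirs, timestamps, velocity_threshold=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' by thresholding
    the angular velocity between consecutive gaze direction vectors."""
    g = np.asarray(gaze_dirs, dtype=float)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)   # unit vectors
    cosines = np.clip(np.sum(g[:-1] * g[1:], axis=1), -1.0, 1.0)
    angles_deg = np.degrees(np.arccos(cosines))        # rotation per interval
    velocity = angles_deg / np.diff(timestamps)        # deg/s
    labels = np.where(velocity > velocity_threshold, "saccade", "fixation")
    # the first sample inherits the label of the first interval
    return np.concatenate(([labels[0]], labels))
```

From these labels, fixation counts and durations (temporal features) or saccade amplitudes (kinematic features) can then be aggregated per task or per AOI.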
In terms of ZOIs, the floor plan was segmented into four ZOIs: Shelf, Adjacent, Near, and Far, based on proximity to the shelves. As shown in Figure 7a-c, all the zones with the same color belong to one ZOI, and the rest of the area falls into the Far zone. In addition, as presented in Figure 7d, in all the ZOIs except Far, the vertical dimension of the shop was divided into three levels, namely Up, Middle, and Down, based on the levels of the shelves. By combining the ZOIs and levels, the space in the virtual shop was divided into AOIs, which are called Shelf_Down, Adjacent_Up, etc. Note that, in the Far zone, the whole vertical dimension is considered one level. This means that, in that ZOI, the entire space is considered one AOI, which is called "Far". Using a genetic algorithm (GA) with a modified Fisher criterion as a cost function, the optimal widths of Adjacent and Near were set at 18 and 13 cm, respectively [19]. Note that the lengths of these two zones are fixed and equal to the lengths of the shelves. As an exception, in the planned tasks, the Near zone covers all the space in front of the shelf beyond the Adjacent zone, so only the width of Adjacent had to be determined. This width was also set to 18 cm, as found in the exploration task.
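The ZOI/AOI assignment described above can be sketched as a simple position lookup. The zone widths follow the GA-optimized values reported in the text (18 and 13 cm), but the shelf footprints and the vertical level boundaries (derived loosely from the shelf heights in Figure 1b) are illustrative assumptions:

```python
def assign_aoi(x, y, z, shelves, adjacent_w=0.18, near_w=0.13):
    """Map a position (x, y on the floor plan, z = height in metres) to an
    AOI label such as 'Shelf_Down' or 'Far'. `shelves` is a list of
    axis-aligned shelf footprints (xmin, ymin, xmax, ymax)."""
    def dist_to_rect(px, py, rect):
        xmin, ymin, xmax, ymax = rect
        dx = max(xmin - px, 0.0, px - xmax)
        dy = max(ymin - py, 0.0, py - ymax)
        return (dx * dx + dy * dy) ** 0.5

    d = min(dist_to_rect(x, y, r) for r in shelves)
    if d == 0.0:
        zone = "Shelf"
    elif d <= adjacent_w:
        zone = "Adjacent"
    elif d <= adjacent_w + near_w:
        zone = "Near"
    else:
        return "Far"  # single AOI spanning the whole vertical dimension
    # illustrative level boundaries based on the shelf heights in the text
    level = "Down" if z < 0.90 else ("Middle" if z < 1.45 else "Up")
    return f"{zone}_{level}"
```

Zonal features are then computed by grouping the recorded samples and events by the AOI label returned here.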

Characterization of Impulsivity
In this study, the goal was to classify shoppers based on their impulsivity. For this purpose, each participant completed the I-8 questionnaire [54] to indicate their impulsivity level. The questionnaire uses a 5-point Likert scale, where 5 means a high level of impulsivity. It has 4 subscales that determine the levels of urgency, premeditation, perseverance, and sensation-seeking. The levels of urgency and sensation-seeking are in line with impulsivity, and the levels of perseverance and premeditation are inversely related to impulsivity. This means that high scores in perseverance and premeditation imply a low level of impulsivity.
In this study, the shoppers were divided into two categories based on the median of total scores. The total scores were calculated by averaging the scores of the answers.
Note that the scores for questions related to perseverance and premeditation were reversed by subtracting the score from 6.
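This scoring and median-split procedure can be sketched as follows. The set of reversed item indices shown here is hypothetical; the real assignment of items to the premeditation and perseverance subscales is defined by the I-8 questionnaire [54]:

```python
import numpy as np

def impulsivity_groups(answers, reversed_items):
    """answers: (n_subjects, n_items) Likert scores in 1-5.
    reversed_items: column indices of premeditation/perseverance items,
    whose scores are reversed (6 - score) before averaging.
    Returns the total scores and a binary high/low impulsivity label
    obtained from a median split of the totals."""
    scored = np.asarray(answers, dtype=float).copy()
    scored[:, reversed_items] = 6 - scored[:, reversed_items]
    totals = scored.mean(axis=1)
    high = (totals > np.median(totals)).astype(int)  # 1 = more impulsive half
    return totals, high
```

Subjects exactly at the median fall into the low group under this convention; the paper does not specify how ties were broken.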

Machine Learning
To classify the shoppers based on their behavior inside the virtual market, a support vector machine (SVM) classifier was used, together with several preprocessing steps. These are described in the following sections.

Normalization
The features were normalized according to their maximum and minimum so that they mapped onto the interval [0, 1].
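A minimal min-max normalization sketch. Fitting the range on the training data only (and guarding constant features against division by zero) is our addition, not a detail stated in the paper:

```python
import numpy as np

def minmax_fit(X_train):
    """Learn the per-feature minimum and span on the training data; the
    span of a constant feature is set to 1 so the transform stays finite."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return lo, span

def minmax_apply(X, lo, span):
    """Map features onto [0, 1] relative to the fitted range."""
    return (X - lo) / span
```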

Feature Selection
When the number of features is high, the model is likely to become over-fitted to the training samples [72]. To avoid this issue, the dimensions of the feature sets were reduced in several steps. In the first step, the following criteria were applied to remove features:

• The standard deviation of the feature vector is zero, or the standard deviation normalized by the mean is infinitesimal (σ/µ < 10⁻¹⁰);
• The feature is zero for over 80% of the subjects;
• The feature vector is highly correlated with another feature vector (Pearson correlation coefficient > 0.95).
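The three filter criteria above can be sketched as follows. Whether the correlation test uses the absolute coefficient, and the order in which correlated features are dropped (earlier-kept features win here), are assumptions not specified in the text:

```python
import numpy as np

def filter_features(X, rel_std_eps=1e-10, zero_frac=0.80, corr_max=0.95):
    """Return the column indices that survive the three filter criteria:
    (near-)constant features, mostly-zero features, and features highly
    correlated with an already-kept feature are dropped."""
    keep = []
    for j in range(X.shape[1]):
        col = X[:, j]
        std, mean = col.std(), col.mean()
        if std == 0 or (mean != 0 and std / abs(mean) < rel_std_eps):
            continue                      # criterion 1: (near-)constant
        if np.mean(col == 0) > zero_frac:
            continue                      # criterion 2: zero for >80%
        if any(abs(np.corrcoef(col, X[:, k])[0, 1]) > corr_max for k in keep):
            continue                      # criterion 3: redundant feature
        keep.append(j)
    return keep
```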
In the second step, the features were reduced to a maximum of 50 using the area under the curve (AUC) filtering method [73]. This step did not apply to the sets of features with dimensions lower than 50, e.g., NAV.
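The exact AUC filter is the one of [73]; a common approximation scores each feature alone as a classifier for the binary label via the Mann-Whitney formulation of the AUC. Ranking by |AUC − 0.5|, so that features informative in either direction are kept, is an assumption:

```python
import numpy as np

def auc_filter(X, y, k=50):
    """Keep the k column indices whose single-feature AUC (computed from
    the Mann-Whitney U statistic) deviates most from chance (0.5)."""
    pos, neg = X[y == 1], X[y == 0]
    scores = []
    for j in range(X.shape[1]):
        diffs = pos[:, j][:, None] - neg[:, j][None, :]
        auc = (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size
        scores.append(abs(auc - 0.5))
    order = np.argsort(scores)[::-1]          # most discriminative first
    return sorted(order[: min(k, X.shape[1])].tolist())
```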
In the last step, the features were reduced to a dimension between 5 and 10 by applying the backward elimination (BE) method. This step allowed choosing an optimal number of features between 5 and 10 based on a cross-validation score. The classifier used to perform BE was an SVM.
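A greedy backward-elimination sketch scored by an SVM with stratified 10-fold cross-validation. Default `SVC` hyper-parameters are an assumption for brevity (the study tunes them; see Table 1), as is the greedy drop-the-least-harmful-feature strategy:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def backward_elimination(X, y, min_k=5, max_k=10, seed=0):
    """Repeatedly drop the feature whose removal hurts the CV score least,
    and return the best-scoring subset with min_k to max_k features."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)

    def cv_score(cols):
        return cross_val_score(SVC(), X[:, cols], y, cv=cv).mean()

    cols = list(range(X.shape[1]))
    best_cols, best_score = list(cols), -np.inf
    while len(cols) > min_k:
        # evaluate removing each remaining feature; keep the kindest removal
        score, drop = max((cv_score([c for c in cols if c != j]), j)
                          for j in cols)
        cols.remove(drop)
        if len(cols) <= max_k and score > best_score:
            best_cols, best_score = list(cols), score
    return best_cols
```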

Classification with Cross-Validation
The SVM classifier was chosen to model the behavior of the shoppers. Here, a cross-validation method, i.e., stratified K-fold cross-validation with 10 folds, was used. In this method, each subject could be used to test the accuracy of the model created with the rest of the observations. The folds helped to reduce the impact of diversity in the distributions of the testing and training data and to tune the hyper-parameters, as summarized in Table 1. In addition, to mitigate the effects of variability, this procedure was repeated in 50 different runs. In the end, the average accuracy over all the repetitions was reported as the prediction accuracy. The pseudo-code for the machine learning procedure is summarized in Table 1:
• Apply BE to reduce the dimension to between 5 and 10 according to K-fold cross-validation with K = 10.
• Save the selected feature set (for further analysis).
• For each of the 10 folds: split the data into training and testing data; test different hyper-parameters and compute the accuracy.
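The repeated cross-validation loop can be sketched as follows, assuming fixed `SVC` hyper-parameters for brevity (the study tunes them per fold, as summarized in Table 1). Cohen's kappa is computed alongside accuracy, matching the two quantities reported in Tables 3 and 4:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate(X, y, n_runs=50, n_folds=10):
    """Stratified 10-fold cross-validation repeated over n_runs shuffles;
    returns (mean, std) of accuracy and of Cohen's kappa across runs."""
    accs, kappas = [], []
    for run in range(n_runs):
        cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=run)
        pred = cross_val_predict(SVC(kernel="rbf", C=1.0), X, y, cv=cv)
        accs.append(accuracy_score(y, pred))
        kappas.append(cohen_kappa_score(y, pred))
    return (np.mean(accs), np.std(accs)), (np.mean(kappas), np.std(kappas))
```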

Impulsivity Self-Assessment
Applying the method outlined in the section on the characterization of impulsivity, the observations were divided into two significantly different groups, as indicated by the p-values in Table 2. Based on Table 2, the groups were balanced, and the centers of the populations in the two groups were different. Cronbach's alpha indicated that all the scales and subscales except sensation-seeking were internally consistent for this sample.

Recognition of Impulsivity
The machine learning results are presented in two sections: accuracy and feature selection.

Results for Accuracy
The machine learning results for impulsivity and its subscales are reported in Table 3. Each scale has two columns consisting of the average of the accuracies and Cohen's kappa coefficient, with their standard deviations in parentheses. The results are categorized by signal, and each signal is divided by task. In addition, for HBT data, the results of combinations of the tasks are reported in Table 4. In that table, the second row combines the features of the two planned tasks, and the third row combines all the tasks. Moreover, the colors in these tables are chosen according to the kappa value: green represents the most accurate result and red the least accurate, so the higher the kappa, the closer the color is to green.
To recognize impulsivity, the kappa values in Table 3 indicate that the best combination was the POS & INT signal and the exploration task, with a kappa of 0.64. Additionally, the navigational pattern in this task could distinguish between the different levels of impulsivity. Moreover, the exploration task could elicit impulsivity and provide a more precise classification overall. For its part, the ET signal did not provide an acceptable level of discrimination. Based on Table 4, combining the signals in HBT improved the kappa by more than 10%, but combining all the tasks did not improve kappa by a considerable amount (less than 3%). In addition, as Table 3 shows, the highest kappa achieved in the first and second planned tasks separately was 0.36, but combining the two planned tasks and using HBT data yielded a kappa of 0.56.
Considering the trait of urgency, Table 3 shows that the best signal was POS & INT in the second planned task, with a kappa equal to 0.44. According to Table 3, when the second planned task was performed, urgency could be predicted by any of the three signals. However, in the exploration task, none of the signals recognized this trait. Table 4 suggests that, to recognize this trait, combining the signals and the tasks can improve the results by a considerable percentage (more than 18%).
In the case of premeditation, the kappa values in Table 3 show that the ET and NAV signals failed to create any distinction between the levels of that trait. However, the POS & INT signal provided more acceptable results in the exploration task, with a kappa equal to 0.39. Based on Table 4, combining the signals and the tasks led to considerable improvement in kappa, such that premeditation could be predicted in almost all the tasks. Hence, to predict premeditation, Table 4 suggests considering all the signals together; otherwise, the results will be poor, according to Table 3.

For the trait of perseverance, Table 3 indicates that it could not be recognized by the ET and NAV signals, and the best result belonged to the POS & INT signal in the second planned task, which was relatively weak (with a kappa equal to 0.38). On the other hand, Table 4 shows an improvement in kappa of more than 65% when the planned tasks were combined and all the signals were used together. Moreover, if the exploration task is also included in the prediction, the improvement in kappa is around 79%. This result suggests using all three kinds of tasks and all the signals together to recognize perseverance.
Of all the dimensions in Table 3, sensation-seeking was the easiest trait to predict. According to the kappa values in that table, the POS & INT and ET signals predicted the level of sensation-seeking successfully in almost all the tasks. However, the NAV signal produced weak results (kappa below 0.31). Moreover, Table 4 shows that using both planned tasks and combining the signals improved the best result in Table 3 (a 25% improvement, yielding a kappa of 0.56). However, combining the exploration task with the other tasks did not provide any notable improvement (only 3%).

Results of Feature Selection
The results of feature selection are shown in Figures 8 and 9. These two figures represent the normalized number of occurrences of features extracted from the signals in each category over the 50 runs.

To recognize impulsivity, Figure 8 shows that general features, with a selection frequency of 55.26%, contributed more than zonal features. Additionally, this figure shows that among the general features, the temporal features were the most important, with 22.08% of the occurrences, followed by the kinematic category. On the other hand, Figure 9 shows that kinematic features in total were selected most often, accounting for 41.54% of all the selected features, with the bigger portion of them belonging to the zonal category. The kinematic category became dominant overall because the margin between the kinematic and temporal categories within the general features was slim.
For the urgency trait, the zonal and general features contributed almost equally (51.01% and 48.99%, respectively), according to Figure 8. In both categories, kinematic features dominated. Figure 9 also confirms that the kinematic category had the highest number of selections, with 42.04% of the occurrences.
For the trait of perseverance, Figure 8 shows that zonal features appeared more often among the selected features (61.10% versus 38.90%). In the zonal category, kinematic features were dominant. Figure 9 also shows the superiority of kinematic features; however, this is a very narrow advantage of around 2%.
Premeditation and sensation-seeking had almost the same results based on Figure 8. For these two traits, zonal features led, and according to Figure 9, kinematic features were more important.

Results Discussion
This study was conducted to investigate the possibility that impulsivity (based on the I-8 model) could be predicted using the behavior of consumers while they shopped. The results showed that the POS & INT features were the best signal for predicting impulsivity, with a kappa equal to 0.64 in the exploration task; this was the best result among all the investigated signals. The results also showed that impulsivity and body posture are strongly correlated, which is in line with previous studies showing that body posture can affect impulsive buying [74]. Moreover, the exploration task was very informative for revealing impulsivity. That is logical, since impulsive purchases happen when the customer decides to buy a product suddenly, not when he or she has a target product or a purchase plan. On the other hand, general-category and kinematic features were selected more often for impulsivity recognition. This means that impulsivity changes the overall shopping behavior over the whole shopping time.
For the trait of urgency, the planned task of buying sneakers was more informative. In that task, the shoppers were told to buy only one pair of sneakers, which are purchased only occasionally and have relatively high prices. In the task of buying snacks, by contrast, the participants could choose more than one cheap everyday product. It seems that purchasing sneakers put more emotional pressure on the shoppers, and this, in turn, elicited the trait of urgency, which is the tendency to act rashly in response to heightened affect. This impact could be recognized in all the signals recorded from the participants. This means that if the task is chosen carefully, urgency can be clearly recognized in the ET, NAV, and POS & INT signals, with POS & INT more discriminative than the other two.
Premeditation was the hardest trait to predict. According to our findings, recognizing this trait required the combination of all the tasks and signals. The results show that premeditation had a low impact on the behavior of the person. However, this trait, which is related to having a plan before doing a task, was evoked more by the exploration task. It can be inferred that when there is a plan to make a purchase, all customers act similarly, but when there is no prior purchase plan, individuals with a high level of premeditation face a new situation and act differently. Furthermore, this trait is correlated with zonal and kinematic features. It seems that when shoppers are in the zones of interest and must make a decision, they act differently if their personality requires them to have a plan. Otherwise, this trait does not significantly change behavior during the rest of the task.
The second most difficult trait to predict was perseverance, which also required all the signals and tasks to be combined, although the POS & INT signal was more informative than the others. In addition, the second and then the first planned task evoked perseverance more than the exploration task, and combining these two tasks improved kappa considerably. This trait is related to following a plan to the end; hence, planned tasks elicit it more, since an exploration task has no plan to accomplish. Moreover, zonal features contributed more toward recognizing this trait, because the person accomplished the plan in the zones of interest.
In this study, the easiest trait to predict was sensation-seeking, since it could be recognized in all the tasks and through two of the three signals, i.e., the ET and POS & INT signals. Regarding the tasks, this result can be justified by the fact that every task offers risks and novel sensations to be explored. Based on our findings, this trait also affects behavior more when the person is in the zones of interest.
Observing the study as a whole, HBT signals, including eye-tracking, body posture, navigation, and interaction, can reveal impulsivity and very likely other personality traits. In addition, the results show that combining tasks improves the accuracy of the predictions.
Based on our findings, the best signal for recognizing impulsivity and its subscales is POS & INT, with ET in second place. In addition, the best type of features to extract was the kinematic category; kinematic features were selected more often in all the dimensions, which in turn conveys that they were more informative. Moreover, regarding the zonal versus general categorization, zonal features were more informative for recognizing premeditation, perseverance, and sensation-seeking, while for impulsivity and urgency the opposite was true. Hence, it cannot be inferred that, because zonal features were more important in most of the subscales, overall impulsivity should follow the same pattern. This can be justified because impulsive behavior can happen during the whole shopping journey, not just in choosing products impulsively but also in impulsive turns, changes in eye direction, etc. Urgency is likewise a trait that the person shows when feeling a shortage of time, which relates to the whole task. On the contrary, premeditation, perseverance, and sensation-seeking are more related to product selection, since if there is no product to select, there is no need for a prior plan, no task to be completed, and no risk to be taken, and product selection happens mostly in the areas and zones of interest. Furthermore, human traits are very complex and are not linearly dependent on each other, even when they are highly correlated. Hence, given the nonlinear relationship between impulsivity and its subscales, we should not expect a direct relationship between them.

Limitations
Along with the achievements of this study, we faced some limitations. First, the number of subjects (60) was low because of cost limitations. This number restricted us to simple models such as SVM, K-nearest neighbors (KNN), and the decision tree classifier (DTC), and we selected SVM as our machine learning solution. With a higher number of participants in the future, it will be feasible to use more sophisticated machine learning methods, such as neural networks or deep learning, to achieve more accurate results. Moreover, self-assessment methods for examining personality traits have limitations: the person may be unable or unwilling to respond correctly. In addition, there are some technological limitations in the current performance of HMDs, such as the vergence-accommodation conflict, which happens when the brain identifies a discrepancy between the distance of a virtual 3D object and the focusing distance needed for the eye to perceive the object [75], and the screen-door effect, which consists of observing the mesh between pixels on near digital screens [76].
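With only 60 subjects, small-sample validation schemes such as leave-one-out are the usual way to evaluate simple classifiers of the kind mentioned above. The following is an illustrative pure-Python sketch of leave-one-out evaluation using a 1-nearest-neighbor classifier on synthetic two-feature data; it is not the study's actual pipeline, which used an SVM with its own feature set:

```python
def nn_predict(train, test_x):
    """Label of the closest training point (squared Euclidean distance)."""
    return min(train,
               key=lambda xy: sum((a - b) ** 2 for a, b in zip(xy[0], test_x)))[1]

def loocv_accuracy(data):
    """Leave-one-out accuracy: hold out each sample once, train on the rest."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += nn_predict(train, x) == y
    return hits / len(data)

# Synthetic data: two well-separated classes of (feature-vector, label) pairs
data = [((0.0, 0.1), 0), ((0.2, 0.0), 0), ((0.1, 0.2), 0),
        ((1.0, 1.1), 1), ((1.2, 1.0), 1), ((1.1, 0.9), 1)]
print(loocv_accuracy(data))  # → 1.0
```

Leave-one-out uses every sample for both training and testing, which is why it is favored when collecting more participants is costly.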

Conclusions and Future Research
The main achievement of this study is that, using only noninvasive signals such as eye tracking or head and hand position, which can easily be obtained through VR facilities, consumer impulsivity and its subscales can be predicted. This will help to tackle one of the emerging obstacles that future virtual retailers will face. Using these findings, in future virtual shops, retailers can provide a pleasant, personalized shopping experience for their customers, which in turn increases the enjoyment of shopping. This method can be extended to other personality traits, and it can be transferred to physical stores. In the future, this kind of research can be conducted with higher-resolution devices to address the limitation of the screen-door effect. Moreover, the same methodology can be replicated in real stores with real customers using augmented reality glasses and body-tracking cameras, which would increase both the number of participants and the ecological validity. This methodology could also likely be used to predict demographic features of customers such as age and gender. Considering that shopping in virtual shops does not require leaving home, personalizing the shopping experience would greatly help elderly and disabled individuals find their target products in a shorter time. In the future, it is recommended to acquire trait data through games or electrophysiological signals in research environments and through noninvasive signals such as HBT in commercial environments, in order to cross-check the assessment results.

Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions on sensitive data categories.

Conflicts of Interest:
The authors declare no conflict of interest.