Article

An Identity Recognition Model Based on RF-RFE: Utilizing Eye-Movement Data

Public Security Behavioral Science Lab, People’s Public Security University of China, Beijing 100038, China
* Author to whom correspondence should be addressed.
Behav. Sci. 2023, 13(8), 620; https://doi.org/10.3390/bs13080620
Submission received: 15 June 2023 / Revised: 20 July 2023 / Accepted: 20 July 2023 / Published: 26 July 2023

Abstract

Can eyes tell the truth? Can the analysis of human eye-movement data reveal psychological activities and uncover hidden information? Lying is a prevalent phenomenon in human society, but research has shown that people's accuracy in identifying deceptive behavior is not significantly higher than chance level. In this study, simulated crime experiments were carried out to extract the eye-movement features of 83 participants while they viewed crime-related pictures with an eye tracker, and the importance of these eye-movement features was analyzed through interpretable machine learning. In the experiment, the participants freely chose to join one of three groups: the innocent group, the informed group, or the crime group. During the test, the eye tracker was used to extract five categories of eye-movement indexes within the area of interest (AOI), namely fixation time, fixation count, pupil diameter, saccade frequency, and blink frequency, and the differences in these indexes were analyzed. Building upon interpretable learning algorithms, we further assessed the contribution of these metrics. As a result, the RF-RFE suspect identification model was constructed, achieving a maximum accuracy of 91.7%. The experimental results further support the feasibility of utilizing eye-movement features to reveal inner psychological activities.

1. Introduction

When a person's eyes begin to avert, caution is warranted: he or she may be starting to think about how to fabricate a lie. How to accurately identify lies has been a topic of human inquiry for ages. Methods of identifying lies have passed through three stages [1]: the divine knowledge method, the criminal knowledge method, and the instrument knowledge method, with the human knowledge method running through all of them. The divine knowledge method relies on the gods to judge truth or falsity, while the criminal knowledge method relies on physical torture, or disguised physical torture, to test the truth of statements. Compared with the primitive divine knowledge method and the cruel criminal knowledge method, the human knowledge method relies on human experience and wisdom to make judgments without external means; however, this inevitably introduces subjective assumptions and is difficult to generalize because it depends on particular conditions. In contrast to the human knowledge method, the instrument knowledge method identifies lies by means of equipment, mainly by recording physiological changes controlled by the autonomic nervous system. These changes can be measured and recorded objectively and stably, thus reducing the impact of subjective errors in judgment caused by the internal tendencies of police officers.
With the rapid development of psychology, neurology, computer science, and other disciplines, tools and methods for identifying lies have gradually improved. In 1921, Larson developed an instrument capable of recording blood pressure and respiration by combining a sphygmomanometer and a respirometer [2]. In 1926, Leonarde Keeler further improved the polygraph by adding a galvanic skin response (GSR) channel to the original design and also enhanced its portability [3]. Reid further improved the accuracy of the polygraph in 1945 by simultaneously recording test participants' blood pressure, pulse, respiration, skin conductance, and muscle activity. From the 1960s onward, polygraphs began to develop in the direction of electronics, and in this process, more and more researchers adopted eye-tracking technology for testing because of its stability and objectivity, its low level of interference, and its ability to largely bypass the psychological defenses of offenders. At the same time, on the basis of accumulated practical experience, experts developed a variety of targeted polygraph methods such as the GKT (Guilty Knowledge Test), QCT (Question Crossing Test), and CQT (Control Question Test). The GKT is also employed as the experimental method in this study. Derived from the "oddball task" paradigm, the GKT consists of probe stimuli and irrelevant stimuli. The probe stimuli mainly contain information related to the crime, while the irrelevant stimuli are random and unrelated to the measurement target. The objective of the GKT is to elicit participants' recollection of knowledge and personal experiences. Depending on whether a participant possesses the relevant knowledge or experiences, distinct differences in the recorded indicators will be observed on the testing apparatus, and these differences are used to determine the participant's association with the case or involvement in criminal activities.
The GKT has received extensive research support, primarily from simulated experiments. Ben-Shakhar and Furedy [4] summarized ten studies in which 84% of criminals and 94% of innocent individuals were correctly classified. Elaad [5] reviewed 15 simulated crime studies, finding that the average detection rate was 80.6% for guilty participants and 95.9% for innocent participants. Notably, no false positive identifications were observed in these studies, meaning innocent suspects were not wrongly classified as guilty. Nevertheless, given the limitations of the GKT itself and the practicalities of running it, this study also encountered some difficulties in the experimental process: (1) there was a gap between simulated crime scenes and real crime scenes, so the simulation was not realistic enough; (2) there were false confessions by perpetrators, and the previous approach of oral questioning and identification was not objective or accurate enough; and (3) the experimental data had many dimensions and were difficult to analyze.
To address these problems, we improved the experiment in three ways. (1) To create a more realistic crime environment, a "task-reward" stimulus was introduced. Prior to the formal experiment, a questionnaire was used to investigate how large a "crime gain" was needed to motivate people to commit the simulated crime when they chose their own identity to complete the "crime task" and thus experienced different levels of psychological pressure. (2) In the test session, we used a mixture of portraits, objects, and scenes, and mixed the crime-related pictures with accompanying pictures to improve objectivity. (3) Instead of traditional indicators such as skin conductance, respiration, heart rate, and blood pressure, we used the less intrusive eye-movement technique and extracted several eye-movement indicators; because such multi-dimensional data are difficult to analyze directly, an interpretable machine learning method was adopted.
In this paper, based on eye-tracking technology, we obtained the eye-movement indicators of three groups of participants who chose to play innocent, informed, and criminal roles in a simulated crime scenario while they viewed portrait, object, and scene pictures, and we then used these indicators to construct a suspect identification model. The second section reviews research on suspect identification, eye-tracking-based recognition, and applications of interpretable learning. The third section details the experimental design and the results obtained using statistical analysis methods. The fourth section describes the random forest model based on the RFE algorithm and analyzes which features contribute most to the model's predictions, so that more accurate identification can be achieved with fewer metrics.

2. Literature Review

2.1. Suspect Identification Research

Footprints are among the most common pieces of physical evidence at crime scenes and play an important role in determining the identity of a suspect [6]. In Western countries, police consult doctors in the hope that they can help analyze the footprints found at a crime scene and provide direction for identifying the perpetrator [7]. However, footprints can be challenging for investigators because they may be incomplete or their clarity may be degraded by external conditions, making identification more difficult. Currently, the Gunn Method, the Optical Center Method, the Overlay Method, the Reel Method, and others are commonly used for footprint analysis [8]. Norman Gunn was the first to develop footprint analysis by taking linear measurements of footprints at different locations [9]. Robbins et al. built regressions of footprint length and width on height and weight by collecting footprint data together with height and weight information from 500 people, noting that each person's footprint is unique [10]. Kennedy et al. [11,12] collected flat footprints from 3000 people from 1995 to 2005; by selecting representative populations, collecting samples repeatedly at regular intervals, and using computers to extract a total of 38 eigenvalues for input into a footprint library with the help of detectives, they achieved a mismatch rate as low as one in a billion. With the development of information technology, some researchers began to use pressure sensors to obtain footprints. Jung et al. used pressure sensors to collect footprint samples from 120 people and extracted features such as footprint area and pressure center to identify the participants with an accuracy of 97.8% [13]. In addition, there are many other tracking methods, such as face recognition, that are used in many scenarios to help track drug traffickers, find missing persons, monitor suspects, and so on. Abdullah et al. used PCA to implement face recognition, especially for cases where no fingerprints were left at the crime scene [14]. Kakkar and Sharma constructed a criminal identification system using the Haar cascade classifier, which tags images by finding specific Haar features and allows scanned images to be compared with still images or video streams to complete detection [15]. Lip prints have also been confirmed to be unique to a person [16]. By extracting the lip prints of 100 participants, Dwivedi et al. not only confirmed that the difference between male and female lip prints was statistically significant but also showed that the matching rate could reach 82%, indicating good identification performance [17]. Scent can also serve as a vehicle for identifying suspects. In the experiment of Penn et al., 197 individuals were sampled every two weeks for ten weeks, and the results showed that correct identification requires sampling the whole odor profile [18]. Cuzuel et al. sampled hand odors and characterized them using GC×GC–MS chromatography; using a Bayesian framework, they were able to estimate the probability that odor samples came from the same person with an accuracy higher than 98%, which is suitable for forensic application [19].

2.2. Eye-Movement Technology in Identifying Criminal Suspects

As technology continues to evolve, more and more scholars are using eye-tracking technology for human identification. Papesh found that pupil diameter is positively correlated with the cognitive load involved in processing a stimulus: as cognitive load increases, pupil diameter increases accordingly [20]. It is also important to note that changes in pupil diameter are not under conscious control, which provides strong evidence that pupil diameter changes can help determine whether a person is lying. In their experiment, Walczyk et al. divided participants into three groups: honest, unrehearsed lying, and rehearsed lying [21]. By having participants watch a video of a real crime and answer questions, the study found that the honest group had the fastest response times and the smallest mean pupil diameter, indicating the lowest cognitive load. In a simulated crime scenario experiment, Dyer found that the pupil diameter of perpetrators was significantly larger than that of non-perpetrators when viewing pictures related to the crime scene, while no significant difference was observed for pictures unrelated to the crime scene [22]. Ryan et al. analyzed differences in eye-movement indicators between viewing familiar and unfamiliar faces and found that fixation time on familiar faces was significantly longer than on unfamiliar faces, and that the number of gaze points within the familiar face area was significantly higher than in the unfamiliar face area [23]. By testing the saccade frequency of the same participants, Vrij et al. found that saccade frequency was higher when telling lies than when not lying, although there was not enough evidence to establish significance [24].

2.3. Application of Machine Learning in Identity Recognition

In addition, some scholars have introduced machine learning into the field of crime prediction and suspect identification. Wang et al. classified fingerprints using deep neural networks to classify and predict suspicious fingerprints, employing softmax regression to improve classification accuracy [25]. Li extracted features from criminal records, constructed a model using SVM, calculated the similarity between predicted features and the features of people in a candidate database, and then predicted the suspects [26]. Based on an analysis of property crime patterns, Li et al. further explored the nonlinear relationship between contributing factors and property crime using a neural network model and developed a prediction model [27]. Gruber et al. built a Bayesian network model describing the behavior patterns of suspicious and non-suspicious users to identify suspected criminal cell phone users, and their experimental results showed a false positive rate of less than 1% [28]. Gao et al. applied five machine learning methods to terrorist attack data, achieving a maximum prediction accuracy of 94.8% and helping to target criminals for effective action [29]. Zemblys et al. trained a random forest classifier on fixations, saccades, and other eye-movement events; the classifier's performance approximated manual coding by eye-movement experts and had a lower error rate than previous studies [30]. Zhang et al. trained a model based on the XGBoost algorithm and used Shapley values to explain the contributions of its variables, deriving a ranking of factors influencing regional crime that helps police take targeted measures at each location [31].
In summary, suspect identification research has focused on developing effective methods to distinguish between guilty and innocent individuals, and studies utilizing eye-movement technology have shown promising results in this regard. By analyzing eye-movement patterns while participants view stimuli such as portraits, objects, and scenes, researchers have been able to extract valuable indicators for identifying suspects. The application of machine learning algorithms has greatly enhanced the analysis and interpretation of eye-movement data for identity recognition: these algorithms enable the extraction of meaningful features from eye-movement patterns and the development of robust models for suspect identification. Machine learning techniques combined with eye-movement technology therefore offer a powerful approach to improving the accuracy and efficiency of suspect identification, and the integration of suspect identification research, eye-movement technology, and machine learning holds significant potential for enhancing identity recognition in criminal investigations. Accordingly, by using the GKT methodology to design simulated crime experiments, capturing various eye-movement indicators through eye-tracking technology, and employing machine learning methods to construct a suspect identification model, we can support existing research findings and enhance the accuracy of suspect identification.

3. Experiment

3.1. Experimental Design

To further investigate the effectiveness of eye movements in inferring participants' identity, the experimental procedure was designed according to the GKT. A total of 98 participants were recruited, including 59 males and 39 females; 8 took part in the preliminary experiment and 90 participated in the formal experiment. All participants took part voluntarily. During the formal experiment, the eye tracker was unable to capture signals from seven participants because of fatigue and loss of positioning, so their data were excluded. A total of 83 valid data records were finally collected, with an age range of 18–24 years. Participants were recruited from sophomore, junior, and first-year graduate students at the People's Public Security University of China. Among the valid data, there were 53 male and 30 female participants, all of whom reported no history of illness or visual impairment. The experiment was approved by the Academic Ethics Committee of the People's Public Security University of China, and all participants signed an informed consent form prior to participation. Participants were assured that the collected data would be used only for experimental analysis and would not be used elsewhere or disclosed to any third party.
In addition, to make the experimental scenario more realistic and to heighten the tension and excitement of the perpetrators, a "task-reward" stimulus was introduced before the formal experiment. A questionnaire was used to explore how large a reward would attract participants to complete the simulated crime task of their own accord. A total of 95 questionnaires were collected, from 47 males and 48 females, and the results showed that participants' choices were in line with experimental expectations under the following scheme: "Choose Task A and complete all the procedures as required to be paid 30 yuan. Choose Task B and you bear the risk of failure: if you fail you receive only 15 yuan, but if you succeed you receive 60 yuan." Task A, for those who chose to disclose the trade secrets, involved finding the target files required by the opposing company, verifying the matching identification numbers, and secretly carrying the files and USB drive to Room C to complete the rendezvous. Task B, for those who chose to reject the reward, involved resisting the temptation of monetary compensation and not disclosing the target files to the rendezvous personnel.

3.1.1. Experiment Preparation

According to GKT, the method of free browsing was used, and the types of stimuli included portraits, objects, and scenes. Among them, 4 images were related (1 portrait, 1 object, 2 scenes) and 22 images were unrelated (11 portraits, 5 objects, 6 scenes). This experimental method involved fixing the presentation time for each image and allowing participants to freely explore the pre-determined experimental content while wearing an eye-tracking device. The content was played in a loop from start to finish. The purpose of selecting portraits, objects, and scenes as stimuli was to explore the relationship between eye-movement patterns and identity recognition from multiple perspectives. Participants, during the simulation of a criminal scenario, may exhibit psychological traces due to their nervousness when encountering people, objects, and environments. By presenting these crime-related images, their memories can be triggered, while innocent individuals, who are unaware of the scenario, would not experience heightened emotional tension. By studying different types of stimuli, we extracted eye-movement indicators from participants’ viewing processes of these images, allowing us to obtain a more comprehensive and accurate understanding of the topic.
The instrument used for the experiment was a desktop eye-tracking device from SMI, with a set sampling rate of 120 Hz and a resolution of 1024 × 768. The experimental materials were edited in advance by Experiment Center, data were collected by iView X, and the required data were finally exported using Begaze. We utilized Begaze to open the project file and extract the required eye-movement data by defining areas of interest and performing other operations. In addition, Begaze software facilitated various visualization analyses such as generating gaze heatmaps and bar charts. For our research, after extracting the data using Begaze, we employed common descriptive statistics to calculate the mean, variance, and other relevant measures for each eye-movement indicator category.

3.1.2. Experimental Procedure

Four rooms were used in this experiment, and participants entered Room A, Room B, Room C, and Room D in sequence according to the experimental flow. In Room A, participants completed registration, provided relevant information, and received instructions regarding the experiment's procedures. Room B served as the primary setting for the simulated crime: participants read prompts and selected tasks with varying degrees of risk based on those prompts. In Room C, participants located a contact person based on the prompts and delivered the items required for the task, thereby completing it. Room D was used for pre-test inquiries and the eye-tracking test. Before the eye-movement test started, participants were calibrated to ensure that both X- and Y-axis deviations were less than 1°. Each picture was presented for 2000 ms, followed by a "+" fixation cross for 500 ms, alternating in turn. The testing phase continued until all pictures had been presented. The experimental flow is shown in Figure 1.

3.2. Data Analysis

At the end of the experiment, the data were extracted using Begaze, and a total of 83 valid data records were obtained by combining the participants' experimental performance with their eye-movement records: 28 valid records for the crime group, 30 for the informed group, and 25 for the innocent group. For a more intuitive representation, we denote each data item as GR.(CTPI), where C, T, P, and I each encode a specific meaning:
(1) C indicates the category. The innocent group is denoted by C_1, the informed group by C_2, and the crime group by C_3;
(2) T indicates whether the picture is a target picture, i.e., whether it is related to the crime. Target pictures are denoted by T, and non-target pictures are denoted by T’;
(3) P indicates the kind of picture. Portrait pictures are denoted by P_1, object pictures by P_2, and scene pictures by P_3;
(4) I indicates the eye-movement indicator: f denotes fixation time, c fixation count, d pupil diameter, s saccade frequency, and b blink frequency. By delineating the area of interest (AOI) in each picture, the participants' fixation time, fixation count, pupil diameter, saccade frequency, and blink frequency within the AOI were extracted, and the mean and standard deviation were calculated for each of the five indicators. The experimental results are shown in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. (See Appendix A Table A1, Table A2 and Table A3 for descriptive statistics of the data.)
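As an illustration of how these per-AOI descriptive statistics might be computed once the eye-movement data have been exported from Begaze, the sketch below uses pandas; the file name and column names (group, picture_type, aoi, and the indicator columns) are hypothetical placeholders rather than the actual export schema.

```python
import pandas as pd

# Hypothetical export: one row per participant x picture category x AOI relevance.
# Column names are assumptions for illustration, not the real Begaze schema.
df = pd.read_csv("eye_movement_export.csv")

indicators = ["fixation_time", "fixation_count", "pupil_diameter",
              "saccade_frequency", "blink_frequency"]

# Mean and standard deviation per group (innocent/informed/crime),
# per picture category (portrait/object/scene), and per AOI (involved/uninvolved),
# mirroring the layout of Tables A1-A3.
summary = (df.groupby(["group", "picture_type", "aoi"])[indicators]
             .agg(["mean", "std"])
             .round(2))
print(summary)
```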

Analysis of the above Indicators

(1) The fixation time indicator shows that participants in the crime group generally fixated longer than the other two groups when viewing the case-involved pictures and showed significant differences between case-involved and non-case-involved pictures. The informed group also showed a noticeable tendency in the same direction when viewing case-involved versus non-case-involved pictures.
(2) The fixation count indicator shows that when viewing the case-involved pictures, both the crime and informed groups had higher counts than the innocent group, with the crime group being the most pronounced. When viewing the non-case-involved pictures, the three groups did not differ significantly.
(3) The pupil diameter indicator shows an increasing trend across the innocent, informed, and crime groups for both involved and uninvolved pictures, and under the same stimulus conditions the crime group had the largest pupil diameter when viewing the involved pictures.
(4) The saccade frequency indicator shows no significant differences across groups or between case-involved and non-case-involved pictures.
(5) The blink frequency indicator shows that participants in the innocent group blinked slightly more frequently than the other groups, whether viewing involved or uninvolved pictures, while the crime group showed a slightly lower blink frequency than the informed and innocent groups. This is consistent with a degree of blink inhibition when people lie, which lowers blink frequency below normal.

4. An RF-RFE Model Based on Interpretable Machine Learning

4.1. Model Construction

Random forest, proposed by Breiman et al., uses decision trees as base learners: it aggregates the predictions of a large collection of trees [32]. During training, a subset of the sample set is drawn randomly with replacement, and a subset of features is selected for training each tree. Bootstrap training sets are generated using the bagging method, and simple voting is used as the basis for classification; because a different sample set is drawn for each tree, different trees yield different results. Random forest construction is roughly divided into four steps: (1) random sampling to train a decision tree; (2) randomly selecting attributes to determine the splitting attribute at each node; (3) repeating step 2 until the decision tree is fully grown; and (4) building decision trees until their number meets the design requirement, then combining the trained trees into the final random forest model. The specific procedure is shown below (see Algorithm 1).
Algorithm 1: Random Forest construction process
Input: training set D = {(x1, y1), (x2, y2), ..., (xm, ym)};
       attribute set A = {a1, a2, ..., ad}.
Process: Function TreeGenerate(D, A)
1:  Generate node;
2:  if the samples in D all belong to the same category C then
3:    mark node as a class-C leaf node; return
4:  end if
5:  if A = ∅ OR the samples in D all take the same values on A then
6:    mark node as a leaf node whose category is the class with the largest number of samples in D; return
7:  end if
8:  Select the optimal splitting attribute a* from A;
9:  for each value a*_v of a* do
10:   generate a branch for node; let D_v denote the subset of samples of D that take the value a*_v on a*;
11:   if D_v is empty then
12:     mark the branch node as a leaf node whose class is the class with the most samples in D; return
13:   else
14:     use TreeGenerate(D_v, A \ {a*}) as the branch node
15:   end if
16: end for
Output: a decision tree with node as the root node
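The paper does not state which software implementation was used to train the random forest; the following is a minimal sketch with scikit-learn, using synthetic placeholder data in place of the 83 participants' 30 eye-movement features, that follows the four construction steps above (bootstrap sampling, random feature selection at each split, fully grown trees, majority voting). The hyperparameter values are assumptions, not reported settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 83 participants x 30 eye-movement features, 3 group labels
# (innocent / informed / crime). Real data would come from the GR.(TPI) variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(83, 30))
y = rng.integers(0, 3, size=83)

clf = RandomForestClassifier(
    n_estimators=500,      # number of trees; an assumed value, not reported in the paper
    max_features="sqrt",   # random subset of features considered at each split
    bootstrap=True,        # each tree trained on a bootstrap sample (sampling with replacement)
    random_state=42,
)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```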
In this paper, to better explore the effect of these features on the identity recognition model, we use the Recursive Feature Elimination (RFE) algorithm, one of the wrapper feature selection algorithms. The RFE algorithm trains on all feature variables, ranks each feature's relevance during training, and then uses cross-validation to determine the classification accuracy of the current feature subset. The lowest-ranked feature is then deleted, and the process is repeated with the reduced feature set. This continues until the feature subset is empty, at which point all features have been ranked by importance. The process is shown below (see Algorithm 2):
Algorithm 2: RFE algorithm process
1: for the results of each resampling do
2:   Divide the data into training set, test set by resampling;
3:   Train the model in the training set using feature variables;
4:   Evaluate models with test sets;
5:   Calculate and rank the importance of each feature variable;
6:   Remove the least important features;
7: end
8: Decide on the appropriate number of feature variables
9: Estimate the set of feature variables ultimately used to build the model
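A minimal sketch of this recursive elimination loop, wrapping a random forest with scikit-learn's cross-validated RFE, is shown below; the step size, fold count, and binary labels are assumptions for illustration, since the paper does not specify its implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Placeholder data standing in for the 30 GR.(TPI) variables of one dichotomy,
# e.g., innocent vs. crime.
rng = np.random.default_rng(0)
X = rng.normal(size=(83, 30))
y = rng.integers(0, 2, size=83)

estimator = RandomForestClassifier(n_estimators=500, random_state=42)
selector = RFECV(
    estimator,
    step=1,                          # drop the least important feature each round
    cv=StratifiedKFold(n_splits=5),  # resampling into training/test folds
    scoring="accuracy",
)
selector.fit(X, y)

print("Number of features selected:", selector.n_features_)
print("Feature ranking (1 = retained):", selector.ranking_)
```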
When entering variables into the suspect identification model, we follow the data definition given earlier: the inputs are whether the viewed picture was a target, the type of picture, and the type of eye-movement indicator, and the output is the participant's group. That is, we entered the GR.(TPI) variables into the model, 30 variables in total (2 × 3 × 5) according to the meanings of T, P, and I defined above. To determine the importance ranking of the 30 indicators, they were ranked using the RF-RFE algorithm, as shown in Table 1.
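To make the 2 × 3 × 5 coding concrete, the short snippet below enumerates the 30 GR.(TPI) variable names; the string format is purely illustrative.

```python
from itertools import product

targets = ["T", "T'"]                   # target (case-involved) vs. non-target picture
pictures = ["P1", "P2", "P3"]           # portrait, object, scene
indicators = ["f", "c", "d", "s", "b"]  # fixation time, fixation count, pupil diameter,
                                        # saccade frequency, blink frequency

feature_names = [f"GR.({t}{p}I{i})" for t, p, i in product(targets, pictures, indicators)]
print(len(feature_names))  # 30 = 2 x 3 x 5
print(feature_names[:5])
```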

4.2. Analysis of Results

In the training phase, the eye-movement data of all participants are used as input to construct a classification model that predicts which group a participant belongs to. Features are added sequentially according to the ranking obtained by the RF-RFE algorithm, and the model is retrained and retested after each addition until all 30 features are included. Based on the accuracy of the preliminary classifier across the six classification tasks, the subsequent RF-RFE analysis focuses mainly on the last five binary classifications. The specific results are shown in Appendix A Table A4. Several indicators were used to assess the effectiveness of the model when analyzing the results.
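The incremental procedure described above (adding features one at a time in ranked order and retraining) could be sketched as follows; ranked_features is a hypothetical list of column indices in RF-RFE order, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 30))        # placeholder for the 30 GR.(TPI) variables
y = rng.integers(0, 2, size=83)      # placeholder labels for one dichotomy
ranked_features = list(range(30))    # hypothetical RF-RFE ranking (column indices)

results = []
for k in range(1, len(ranked_features) + 1):
    cols = ranked_features[:k]       # top-k features in ranked order
    clf = RandomForestClassifier(n_estimators=500, random_state=42)
    acc = cross_val_score(clf, X[:, cols], y, cv=5, scoring="accuracy").mean()
    results.append((k, acc))

best_k, best_acc = max(results, key=lambda r: r[1])
print(f"Best accuracy {best_acc:.3f} obtained with {best_k} features")
```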
(1) Accuracy (ACC): The confusion matrix can be used to visualize and compare the classification results. The predicted category and true category are represented by rows and columns, respectively, as shown in Figure 7. TP (true positive) means the prediction is positive and the actual class is positive, FP (false positive) means the prediction is positive but the actual class is negative, TN (true negative) means the prediction is negative and the actual class is negative, and FN (false negative) means the prediction is negative but the actual class is positive.
The accuracy rate is the proportion of correctly predicted samples (whether positive or negative) out of all samples. In this model it is the proportion of participants whose identity is correctly predicted, and its calculation formula is shown in Equation (1).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}\tag{1}$$
(2) The Kappa coefficient is a measure of classification consistency based on the confusion matrix. Its value lies in the interval [−1, 1] but is usually greater than 0, and the larger the Kappa, the better the consistency. The Kappa calculation formula is shown in Equation (2).
$$\mathrm{Kappa} = \frac{p_0 - p_e}{1 - p_e}\tag{2}$$
Here, $p_0$ is the proportion of correctly classified samples out of the total, i.e., the value of ACC. For $p_e$, suppose the actual numbers of samples in the categories are $a_1, a_2, a_3, \ldots, a_m$, the predicted numbers of samples in the categories are $b_1, b_2, b_3, \ldots, b_m$, and the total number of samples is $n$; then $p_e = \dfrac{a_1 b_1 + a_2 b_2 + \cdots + a_m b_m}{n^2}$.
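As a small worked example of Equations (1) and (2), the snippet below computes accuracy and the Kappa coefficient from a pair of illustrative label vectors, once manually from the confusion matrix and once with scikit-learn for comparison; the labels themselves are made up.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Illustrative labels only (e.g., 0 = innocent-type class, 1 = crime-type class).
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])

cm = confusion_matrix(y_true, y_pred)
acc = accuracy_score(y_true, y_pred)                    # Equation (1)

# Equation (2): Kappa = (p0 - pe) / (1 - pe)
n = cm.sum()
p0 = np.trace(cm) / n                                   # observed agreement (= ACC)
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n * n)  # chance agreement
kappa = (p0 - pe) / (1 - pe)

print(f"ACC = {acc:.2f}, Kappa = {kappa:.2f}, sklearn Kappa = {cohen_kappa_score(y_true, y_pred):.2f}")
```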
The analysis of Figure 8 leads to the following conclusions. When distinguishing between the innocent and informed groups, the highest model accuracy of 84.8% was achieved with 13 features and with 20 features. Moreover, adding the GR.(TP1Ib) feature improved the accuracy of the model by about 11%. Returning to the data, the blink frequency for case-involved portraits is significantly higher in the innocent group than in the informed group. This corroborates the blink suppression phenomenon [34] and provides a basis for determining whether a person is lying.
The analysis of Figure 9 leads to the following conclusions. When distinguishing between the innocent and suspect groups, an accuracy of 85% can be achieved using only three features and 87% using eight features. Although the model reached a maximum accuracy of 88.4%, it did so only with 22, 28, or 30 features, with a Kappa coefficient of 0.82, a very satisfactory degree of agreement (Kappa ≥ 0.8 is generally considered almost perfect agreement). The first three features were fixation times, which indicates that gaze duration reflects the participants' familiarity with the area of interest and their processing load. The accuracy of the model decreased when the fourth feature, GR.(TP2If), was added; the data show no consistent pattern in how the three groups fixated on the non-involved objects, which explains the drop in model accuracy.
The analysis of Figure 10 leads to the following conclusions. The model's highest accuracy of 91.7% in distinguishing between the innocent and perpetrator groups occurred when 16, 17, 20, or 29 features were used. However, the accuracy already reached 86.1% with 8 features, with a Kappa coefficient of 0.69, which we considered a highly consistent classification (0.6 < Kappa < 0.8 is generally regarded as highly consistent).
The analysis of Figure 11 leads to the following conclusions. In distinguishing between the informed and perpetrator groups, the model's highest accuracy of 85.7% required 20 features, and no better classification was achieved with a smaller number of features. Examining the means and variances of the eye-movement data for the informed and perpetrator groups also revealed that the differences were small; for some key crime-related items, the informed group even showed greater overall stimulation than the perpetrator group. A possible reason is that, during recall, stimuli were perceived in accordance with expectations rather than the actual physical stimuli, producing a phenomenon of perceptual fixation.
The analysis of Figure 12 leads to the following conclusions. When distinguishing between the crime and non-crime groups, the model's highest accuracy was 87%. As the first five features were added in turn, the model's performance gradually improved, reaching an accuracy of 85.5% with a Kappa coefficient of 0.69. Compared with the innocent-versus-crime classification, the difference here is that the informed group is included. The results show that although including the informed group reduces the model's highest accuracy, the model can clearly separate the crime and non-crime groups using a smaller number of features.
The analysis of the feature importance ranking shows that features in the fixation time and fixation count categories contribute most to identification. This can be further explained by the eye-brain hypothesis [35], as these measures reflect the participants' processing of the stimuli. We can also observe that increasing the number of features does not necessarily yield higher classification accuracy.

5. Conclusions

By constructing the RF-RFE model, we can not only obtain the importance ranking of features, but also use the ranking to seek the best model for identification in order to achieve a higher accuracy rate using a smaller number of features.
The results showed that: (1) under the simulated crime scenario, when faced with the same pressure, significant differences were observed among the groups in indicators such as scene fixation time, scene fixation count, and portrait pupil diameter; and (2) based on the interpretable machine learning approach, the five features contributing most to the model's prediction accuracy are involved-object fixation time, involved-portrait fixation time, uninvolved-scene fixation time, uninvolved-object fixation count, and uninvolved-object fixation time. The model accuracy reached 84.8–91.7% across the different classification cases. The attention-related indicators better reflect differences in participants' familiarity with, and processing of, the circumstances of the case and can achieve effective identity differentiation.
We have also given careful consideration to future work. This paper primarily presents fundamental research: at present there is a lack of eye-movement data from individuals involved in real criminal cases for analysis and reference, conducting research under controlled laboratory conditions makes it difficult to simulate the psychological state following an actual crime, and the effectiveness of this method in practical criminal investigation remains unverified. Furthermore, because of the impact of COVID-19, the experiment could only be conducted on the university campus; the participant selection process was limited and the number of participants varied slightly across groups. In future work, we therefore plan to explore the feasibility of using eye-movement indicators to construct identity recognition models by designing more realistic simulated crime scenarios or applying them in actual identification processes, and to expand the range of participant selection so as to further investigate the effectiveness of these eye-movement indicators in the identification process.

Author Contributions

Conceptualization, N.D. and X.L.; methodology, X.L.; software, X.L.; validation, X.L., J.S. and C.S.; formal analysis, X.L.; investigation, N.D. and C.S.; resources, N.D.; data curation, X.L.; writing—original draft preparation, X.L.; writing—review and editing, N.D.; visualization, X.L.; supervision, J.S.; project administration, N.D.; funding acquisition, N.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Public Security First-class Discipline Cultivation and Public Safety Behavioral Science Lab Project (No. 2023ZB02) and the National Natural Science Foundation of China (72274208).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Public Security Behavioral Science Lab, People’s Public Security University of China (protocol code 20210302 and approval date 2 March 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Because the experimental data involve human participants, the relevant data are treated as confidential.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Means and standard deviations of eye-movement indicators under portrait stimulation.

Category | AOI | Fixation Time | Fixation Count | Pupil Diameter | Saccade Frequency | Blink Frequency
Innocent | Involved | 1201.64 ± 916.05 | 3.77 ± 2.68 | 3.98 ± 0.59 | 2.98 ± 0.77 | 0.59 ± 0.29
Innocent | Uninvolved | 1128.47 ± 740.79 | 4.05 ± 2.71 | 3.91 ± 0.53 | 3.01 ± 0.34 | 0.59 ± 0.22
Informed | Involved | 1328.44 ± 881.53 | 5.03 ± 3.27 | 4.31 ± 0.45 | 3.16 ± 0.79 | 0.43 ± 0.37
Informed | Uninvolved | 1075.04 ± 769.88 | 4.21 ± 2.86 | 4.06 ± 0.42 | 3.07 ± 0.68 | 0.47 ± 0.30
Crime | Involved | 2117.03 ± 956.41 | 7.00 ± 2.88 | 4.73 ± 0.47 | 2.90 ± 0.94 | 0.36 ± 0.30
Crime | Uninvolved | 1231.25 ± 877.16 | 4.70 ± 3.16 | 4.28 ± 0.38 | 3.00 ± 0.69 | 0.35 ± 0.21
Table A2. Means and standard deviations of eye-movement indicators under object stimulation.

Category | AOI | Fixation Time | Fixation Count | Pupil Diameter | Saccade Frequency | Blink Frequency
Innocent | Involved | 1183.18 ± 648.78 | 4.15 ± 2.44 | 3.86 ± 0.50 | 4.42 ± 0.31 | 0.35 ± 0.34
Innocent | Uninvolved | 824.53 ± 336.71 | 3.44 ± 1.24 | 3.90 ± 0.49 | 3.38 ± 0.32 | 0.45 ± 0.18
Informed | Involved | 1586.08 ± 840.08 | 6.46 ± 3.22 | 4.36 ± 0.48 | 3.42 ± 0.92 | 0.25 ± 0.28
Informed | Uninvolved | 624.63 ± 280.03 | 2.79 ± 1.20 | 4.50 ± 2.13 | 3.32 ± 0.64 | 0.34 ± 0.25
Crime | Involved | 1214.04 ± 679.00 | 5.39 ± 2.92 | 4.54 ± 0.42 | 3.31 ± 0.96 | 0.30 ± 0.27
Crime | Uninvolved | 646.20 ± 274.59 | 2.97 ± 1.15 | 4.42 ± 0.31 | 3.39 ± 0.70 | 0.27 ± 0.19
Table A3. Means and standard deviations of eye-movement indicators under scene stimulation.

Category | AOI | Fixation Time | Fixation Count | Pupil Diameter | Saccade Frequency | Blink Frequency
Innocent | Involved | 368.03 ± 363.73 | 1.54 ± 1.16 | 4.03 ± 0.60 | 3.52 ± 0.66 | 0.46 ± 0.24
Innocent | Uninvolved | 426.12 ± 150.17 | 1.70 ± 0.59 | 3.90 ± 0.47 | 3.36 ± 0.50 | 0.49 ± 0.23
Informed | Involved | 754.68 ± 328.78 | 2.92 ± 1.10 | 4.49 ± 0.55 | 3.46 ± 0.79 | 0.33 ± 0.28
Informed | Uninvolved | 256.37 ± 112.58 | 1.15 ± 0.49 | 4.21 ± 0.43 | 3.42 ± 0.77 | 0.33 ± 0.24
Crime | Involved | 1363.66 ± 394.62 | 5.54 ± 1.64 | 4.86 ± 0.42 | 3.42 ± 0.60 | 0.30 ± 0.21
Crime | Uninvolved | 289.74 ± 141.96 | 1.30 ± 0.52 | 4.35 ± 0.25 | 3.39 ± 0.54 | 0.32 ± 0.23
Table A4. Results of RF-RFE model: (a) Innocent and informed groups; (b) innocence and suspicion (informed and crime) groups; (c) innocence and crime groups; (d) informed and crime groups; and (e) crime and non-crime groups.

(a)
Number | ACC | Kappa | Number | ACC | Kappa | Number | ACC | Kappa
1 | 63.0% | 0.11 | 11 | 73.9% | 0.33 | 21 | 76.1% | 0.37
2 | 52.2% | −0.18 | 12 | 73.9% | 0.33 | 22 | 80.4% | 0.45
3 | 73.9% | 0.36 | 13 | 84.8% | 0.60 | 23 | 71.7% | 0.00
4 | 73.9% | 0.33 | 14 | 69.6% | 0.28 | 24 | 73.9% | 0.25
5 | 71.7% | 0.21 | 15 | 71.7% | 0.25 | 25 | 80.4% | 0.42
6 | 67.4% | 0.18 | 16 | 73.9% | 0.25 | 26 | 87.0% | 0.63
7 | 71.7% | 0.25 | 17 | 78.3% | 0.38 | 27 | 80.4% | 0.45
8 | 73.9% | 0.29 | 18 | 76.1% | 0.25 | 28 | 78.3% | 0.38
9 | 71.7% | 0.25 | 19 | 82.6% | 0.47 | 29 | 84.8% | 0.60
10 | 69.6% | 0.08 | 20 | 84.8% | 0.60 | 30 | 80.4% | 0.48

(b)
Number | ACC | Kappa | Number | ACC | Kappa | Number | ACC | Kappa
1 | 66.7% | 0.60 | 11 | 84.1% | 0.81 | 21 | 85.5% | 0.84
2 | 65.2% | 0.58 | 12 | 79.7% | 0.75 | 22 | 88.4% | 0.84
3 | 85.5% | 0.82 | 13 | 81.2% | 0.77 | 23 | 82.6% | 0.84
4 | 78.3% | 0.73 | 14 | 84.1% | 0.80 | 24 | 85.5% | 0.84
5 | 75.4% | 0.70 | 15 | 84.1% | 0.80 | 25 | 84.1% | 0.84
6 | 84.1% | 0.80 | 16 | 85.5% | 0.82 | 26 | 87.0% | 0.84
7 | 75.4% | 0.70 | 17 | 84.1% | 0.80 | 27 | 87.0% | 0.84
8 | 87.0% | 0.84 | 18 | 81.2% | 0.77 | 28 | 88.4% | 0.84
9 | 79.7% | 0.75 | 19 | 81.2% | 0.84 | 29 | 84.1% | 0.84
10 | 82.6% | 0.79 | 20 | 82.6% | 0.84 | 30 | 88.4% | 0.84

(c)
Number | ACC | Kappa | Number | ACC | Kappa | Number | ACC | Kappa
1 | 47.2% | −0.13 | 11 | 75.0% | 0.45 | 21 | 86.1% | 0.67
2 | 58.3% | 0.11 | 12 | 83.3% | 0.63 | 22 | 88.9% | 0.75
3 | 66.7% | 0.25 | 13 | 63.9% | 0.00 | 23 | 88.9% | 0.76
4 | 75.0% | 0.45 | 14 | 77.8% | 0.50 | 24 | 88.9% | 0.76
5 | 72.2% | 0.40 | 15 | 69.4% | 0.30 | 25 | 86.1% | 0.68
6 | 72.2% | 0.40 | 16 | 91.7% | 0.82 | 26 | 86.1% | 0.69
7 | 85.6% | 0.56 | 17 | 91.7% | 0.82 | 27 | 88.9% | 0.75
8 | 86.1% | 0.69 | 18 | 88.9% | 0.75 | 28 | 86.1% | 0.69
9 | 72.2% | 0.35 | 19 | 88.9% | 0.75 | 29 | 91.7% | 0.81
10 | 69.4% | 0.33 | 20 | 91.7% | 0.82 | 30 | 86.1% | 0.69

(d)
Number | ACC | Kappa | Number | ACC | Kappa | Number | ACC | Kappa
1 | 69.6% | 0.37 | 11 | 71.4% | 0.38 | 21 | 80.4% | 0.56
2 | 67.9% | 0.33 | 12 | 69.6% | 0.33 | 22 | 82.1% | 0.62
3 | 76.8% | 0.50 | 13 | 71.4% | 0.40 | 23 | 76.8% | 0.52
4 | 64.3% | 0.22 | 14 | 62.5% | 0.19 | 24 | 83.9% | 0.65
5 | 67.9% | 0.32 | 15 | 67.9% | 0.30 | 25 | 73.2% | 0.44
6 | 76.8% | 0.50 | 16 | 82.1% | 0.61 | 26 | 76.8% | 0.51
7 | 73.2% | 0.44 | 17 | 82.1% | 0.63 | 27 | 82.1% | 0.62
8 | 75.0% | 0.46 | 18 | 83.9% | 0.66 | 28 | 82.1% | 0.63
9 | 75.0% | 0.46 | 19 | 78.6% | 0.53 | 29 | 76.8% | 0.49
10 | 67.9% | 0.33 | 20 | 85.7% | 0.69 | 30 | 82.1% | 0.62

(e)
Number | ACC | Kappa | Number | ACC | Kappa | Number | ACC | Kappa
1 | 60.9% | 0.07 | 11 | 71.0% | 0.29 | 21 | 85.5% | 0.65
2 | 69.6% | 0.29 | 12 | 75.4% | 0.39 | 22 | 84.1% | 0.61
3 | 71.0% | 0.32 | 13 | 73.9% | 0.34 | 23 | 81.2% | 0.53
4 | 76.8% | 0.44 | 14 | 66.7% | 0.19 | 24 | 82.6% | 0.57
5 | 85.5% | 0.69 | 15 | 69.6% | 0.26 | 25 | 81.2% | 0.53
6 | 85.5% | 0.34 | 16 | 84.1% | 0.62 | 26 | 85.5% | 0.64
7 | 69.6% | 0.24 | 17 | 84.1% | 0.61 | 27 | 82.6% | 0.58
8 | 75.4% | 0.41 | 18 | 85.5% | 0.65 | 28 | 82.6% | 0.57
9 | 65.2% | 0.20 | 19 | 87.0% | 0.68 | 29 | 81.2% | 0.55
10 | 73.9% | 0.36 | 20 | 84.1% | 0.62 | 30 | 84.1% | 0.64

References

  1. Shuangqi, L. Identification of Lies of Criminal Suspects in Interrogation. Police Sci. Res. 2021, 176, 41–53. (In Chinese) [Google Scholar]
  2. Trovillo, P.V. History of lie detection. Am. Inst. Crim. L. Criminol. 1938, 29, 848. [Google Scholar] [CrossRef]
  3. Gaggioli, A. Beyond the truth machine: Emerging technologies for lie detection. Cyberpsychol. Behav. Soc. Netw. 2018, 21, 144. [Google Scholar] [CrossRef]
  4. Ben-Shakhar, G.; Furedy, J.J. Theories and Applications in the Detection of Deception: A Psychophysiological and International Perspective; Springer Science & Business Media: New York, NY, USA, 1990. [Google Scholar]
  5. Elaad, E. The challenge of the concealed knowledge polygraph test. Expert Evid. 1998, 6, 161–187. [Google Scholar] [CrossRef]
  6. Mukhra, R.; Krishan, K.; Kanchan, T. Bare footprint metric analysis methods for comparison and identification in forensic examinations: A review of literature. J. Forensic Leg. Med. 2018, 58, 101–112. [Google Scholar] [CrossRef]
  7. SEAK-Expert Witness Directory. Available online: https://www.seakexperts.com/members/8122-michael-s-nirenberg (accessed on 4 February 2018).
  8. Nirenberg, M.S.; Krishan, K.; Kanchan, T. A metric study of insole foot impressions in footwear of identical twins. J. Forensic Leg. Med. 2017, 52, 116–121. [Google Scholar] [CrossRef]
  9. Krishan, K.; Kanchan, T. Identification: Prints-Footprints. In Encyclopedia of Forensic and Legal Medicine, 2nd ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2015; pp. 81–91. [Google Scholar]
  10. Robbins, L.M. Estimating height and weight from size of footprints. J. Forensic Sci. 1986, 31, 143–152. [Google Scholar] [CrossRef]
  11. Bodziak, W.J. Footwear Impression Evidence: Detection, Recovery, and Examination; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  12. Kennedy, R.B.; Chen, S.; Pressman, I.S.; Yamashita, A.B.; Pressman, A.E. A large-scale statistical analysis of barefoot impressions. J. Forensic Sci. 2005, 50, JFS2004277-10. [Google Scholar] [CrossRef]
  13. Jung, J.W.; Bien, Z.; Lee, S.W.; Sato, T. Dynamic-footprint based person identification using mat-type pressure sensor. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE Cat. No. 03CH37439), Cancun, Mexico, 17–21 September 2003; Volume 3, pp. 2937–2940. [Google Scholar]
  14. Abdullah, N.A.; Saidi, M.J.; Rahman, N.H.A.; Wen, C.C.; Hamid, I.R.A. Face recognition for criminal identification: An implementation of principal component analysis for face recognition. In AIP Conference Proceedings; AIP Publishing LLC: Melville, NY, USA, 2017; Volume 1891, p. 020002. [Google Scholar]
  15. Kakkar, P.; Sharma, V. Criminal identification system using face detection and recognition. Int. J. Adv. Res. Comput. Commun. Eng. 2018, 7, 238–243. [Google Scholar]
  16. Sivapathasundharam, B.; Prakash, P.A.; Sivakumar, G. Lip prints (cheiloscopy). Indian J. Dent. Res. Off. Publ. Indian Soc. Dent. Res. 2001, 12, 234–237. [Google Scholar]
  17. Dwivedi, N.; Agarwal, A.; Kashyap, B.; Raj, V.; Chandra, S. Latent lip print development and its role in suspect identification. J. Forensic Dent. Sci. 2013, 5, 22. [Google Scholar] [CrossRef] [Green Version]
  18. Penn, D.J.; Oberzaucher, E.; Grammer, K.; Fischer, G.; Soini, H.A.; Wiesler, D.; Novotny, M.V.; Dixon, S.J.; Xu, Y.; Brereton, R.G. Individual and gender fingerprints in human body odour. J. R. Soc. Interface 2007, 4, 331–340. [Google Scholar] [CrossRef] [Green Version]
  19. Cuzuel, V.; Leconte, R.; Cognon, G.; Thiebaut, D.; Vial, J.; Sauleau, C.; Rivals, I. Human odor and forensics: Towards Bayesian suspect identification using GC× GC–MS characterization of hand odor. J. Chromatogr. B 2018, 1092, 379–385. [Google Scholar] [CrossRef]
  20. Papesh, M.H. Source Memory Revealed through Eye Movements and Pupil Dilation. Ph.D. Thesis, Arizona State University, Tempe, AZ, USA, 2012. [Google Scholar]
  21. Walczyk, J.J.; Griffith, D.A.; Yates, R.; Visconte, S.R.; Simoneaux, B.; Harris, L.L. Lie detection by inducing cognitive load: Eye movements and other cues to the false answers of “witnesses” to crimes. Crim. Justice Behav. 2012, 39, 887–909. [Google Scholar] [CrossRef] [Green Version]
  22. Dyer, R. Are You Lying to Me?: Using Nonverbal Cues to Detect Deception. Ph.D. Thesis, Haverford College, Haverford, PA, USA, 2007. [Google Scholar]
  23. Ryan, J.D.; Hannula, D.E.; Cohen, N.J. The obligatory effects of memory on eye movements. Memory 2007, 15, 508–525. [Google Scholar] [CrossRef]
  24. Vrij, A.; Oliveira, J.; Hammond, A.; Ehrlichman, H. Saccadic eye movement rate as a cue to deceit. J. Appl. Res. Mem. Cogn. 2015, 4, 15–19. [Google Scholar] [CrossRef] [Green Version]
  25. Wang, R.; Han, C.; Wu, Y.; Guo, T. Fingerprint classification based on depth neural network. arXiv 2014, arXiv:1409.5188. [Google Scholar]
  26. Li, R. Research on Feature Extraction and Recognition Algorithm of Facial Expression. Master’s Thesis, Chongqing University, Chongqing, China, 2010. (In Chinese). [Google Scholar]
  27. Li, W.; Wen, L.; Chen, Y. Application of improved GA-BP neural network model in property crime prediction. J. Wuhan Univ. (Inf. Sci. Ed.) 2017, 8, 1110–1116. (In Chinese) [Google Scholar]
  28. Gruber, A.; Ben-Gal, I. Using targeted Bayesian network learning for suspect identification in communication networks. Int. J. Inf. Secur. 2018, 17, 169–181. [Google Scholar] [CrossRef]
  29. Gao, Y.; Wang, X.; Chen, Q.; Guo, Y.; Yang, Q.; Yang, K.; Fang, T. Suspects prediction towards terrorist attacks based on machine learning. In Proceedings of the 2019 5th International Conference on Big Data and Information Analytics (BigDIA), Kunming, China, 8–10 July 2019; pp. 126–131. [Google Scholar]
  30. Zemblys, R.; Niehorster, D.C.; Komogortsev, O.; Holmqvist, K. Using machine learning to detect events in eye-tracking data. Behav. Res. Methods 2018, 50, 160–181. [Google Scholar] [CrossRef] [Green Version]
  31. Zhang, X.; Liu, L.; Lan, M.; Song, G.; Xiao, L.; Chen, J. Interpretable machine learning models for crime prediction. Comput. Environ. Urban Syst. 2022, 94, 101789. [Google Scholar] [CrossRef]
  32. Nicodemus, K.K. On the stability and ranking of predictors from random forest variable importance measures. Brief. Bioinform. 2011, 12, 369–373. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Townsend, J.T. Theoretical analysis of an alphabetic confusion matrix. Percept. Psychophys. 1971, 9, 40–50. [Google Scholar] [CrossRef]
  34. Leal, S.; Vrij, A. Blinking During and After Lying. J. Nonverbal Behav. 2008, 32, 187–194. [Google Scholar] [CrossRef]
  35. Just, M.A.; Carpenter, P.A. Reading and spatial cognition: Reflections from eye fixations. In Eye Movement Research: Physiological and Psychological Aspects; Luer, G., Lass, U., Shallo-Hoffmann, J., Eds.; Hogrefe: Gollingen, Germany, 1988; pp. 193–213. [Google Scholar]
Figure 1. Experimental procedure.
Figure 2. Results of Fixation Time: (a) Scene Fixation Time, (b) Portrait Fixation Time, and (c) Object Fixation Time.
Figure 3. Results of Fixation Count: (a) Scene Fixation Count, (b) Portrait Fixation Count, and (c) Object Fixation Count.
Figure 4. Results of Pupil Diameter: (a) Scene Pupil Diameter, (b) Portrait Pupil Diameter, and (c) Object Pupil Diameter.
Figure 5. Results of Saccade Frequency: (a) Scene Saccade Frequency, (b) Portrait Saccade Frequency, and (c) Object Saccade Frequency.
Figure 6. Results of Blink Frequency: (a) Scene Blink Frequency, (b) Portrait Blink Frequency, and (c) Object Blink Frequency.
Figure 7. Diagram of confusion matrix [33].
Figure 8. Results of the innocent and informed groups.
Figure 9. Results of the innocent and suspect groups.
Figure 10. Results for the innocent and crime groups.
Figure 11. Results of the informed and crime groups.
Figure 12. Results for the crime and non-crime groups.
Table 1. RF-RFE feature ranking results.

Sequence | Variable | Sequence | Variable | Sequence | Variable
1 | GR.(TP2If) | 11 | GR.(TP3Ic) | 21 | GR.(TP1If)
2 | GR.(TP1If) | 12 | GR.(TP1Ib) | 22 | GR.(TP1Is)
3 | GR.(TP3If) | 13 | GR.(TP1Ib) | 23 | GR.(TP2Ib)
4 | GR.(TP2Ic) | 14 | GR.(TP1Id) | 24 | GR.(TP3Ib)
5 | GR.(TP2If) | 15 | GR.(TP1Ic) | 25 | GR.(TP3Id)
6 | GR.(TP1Ic) | 16 | GR.(TP3Ic) | 26 | GR.(TP3Id)
7 | GR.(TP2Ib) | 17 | GR.(TP3Is) | 27 | GR.(TP1Id)
8 | GR.(TP2Id) | 18 | GR.(TP1Is) | 28 | GR.(TP2Id)
9 | GR.(TP3Ib) | 19 | GR.(TP3Is) | 29 | GR.(TP2Is)
10 | GR.(TP2Ic) | 20 | GR.(TP2Is) | 30 | GR.(TP3If)
