Article

Beware, Not Everyone Lies the Same Way! Investigating the Effects of Interviewees’ Profiles and Lie Content on Verbal Cues

by Nicola Palena 1,2,* and Francesca De Napoli 1
1 Department of Human and Social Sciences, University of Bergamo, Piazzale S. Agostino, 2, 24129 Bergamo, Italy
2 Netherlands Institute for the Study of Crime and Law Enforcement, De Boelelaan 1077, 1081 HV Amsterdam, The Netherlands
* Author to whom correspondence should be addressed.
Soc. Sci. 2024, 13(2), 85; https://doi.org/10.3390/socsci13020085
Submission received: 19 December 2023 / Revised: 19 January 2024 / Accepted: 26 January 2024 / Published: 30 January 2024

Abstract:
Research shows that lying is a common behaviour, and that verbal cues can be effective for lie detection. However, deception detection is not straightforward as there are several factors at play, such as interpersonal differences and the content of the lie. Consequently, the effectiveness of available cues for deception detection can vary significantly. In a pre-registered study involving 80 participants (a priori sample size analyses were conducted), we instructed participants to either tell the truth or lie about an autobiographical event and an opinion. The participants also completed questionnaires on personality traits and cognitive tasks, resulting in two participant clusters. Surprisingly, when analysing verbal behaviour, truthfulness, cluster memberships, and their interactions were not found to be significant. Only lie content affected verbal cues. Additional, non-pre-registered analyses revealed that liars displayed more micro-expressions than truth tellers, but only when describing their memories and when focusing on the latency time between the investigator’s question and the interviewee’s answer. The results were interpreted in terms of the experimental design, which encouraged only short answers from the interviewees, leaving limited room for verbal content to be effective.

1. Introduction

Contrary to general belief, lie detection is a difficult task, for several reasons (Bond and DePaulo 2006, 2008). For example, people hold stereotypes about which cues to deception are reliable and believe that gaze aversion is amongst them (The Global Deception Research Team 2006), although academic research shows that this is not the case. Furthermore, we know from the literature that the few cues that are statistically associated with deception are only faintly and unreliably so (DePaulo et al. 2003). Additionally, lying is not at all stereotypical, as different people lie in their own way. For these reasons, scholars began to develop investigative interviewing techniques to elicit larger differences between truth tellers and liars and thereby improve lie detection (Vrij and Granhag 2012).
A widely known interviewing technique is the baseline approach, which aims at reducing the effect of interpersonal differences on cues to deception (Vrij 2016). In essence, the baseline approach assumes that if the interrogator can establish how an interviewee behaves when telling the truth, then detecting lies should be easier. The approach requires the interrogator to first establish such a truthful baseline and then question the interviewee about a target topic. According to the approach, if any difference in verbal and/or nonverbal behaviour between the baseline and the target statement appears, then the target statement should be labelled as untruthful. The rationale is that if the interviewee is honest, there is no reason why they should show any difference between the baseline and the target statement; on the contrary, lying would increase the interviewee’s emotional arousal and cognitive load which, in turn, should cause differences between the baseline and the target parts of the interview.
A reasonable amount of research on the baseline approach has now accumulated, and a general trend has emerged. First, when small talk is used to obtain a baseline, the baseline approach is completely ineffective, as it does not differentiate between truth tellers and liars (Ewens et al. 2014). Second, for the baseline to be more effective, it must be similar in content, context, emotional arousal, and cognitive load to the target part of the interview (Ewens et al. 2014; Vrij 2016). When this is the case, the baseline is labelled a Comparable Truth Baseline (CTB). However, studies show that even when a CTB is used, the behaviour of truth tellers does not differ much from that of liars. Palena and colleagues (Palena et al. 2018) collected data from 69 participants whom they asked to commit a mock crime. The participants were then interviewed via either a small talk or a CTB baseline. The authors coded the participants’ verbal and nonverbal behaviour and computed a similarity score: the higher the score, the more similar the participants’ verbal and nonverbal behaviours were between the baseline and the target statements. The authors found that only one of the several cues they studied (the amount of spatial information) differentiated truth tellers from liars better in the CTB than in the small talk baseline. It follows that the CTB did not appear to be an effective interviewing strategy.
Research has also explored whether a CTB improves observers’ lie detection accuracy compared with the absence of any interviewing technique, and found that this appears not to be the case (Caso et al. 2019a, 2019b). The (slight) advantage of observers who were shown a CTB compared with those who were not could be caused by a shift in bias. In other words, Caso et al. (2019b) suggested that presenting a CTB to observers makes them more suspicious and more prone to consider that someone is lying, thus reducing the effect of a truth bias (Levine 2014), rather than making them more accurate at detecting lies. (For other work on the baseline approach, see also Bogaard et al. 2023; Tomas et al. 2021; Verigin et al. 2020).
It is possible that the baseline approach is ineffective because it does not efficiently address the role of interpersonal differences. Indeed, there is a plethora of personal characteristics that affect how people lie, such as their personality, memory performance, and intelligence (Caso et al. 2018). For example, the literature shows that high intelligence, working memory capability, and creativity are all associated with better lie production (Vrij et al. 2010). Similarly, personality factors such as Machiavellianism, extraversion, and honesty/humility also influence lying (Palena et al. 2022). However, it is not easy to evaluate the role of interpersonal differences in lie production when using standard analytical procedures.
Scholars usually adopt a variable-centred approach, which focuses on the association between a set of variables (such as veracity and verbal behaviours) at a group level and assumes that the effect under investigation is equal for the entire population. In other words, by using the variable-centred approach, we can only conclude how, on average, people behave when lying. There are other, more robust analytical procedures that are useful within deception detection research. The person-centred approach accounts for the specificity of individuals and is based on the assumption that people can be grouped based on their response patterns on preselected variables, with people with similar response patterns being grouped together in the same profile or cluster (Palena and Caso 2021). The person-centred approach could thus be more suitable for lie detection research than the variable-centred approach, as it can shed light on how specific groups of individuals, rather than the whole population, behave when lying.
In this regard, Palena and colleagues adopted the person-centred approach and grouped their participants based on their personality, Machiavellianism, and moral disengagement. These authors found that the four profiles they obtained were associated with, amongst others, self-assessed lying ability, lying tendency, and lying frequency (Palena et al. 2021b, 2022). Their studies were a first step toward the use of the person-centred approach within lying research but have limited applied value. For lie detection purposes, it is important to explore whether cues to deception differ across profiles, an aspect that was not explored by Palena and colleagues.
Interpersonal differences are not the only factor that affects lying. Most research on deception focuses on lying about autobiographical events, with less attention paid to lying about intentions (Calderon et al. 2018; Sooniste et al. 2013) or opinions (Leal et al. 2010). Yet, the content of a statement also plays a relevant role (Levine 2022). For example, it is likely that “spatial details”, one of the strongest verbal cues to deception when assessing autobiographical events (Vrij 2015), may be completely uninformative when the subject discusses an opinion.
Building on the above research literature and gaps, the aim of the present experiment was to evaluate if and how profiles based on personality, working memory capabilities, and creativity moderated the effect of veracity on verbal cues to deception, both when focusing on lying about an event and lying about an opinion. The experimental hypotheses, which were pre-registered together with the experimental methods and procedure before data collection (https://osf.io/atwk6/?view_only=cbe0bd6ae26847639cdb88131ba9c5c9, accessed on 10 February 2023), are reported below.

1.1. Veracity Main Effect

H1a: 
Specific verbal details. Truth tellers will provide more sensory, spatial, temporal, affective, verifiable, and complications details, but they will provide less self-handicapping and common knowledge details than liars.
H1b: 
General impression cues. Truth tellers’ statements will sound clearer, more realistic, more plausible, more immediate, more detailed but less vague than liars’ statements.

1.2. Profiles Main Effect

H2: 
There will be differences in specific verbal cues and general impression cues due to profile membership but, due to the data-driven procedure inherent in person-centred approaches, it is not possible to predict the direction of such differences. However, it is expected that participants belonging to profiles marked by high scores on Machiavellianism, creativity, and cognitive performance will appear more credible (e.g., more detailed, clearer, less vague, etc.) than participants belonging to profiles with the opposite pattern.

1.3. Main Effect of Content

H3: 
Statements about the event will result in higher scores on specific verbal cues than statements about an opinion.

1.4. Moderating Role of Content

H4: 
There will be an interaction effect between veracity and content whereby the differences between truth tellers and liars when focusing on specific verbal details will be larger for statements about the event than for participants’ opinions.

1.5. Moderating Role of Profiles

H5: 
There will be an interaction effect between veracity and profile membership in that the differences between truth tellers and liars will be larger for some profiles than for others. Due to the data-driven procedure inherent in obtaining the profiles, it is not possible to predict the direction of such an interaction. Nevertheless, it is expected that the more a profile includes characteristics associated with lying (e.g., high creativity and high cognitive performance), the smaller the difference in cues to deception between truth tellers and liars will be.
H6: 
There will be a three-way interaction between veracity, content, and profile membership.

2. Materials and Methods

2.1. A Priori Sample Size Calculation

To determine the required sample size, two different analyses were conducted: one assuming the emergence of two profiles, and one assuming the emergence of five profiles. No more than five profiles were considered as it is important to obtain a parsimonious number of profiles (Boduszek et al. 2021). The effect size is also difficult to estimate, due to the lack of research in this area. Considering the necessity of potentially relevant practical implications, a medium effect size (Cohen’s f = 0.25) was selected. Power was set at 0.80 and alpha at 0.05. For the analysis assuming two emerging profiles, four groups were entered in GPower (version 3.1) (Faul et al. 2007) (2 profiles multiplied by 2 veracity conditions); for the analysis assuming five emerging profiles, 10 groups were entered. For both analyses, the number of repeated measures (concerning the content of the interview: event vs. opinion) was set at two. The analysis assuming two emerging profiles returned a minimum sample size of 48, whilst the analysis assuming five emerging profiles returned a minimum sample size of 80. For this reason, the aim was to collect data for at least 80 participants by the end of the data collection.
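The GPower routine for a mixed repeated-measures design cannot be reproduced exactly here, but its underlying logic can be sketched with a simplified between-subjects one-way analogue based on the noncentral F distribution (our simplification; the repeated-measures design reported in the text yields smaller minimum samples than this analogue would):

```python
from scipy.stats import f, ncf

def anova_power(n_total, k_groups, effect_f=0.25, alpha=0.05):
    """Power of a one-way between-subjects ANOVA for Cohen's f --
    a simplified analogue of the GPower computation in the text."""
    df1, df2 = k_groups - 1, n_total - k_groups
    ncp = effect_f ** 2 * n_total            # noncentrality parameter
    f_crit = f.ppf(1 - alpha, df1, df2)      # critical F under H0
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

def min_sample_size(k_groups, target_power=0.80, **kwargs):
    """Smallest total N whose power reaches the target."""
    n = k_groups + 2                         # keep error df positive
    while anova_power(n, k_groups, **kwargs) < target_power:
        n += 1
    return n
```

With effect_f = 0.25, alpha = 0.05 and power = 0.80, increasing the number of groups (4 vs. 10) increases the N this analogue returns, mirroring why the five-profile scenario drove the target of 80 participants.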

2.2. Participants

The participants’ mean age was 26.86 years (SD = 7.84, min = 19, max = 66). In regard to gender, 46 self-identified as female, 33 as male, and 1 decided not to declare their gender identity. Eighteen were cohabiting, ten were engaged, one was divorced, and fifty-one were single. Additionally, 21 had a high school diploma, 21 had a bachelor’s degree, and 38 had a master’s degree or a higher qualification. Lastly, 28 were working-only participants, 22 were students, 25 were working students, and 5 did not indicate their working status.

2.3. Procedure

The participants were initially greeted and introduced to the study. They were informed that they would review an informed consent document, which detailed the experimental procedure, including the upcoming interview. After reviewing and accepting the informed consent, participants received detailed instructions regarding the interview task.
The interview task required participants to describe a significant, out-of-the-ordinary event that had occurred within the past year, building on Vrij et al.’s (2017) protocol. They were also asked to share their perspective on euthanasia. The order of the interview contents (past event vs. opinion) was counterbalanced. Furthermore, half of the participants were instructed to be honest during the interview, and half were instructed to lie while being persuasive in their responses so as to appear credible to the investigators. The interviews were conducted by a trained interviewer who was blind to the experimental conditions and to the aims of the study.
Upon completion of the interview, the participants were informed that the interviewing phase had concluded and were asked to provide fully truthful responses for the following steps. First, they responded to socio-demographic questions and to manipulation check questions (see below). Then, they were subjected to a “digit span test” to evaluate their working memory capacities. After the digit span test, the participants completed a questionnaire including several scales, which are reported below.
Finally, the participants were thanked for their participation. They were requested not to share any details of the experiment with other participants to maintain the study’s integrity.
The above experimental procedure was approved by the ethics committee of the University of Bergamo, nr#5/2022-14/09/22.

2.4. Instruments and Variables

2.4.1. Manipulation Checks

For manipulation check purposes, the participants declared on a Likert scale: (i) how motivated they were to perform well in the experiment (1 = not motivated at all, 5 = very motivated); (ii) how in depth they prepared for the interview (1 = superficially, 5 = in depth); (iii) how good they thought their preparation was (1 = very bad, 5 = very good); (iv) whether they thought their preparation was sufficient or not (1 = insufficient, 5 = sufficient); (v) how much they lied during the interview (0 = did not lie at all, 10 = outright lied), and (vi) how strong their opinion toward euthanasia was (1 = very weak, 5 = very strong).

2.4.2. Personality

The participants were required to complete the Italian version of the Ten-Item Personality Inventory (Chiorri et al. 2015), a short scale measuring the Big Five: extraversion, neuroticism, conscientiousness, agreeableness, and openness to experience. The answers were provided on a Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The scores of each factor are calculated by averaging the response of the two items measuring it.
The participants were also required to complete the Italian version of the Machiavellianism personality scale (Bianchi and Mirkovic 2020), a 16-item, 5-point Likert scale (1 = strongly disagree, 5 = strongly agree). The scale measures the four dimensions of Machiavellianism (distrust of others, desire for status, desire for control and amoral manipulation), and its total score is obtained by summing all the answers.

2.4.3. Cognitive Performance

The participants’ cognitive performance was measured via a digit span test and the Cognitive Reflection Test (Frederick 2005). The latter is a psychological test that assesses an individual’s ability to override intuitive, automatic responses and engage in reflective thinking to solve problems. It typically includes questions designed to elicit incorrect intuitive responses, requiring participants to think more deeply to arrive at the correct answer. The original form consists of three items. The score is the number of correct answers and thus ranges from zero to three.
The participants were also administered a digit span test that measures an individual’s working memory capacity by asking them to repeat a series of numbers. The test thus helps evaluate a person’s ability to temporarily hold and manipulate information in their memory.

2.4.4. Creativity

Creativity was measured via the Cognitive Processes Associated with Creativity scale (Miller 2009, 2014), a 28-item measure on a 5-point Likert scale (1 = never, 5 = always). The scale measures six dimensions of creativity (idea manipulation, imagery/sensory, flow, metaphorical/analytical thinking, idea generation, and incubation), and its total score is obtained by averaging the participant’s responses.

2.5. Coding

Verbal Content

For specific details, we built on research on Reality Monitoring (Johnson and Raye 1981; Vrij 2015), the verifiability approach (Nahari et al. 2014; Palena et al. 2021a), and the complication approach (Vrij et al. 2021b). Reality Monitoring is a tool originally developed within memory research that was later applied to lie detection research. It builds on the assumption that experienced events are richer in verbal detail than non-experienced events. It includes spatial details (details concerning places and the spatial arrangements of people and objects, e.g., “he sat on my right”), temporal details (information on dates, times, and temporal sequences, e.g., “I arrived later than her”), sensory details (details based on the five human senses, e.g., “I saw a black car”), and affective details (information on participants’ affects, e.g., “I was scared”). The verifiability approach, rather than being focused on memory, is based on liars’ strategies. It assumes that liars will provide details that are difficult to check to avoid being caught lying. Nahari and colleagues (Nahari et al. 2014) suggested that details are verifiable when they leave traces (either digital or analogue, e.g., a CCTV video), are witnessed by an identifiable third person (e.g., “my sister saw me entering the market”), or are carried out with another identifiable person (e.g., “Mark and I went there together”). The complication approach (Vrij et al. 2017, 2021b) is a recently developed framework that considers both cues to truth and cues to deception.
It predicts that truth tellers report more complication details (details that make a story more complicated than it needs to be, e.g., “We flew from London to Las Vegas via Madrid because we have some friends living in Spain”) than liars, but that liars report more common knowledge details (stereotypical information about widely known situations, e.g., “the first day in Rome we visited the Colosseum”) and self-handicapping details (justifications as to why one cannot report more information, e.g., “I cannot tell much about the incident, it all happened too quickly”) than truth tellers.
For global impression cues (DePaulo et al. 2003; Vrij 2008), we coded Realism (how realistic the participants’ statement was), Clarity (how clear it sounded), Plausibility (how plausible it was), how immediate the statement appeared, how detailed it was, and how vague it sounded.
In lie detection research, the frequency of specific detail cues is usually counted. However, as suggested by Vrij et al. (2023), this is often not feasible: it requires the investigator to transcribe the interview and then proceed with the coding. Clearly, this approach is time-consuming and cannot always be applied. For example, some situations require quick, on-the-spot veracity assessments, such as border control interviews. Similarly, transcribing and coding interviews is not feasible when resources are limited. In such situations, real-time evaluation is the preferred method, and we therefore focused on real-time coding. Two trained coders, both with experience in lie detection research, coded the above variables for their presence on a Likert scale ranging from 1 (not at all) to 7 (very much).
Finally, the coders evaluated, through a binary response, whether the interviewees appeared to possess the following qualities: extraverted, agreeable, neurotic, open to experience, conscientious, Machiavellian, creative, and smart. This coding was deemed important because it enabled an evaluation of self-observer agreement on these variables.

3. Results

3.1. Manipulation Checks

First, manipulation checks were conducted. In this regard, the three questions regarding preparation were averaged to obtain a general preparation score (McDonald’s ω = 0.75). The truth tellers’ scores of motivation (M = 4.42; SD = 0.68) and preparation (M = 2.85; SD = 0.96) did not differ from those of liars (motivation M = 4.35; SD = 0.74; preparation M = 2.65; SD = 0.93); motivation t(77.43) = −0.48, p = 0.64, d = −0.11; preparation t(77.00) = −0.94, p = 0.35, d = −0.21. In contrast, the truth tellers (M = 0.28; SD = 0.55) declared to have lied less than liars (M = 8.40; SD = 2.11), t(44.35) = 23.56, p < 0.001, d = 5.27, which indicates that participants adhered to the experimental instructions. Last, the truth tellers’ opinions were stronger (M = 4.55; SD = 0.60) than those of liars (M = 4.08; SD = 0.97), t(64.80) = −2.64, p = 0.01, d = −0.59.
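The comparisons above rest on Welch's t-test with Cohen's d; a minimal sketch (the ratings below are made up for illustration, not the study's data) is:

```python
import numpy as np
from scipy.stats import ttest_ind

def welch_with_d(x, y):
    """Welch's t-test (unequal variances) plus Cohen's d (pooled SD)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t, p = ttest_ind(x, y, equal_var=False)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) +
                         (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    return t, p, (x.mean() - y.mean()) / pooled_sd

# Hypothetical 0-10 "how much did you lie?" ratings
truth_tellers = [0, 0, 1, 0, 1, 0, 0, 1] * 5
liars         = [8, 9, 7, 10, 8, 9, 8, 7] * 5
t, p, d = welch_with_d(truth_tellers, liars)
```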

3.2. Data Screening

Skewness and kurtosis of the dependent variables were explored to evaluate whether any variable was non-normally distributed, based on Kline’s limits of |3| for skewness and |10| for kurtosis (Frederick et al. 2022; Kline 2023). The variable with the highest skewness (3.17) and kurtosis (9.99) was spatial details for opinions. However, since it was only slightly beyond the limit for skewness, it was not transformed.
Further, based on Hoaglin and Iglewicz’s (1987) suggestions, we excluded 36 participants whose score on any of the dependent variables was beyond the third quartile plus 2.2 times the interquartile range (IQR) or below the first quartile minus 2.2 times the IQR. The analyses on verbal content were conducted both with and without such outliers, as stated in the pre-registration of this experiment.
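The two screening rules above (Kline's |3|/|10| limits and Hoaglin and Iglewicz's 2.2 × IQR fences) can be sketched as follows; the data are invented for illustration:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def screen(x, skew_lim=3.0, kurt_lim=10.0, whisker=2.2):
    """Flag non-normality (Kline's |3|/|10| limits) and mark values
    outside Q1 - 2.2*IQR .. Q3 + 2.2*IQR (Hoaglin & Iglewicz 1987)."""
    x = np.asarray(x, dtype=float)
    non_normal = abs(skew(x)) > skew_lim or abs(kurtosis(x)) > kurt_lim
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    outlier = (x < q1 - whisker * iqr) | (x > q3 + whisker * iqr)
    return non_normal, outlier

# Hypothetical coder scores with one extreme value appended
scores = np.array(list(range(20)) + [1000], dtype=float)
non_normal, outlier = screen(scores)
```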
Last, the Intraclass Correlation Coefficient (ICC, model 2,1) (Shrout and Fleiss 1979) between the two coders was good as it was always above 0.89, except for the micro-expression of disgust, which was still acceptable (ICC = 0.66 for event, 0.79 for opinion).
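For readers unfamiliar with ICC(2,1), a compact implementation of the Shrout and Fleiss two-way random-effects, single-rater, absolute-agreement formula is sketched below (illustrative ratings only; note how a constant between-rater offset lowers the coefficient, since the model targets absolute agreement rather than consistency):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1), two-way random effects, absolute agreement, single
    rater (Shrout & Fleiss 1979). `ratings` is subjects x raters."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((Y - Y.mean(axis=1, keepdims=True)
                    - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

perfect = [[1, 1], [2, 2], [3, 3], [4, 4]]   # two coders agree exactly
biased  = [[1, 2], [2, 3], [3, 4], [4, 5]]   # coder 2 one point higher
icc_perfect = icc_2_1(perfect)
icc_biased = icc_2_1(biased)
```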

3.3. Pre-Registered Analyses

3.3.1. K-Means Cluster Analysis

A k-means cluster analysis was conducted on the participants’ standardized scores of the Big Five, Machiavellianism, CRT, digit span, and creativity. The analyses showed that the data were suitable for cluster analysis (Hopkins’ H = 0.43). The elbow algorithm suggested the presence of two clusters. The participants’ pattern of scores on the above variables is depicted below. As Figure 1 shows, participants in cluster 1 showed high scores on agreeableness and conscientiousness, slightly higher than average scores on creativity and working memory (digit span), low scores on Machiavellianism and extraversion, slightly lower than average scores on neuroticism, and average scores on CRT and openness to experience. Cluster 2 showed the opposite pattern. For the two clusters, the total sum of squares was 711, whereas the between sum of squares was 94.90.
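This clustering step can be sketched with hypothetical standardized scores standing in for the actual questionnaire data (using scikit-learn's KMeans; the elbow inspection amounts to comparing within-cluster sums of squares across candidate k):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical stand-in for 80 participants x 9 measures
# (Big Five, Machiavellianism, CRT, digit span, creativity).
scores = np.vstack([rng.normal(-0.5, 1, (40, 9)),
                    rng.normal(+0.5, 1, (40, 9))])
z = StandardScaler().fit_transform(scores)

# Elbow: total within-cluster sum of squares (inertia) for k = 1..5
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
               .fit(z).inertia_ for k in range(1, 6)}
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
```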

3.3.2. Hypothesis Testing—Complete Dataset

A 2 (Veracity: truth tellers vs. liars, between-subjects) × 2 (Content: autobiographical event vs. opinion, within-subjects) × 2 (Profile: cluster 1 vs. cluster 2) MANOVA was conducted, with the specific cues to deception as dependent variables (sensory, spatial, temporal, affective, verifiable, complication, self-handicapping, and common knowledge details). At a multivariate level, the main effect of content was statistically significant (Pillai’s trace F(8, 69) = 81.38, p < 0.001, ηp² = 0.90). In contrast, the main effects of veracity (Pillai’s trace F(8, 69) = 0.85, p = 0.56, ηp² = 0.09) and cluster membership (Pillai’s trace F(8, 69) = 0.88, p = 0.54, ηp² = 0.09) were statistically non-significant. The two-way interaction effects between veracity and cluster membership (Pillai’s trace F(8, 69) = 0.21, p = 0.99, ηp² = 0.02), between content and veracity (Pillai’s trace F(8, 69) = 1.41, p = 0.21, ηp² = 0.14), and between cluster membership and content (Pillai’s trace F(8, 69) = 0.85, p = 0.56, ηp² = 0.09) were also statistically non-significant. Likewise, the three-way interaction between veracity, cluster membership and content (Pillai’s trace F(8, 69) = 1.17, p = 0.33, ηp² = 0.12) was statistically non-significant. Therefore, the univariate effects were evaluated only for statement content. The analysis showed that all univariate effects were significant at p < 0.001 (Fs 28.65 ÷ 211.59; ηp² 0.27 ÷ 0.74). Except for complication and common knowledge details, participants reported more detail when describing the event than when describing their opinion (Table 1). A series of Bayesian repeated-measures ANOVAs, one for each dependent variable, was conducted. All showed that the best model was the one including only content as a factor, whereas all the other models were less supported by the data (all other models BFs < 1).
The only exception to this was for the variable “common knowledge details”, which showed that the best model was the one accounting for the main effects of content and cluster membership (for the model including only content, BF = 0.68; all other BFs < 1). Yet, the evidence in support of the model including the two factors was weak compared with the model including only content (BF = 1.54). The BFs thus support the results obtained via the NHST analyses. Lastly, Table 2 provides a full description of all the specific details split by the experimental groups.
The same analysis as above was conducted with general impression cues as dependent variables (clarity, realism, plausibility, immediacy, detail, and vagueness). Again, at a multivariate level, the main effect of content was statistically significant (Pillai’s trace F(6, 71) = 4.05, p = 0.002, ηp² = 0.26). In contrast, the main effects of veracity (Pillai’s trace F(6, 71) = 1.99, p = 0.08, ηp² = 0.14) and cluster membership (Pillai’s trace F(6, 71) = 1.27, p = 0.28, ηp² = 0.10) were statistically non-significant. Likewise, the two-way interaction effects between veracity and cluster membership (Pillai’s trace F(6, 71) = 1.21, p = 0.31, ηp² = 0.09), between content and veracity (Pillai’s trace F(6, 71) = 0.92, p = 0.48, ηp² = 0.07) and between cluster membership and content (Pillai’s trace F(6, 71) = 0.91, p = 0.49, ηp² = 0.07) were non-significant. The three-way interaction between veracity, cluster membership, and content (Pillai’s trace F(6, 71) = 2.00, p = 0.08, ηp² = 0.14) was also statistically non-significant. Therefore, the univariate effects were evaluated only for statement content. The analysis showed that only plausibility (F(1, 76) = 4.36, p = 0.04, ηp² = 0.05) and vagueness (F(1, 76) = 5.49, p = 0.02, ηp² = 0.07) differed between events and opinions. Participants appeared more plausible (est. marginal mean = 5.99, SE = 0.10) and vague (est. marginal mean = 2.15, SE = 0.15) when describing the event than when sharing their opinions (plausibility est. marginal mean = 5.71, SE = 0.13; vagueness est. marginal mean = 1.76, SE = 0.13). All other effects were not statistically significant (Fs 0.09 ÷ 3.86; p 0.053 ÷ 0.76, ηp² 0.00 ÷ 0.05). Again, a series of Bayesian repeated-measures ANOVAs was conducted and showed that for the variables “detail”, “realistic”, “clarity” and “immediacy”, the null models were the best, whereas for the variables “plausibility” and “vagueness”, the best models were the ones including only content as a factor.
Again, the BFs supported the analyses conducted via NHST. The above analyses thus only partially supported H3 for specific details. Table 3 provides a full description for all the global details split by the experimental groups.
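Pillai's trace, the multivariate statistic reported throughout these analyses, is the trace of H(H + E)⁻¹, where H and E are the hypothesis and error sums-of-squares-and-cross-products (SSCP) matrices. A one-way sketch with toy data (not the study's three-factor design) is:

```python
import numpy as np

def sscp_oneway(groups):
    """Between (H) and within (E) SSCP matrices for a one-way MANOVA.
    `groups` is a list of (n_i x p) arrays."""
    grand = np.vstack(groups).mean(axis=0)
    H = sum(len(g) * np.outer(g.mean(axis=0) - grand,
                              g.mean(axis=0) - grand) for g in groups)
    E = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    return H, E

def pillai_trace(H, E):
    """Pillai's trace V = tr(H @ inv(H + E))."""
    return np.trace(H @ np.linalg.inv(H + E))

g = np.array([[0., 0.], [1., 2.], [2., 1.], [3., 3.]])
H0, E0 = sscp_oneway([g, g.copy()])   # identical group means -> V near 0
H1, E1 = sscp_oneway([g, g + 10.0])   # well-separated means -> V near 1
v_null, v_sep = pillai_trace(H0, E0), pillai_trace(H1, E1)
```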

3.3.3. Hypothesis Testing—Outliers Excluded

An assessment of participants’ scores on the dependent variables showed that there were 36 participants with extreme scores, beyond 2.2 times the IQR. As stated in the pre-registration, the analyses for hypothesis testing were re-conducted without them to test whether the conclusions would remain the same as with the complete dataset.
For specific details at a multivariate level, in addition to the main effect of content (Pillai’s trace F(8, 33) = 34.90, p < 0.001, ηp² = 0.89), the content by veracity interaction was also statistically significant (Pillai’s trace F(8, 33) = 2.88, p = 0.015, ηp² = 0.41). All the other multivariate effects were not statistically significant (Fs 0.37 ÷ 1.95; p 0.08 ÷ 0.93, ηp² 0.08 ÷ 0.32). Regarding the univariate effect for content, all effects were again statistically significant at p < 0.001 (Fs 13.77 ÷ 108.83; ηp² 0.26 ÷ 0.73); in contrast, no content by veracity effect was significant at a univariate level (Fs 0.25 ÷ 3.28; p 0.08 ÷ 0.62; ηp² 0.00 ÷ 0.08). As for the complete dataset, the participants scored higher when discussing the events than when discussing the opinions for all variables, except for complications and common knowledge details (Table 4 and Table 5).
For global impression details at a multivariate level, only the main effect of content was significant (Pillai’s trace F(6, 35) = 4.72, p = 0.001, ηp² = 0.45). All the other multivariate effects were not statistically significant (Fs 0.88 ÷ 2.44; p 0.045 ÷ 0.52, ηp² 0.13 ÷ 0.29). For the univariate effects of content, a significant difference was reached only for the variable vagueness (F(1, 40) = 11.45, p = 0.002, ηp² = 0.22). All other variables were not statistically significant (Fs 0.83 ÷ 1.79; p 0.19 ÷ 0.45, ηp² 0.01 ÷ 0.04). Participants appeared vaguer when discussing the autobiographical event (est. marginal mean = 2.53, SE = 0.22) than when sharing their opinions (est. marginal mean = 1.77, SE = 0.11). In summary, the analyses without the outliers also provided only partial support for H3 for specific details. Table 6 provides a full description for all the global details split by the experimental groups when excluding outliers.

3.3.4. Self-Observer Agreement

As stated in the pre-registration, we tested whether the interviewees’ own answers on the questionnaires about their personality, cognitive performance, and creativity correlated with the observers’ ratings. To this end, the coders coded, in binary fashion, whether the interviewees appeared extraverted, neurotic, agreeable, conscientious, open to experience, Machiavellian, creative, and smart. The only significant point-biserial correlation was for extraversion (r = 0.23, p = 0.04). This indicates that observers might find it difficult to correctly identify interviewees’ personality, creativity, and cognitive abilities.
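As a brief sketch (not the study's analysis code), a point-biserial correlation is simply a Pearson correlation in which one variable is binary (e.g., the observer's yes/no "extraverted" rating), and its significance can be tested by converting r to a t statistic with n − 2 degrees of freedom. The reported r = 0.23 with n = 80 yields t ≈ 2.09, consistent with the reported two-tailed p ≈ 0.04.

```python
import math

def pearson_r(x, y):
    """Pearson correlation; with a binary x this is the point-biserial r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def r_to_t(r, n):
    """t statistic for H0: rho = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

print(round(r_to_t(0.23, 80), 2))  # ≈ 2.09
```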

3.3.5. Unregistered Analyses

The pre-registered analyses did not support most of our predictions and experimental hypotheses. This could be because our experimental design encouraged only short answers from the interviewees and left limited room for verbal content to be effective. When this is the case, nonverbal behaviour can potentially be used by the investigator, although its limitations within the investigative interview field are well recognised. In the words of Vrij et al. (2019), “researchers should focus on examining nonverbal behaviours in settings where there is no alternative to making nonverbal veracity assessment” (p. 312). One type of nonverbal cue that has had a wide impact and received much public attention is the presence of micro-expressions. These are fleeting emotional facial expressions that, according to the leakage theory (Ekman and Friesen 1969), leak from the interviewee, who cannot control them, and last less than 1/25th of a second. However, researchers are sceptical about the efficacy of micro-expressions for lie detection purposes, as this assumption has not been supported by evidence. For example, Porter and ten Brinke (2008) found that only 2% of all the videos they analysed depicted such expressions, and almost half of them were shown by truth tellers rather than liars. Burgoon (2018) also argued that micro-expressions might be ineffective and that it is more likely their absence, rather than their presence, is indicative of lying, yet to the best of our knowledge, this assumption has not been tested directly.
We fully agree with the above and believe that micro-expressions are not effective for lie detection purposes, yet we decided to analyse these facial expressions. There were two main reasons: First, we wanted to test the hypothesis that liars show fewer such expressions than truth tellers, rather than the other way around, during their interview. Second, research on micro-expressions usually focuses on their presence during an interview. However, people may use strategies to suppress leaking expressions during an interview, thus showing them only rarely, as found by Porter and ten Brinke (2008). Yet, to the best of our knowledge, it has never been explored whether such micro-expressions leak from the interviewee in the latency time between the end of the interviewer’s question and the start of the interviewee’s answer. In essence, we were interested in exploring whether the micro-expressions are shown more often by liars than truth tellers before the interviewee starts to respond to the question, rather than during the response itself. The self-presentation theory by DePaulo et al. (1996) posits that individuals control their behaviour to manage the way others perceive them and to appear credible. This might be easier while talking, as speech can be used to persuade others of one’s honesty, than before answering a question, especially when this is unexpected (Vrij et al. 2018).
To explore the above, the same two coders who coded the verbal content also coded the presence of micro-expressions on a Likert scale ranging from 1 (not present at all) to 7 (very much present) for seven emotions: anger, fear, disgust, sadness, contempt, surprise, and happiness. The scores for the seven emotions were then averaged to obtain a single score indicating how much, on average, each participant showed micro-expressions during the interview. Further, the coders also coded whether any micro-expression appeared in the latency time between the end of the interviewer’s question and the start of the interviewee’s answer.
A 2 (Veracity: truth tellers vs. liars, between-subjects) × 2 (Content: autobiographical event vs. opinion, within-subjects) × 2 (Profile: cluster 1 vs. cluster 2, between-subjects) ANOVA was conducted, with the presence of micro-expressions during the interview as the dependent variable. For the complete dataset including the outliers, only the content by veracity interaction was significant (Pillai’s trace, F(1, 76) = 5.16, p = 0.02, ηp2 = 0.06). All other effects were non-significant (Fs = 0.002–1.70; ps = 0.20–0.96; ηp2s = 0.00–0.02). Follow-up analyses showed no significant difference between truth tellers and liars for the opinion statements (t(62.23) = 1.48, p = 0.07, one-tailed, Hedges’ g = 0.38; truth tellers M = 1.20, SD = 0.46; liars M = 1.07, SD = 0.27). In contrast, the comparison was significant for the event statements (t(62.16) = −1.78, p = 0.04, one-tailed, Hedges’ g = 0.32): truth tellers showed, on average, fewer micro-expressions (M = 1.05, SD = 0.22) than liars (M = 1.17, SD = 0.38). This apparently supports the micro-expressions hypothesis, but it is paramount to underline that even though liars showed more micro-expressions than truth tellers, they still showed almost none, as their mean was close to the lowest point of the Likert scale, indicating that micro-expressions were “not present at all” (score 1, see above). The ANOVA could not be conducted without the outliers because, in that case, all participants showed no micro-expressions. Table 7 provides the means and standard deviations of the micro-expression scores according to the independent variables (complete dataset).
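The follow-up comparison can be sketched as a Welch (unequal-variances) t test plus Hedges' g. The inputs below are the rounded means and SDs reported above for the event statements (truth tellers M = 1.05, SD = 0.22; liars M = 1.17, SD = 0.38; n = 40 per group), so the results approximate, rather than exactly reproduce, the reported t and g.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t and Welch-Satterthwaite degrees of freedom."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    t = (m1 - m2) / se
    num = (s1 ** 2 / n1 + s2 ** 2 / n2) ** 2
    den = (s1 ** 2 / n1) ** 2 / (n1 - 1) + (s2 ** 2 / n2) ** 2 / (n2 - 1)
    return t, num / den

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Cohen's d on the pooled SD, with the small-sample bias correction."""
    pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m2 - m1) / pooled
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

t, df = welch_t(1.05, 0.22, 40, 1.17, 0.38, 40)
g = hedges_g(1.05, 0.22, 40, 1.17, 0.38, 40)
print(round(t, 2), round(df, 1), round(g, 2))  # t ≈ -1.73, df ≈ 62.5, g ≈ 0.38
```

The small discrepancies from the reported values (t = −1.78, g = 0.32) are expected, since the reported statistics were computed on the raw data rather than on rounded summaries.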
To analyse the appearance of micro-expressions during the latency time, a chi-square test of independence was conducted, with veracity (truth tellers vs. liars) and micro-expressions (present vs. absent) as the two variables of interest. Regarding the complete dataset, when focusing on event statements, no significant result appeared (χ2(1) = 0.83, p = 0.36). In contrast, when focusing on opinion statements, the effect was significant (χ2(1) = 8.35, p = 0.004). Among the liars, 36 showed micro-expressions in the latency time, whereas 4 did not; among the truth tellers, 25 showed micro-expressions whereas 15 did not. A similar result was obtained on the reduced dataset, without the outliers: event statements (χ2(1) = 0.30, p = 0.59), and opinion statements (χ2(1) = 5.13, p = 0.02). In this dataset, 19 liars showed micro-expressions and 2 did not; 14 truth tellers showed micro-expressions and 9 did not.
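The 2 × 2 chi-square tests above can be reproduced directly from the reported counts: for the complete dataset's opinion statements, 36 of 40 liars versus 25 of 40 truth tellers showed a micro-expression in the latency time. A minimal Pearson chi-square (no continuity correction) sketch:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for table [[a, b], [c, d]]."""
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    observed = ((a, b), (c, d))
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

# Rows: liars, truth tellers; columns: micro-expression present, absent.
print(round(chi_square_2x2(36, 4, 25, 15), 2))  # 8.35, complete dataset
print(round(chi_square_2x2(19, 2, 14, 9), 2))   # 5.13, outliers excluded
```

Both values match the reported statistics, confirming that no continuity correction was applied.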
In summary, the above analyses support the idea that micro-expressions are not useful for lie detection, as they were almost completely absent during the statement and both truth tellers and liars showed them in the latency time between the investigator’s question and the interviewee’s answer.

4. Discussion

In this experiment, we tested the assumption that the effect of veracity (i.e., telling the truth vs. lying) on cues to deception is moderated by both the content of the statement and interpersonal differences, two important aspects that are usually overlooked in the academic literature. Indeed, to improve deception detection, the search for cues should be tailored to the specific interviewee and statement content.
We found, however, that our hypotheses received only little support. Verbal content did not statistically differ between truth tellers and liars, nor between the two profiles we obtained. Also, profile membership (the individual characteristics of the interviewee) did not moderate the association between veracity and cues to deception. Our results go against previous literature showing that veracity is associated with verbal content (Vrij 2015) and that profile membership does affect lying (Palena et al. 2021b, 2022). Further, given the lack of a moderating role of profile membership, our results seem to suggest that the difference in verbal cues between truth tellers and liars is not affected by individual characteristics. Such a result could be seen as positive, as it implies that the interrogator need not worry about selecting a specific set of cues for a specific individual. However, this clashes with the theoretical assumption that individual characteristics do play a role in lying (Vrij 2016). Yet, our results should not discourage further efforts to explore the role of interpersonal differences, as the absence of a moderating role of profiles can be explained by our experimental design. Our participants were asked to provide two free recalls, and in both cases, no specific technique was applied to increase information elicitation. This is an important limitation, as we know from the literature that interviewees rarely have a clear idea of how much detail is expected from them and tend to report only the details that they believe to be the most relevant, overlooking others. To address this issue, specific techniques have been developed to encourage interviewees to tell more, such as the provision of a model statement (Leal et al. 2015; Porter et al. 2018).
As a consequence, we cannot exclude that the lack of a moderating role of profiles (hence of interpersonal differences) is due to our participants providing short answers, thus leaving little room for any cue to deception to be effective, an interpretation that is also corroborated by the lack of a veracity effect. In this regard, future studies should try to replicate our experiment when specific interviewing techniques aimed at increasing information elicitation (e.g., the model statement) are applied. Furthermore, it cannot be excluded that our results were influenced by the variables we selected to obtain the profiles. Although we started from previous research on what characteristics make people good liars (Vrij et al. 2010), these might be irrelevant in the context of the present experiment (i.e., short answers from the participants). Hence, future studies should first explore which individual characteristics influence cues to deception the most, and only then aim at obtaining profile membership based on such variables.
We also found, as expected, that the content of the statements had an effect on verbal cues to deception. Statements about the autobiographical events scored higher on Reality Monitoring and Verifiable details than opinion statements, which makes sense considering that the latter are unlikely to contain such details. Additionally, opinion statements contained more common knowledge and complication details, which might at first sight appear unexpected. In hindsight, though, it is possible that this was driven by the topic of the opinion statements. Euthanasia is a delicate and complex topic that can be considered from several perspectives (philosophical, psychological, relational, etc.), which might lead to greater amounts of complication and common knowledge details. Regarding the latter, it is possible that our participants tended to report stereotypical beliefs and reasons (i.e., common knowledge details) in support of or against euthanasia, especially if they were not willing to add their own personal beliefs. Yet, without experimental testing, this interpretation remains speculative; hence, further research is needed in this regard.
In any case, the above results make it clear that when the veracity of statements whose content does not focus on past events needs to be assessed, scholars and practitioners should either develop new suitable verbal cues or focus on global impression cues. The latter, indeed, are based on a holistic evaluation of a statement and do not focus on specific details. Any statement can be coded in terms of plausibility (Vrij et al. 2021a) and detailedness, and the latter was recently shown to be a strong predictor of observers’ lie detection accuracy, even when considered alone (Verschuere et al. 2023). Yet, although such global cues are less affected by the content of the statement than, for example, Reality Monitoring criteria, they should be refined and adapted to the content of the statement. In essence, although these cues can be applied to any type of content, scholars should make a shared effort to clarify how they should be defined in each specific context. For example, “vagueness” can be defined as “not reporting precise details about people, location, and action” for event statements but as “not showing strong personal beliefs and/or emotional involvement” for opinion statements.
We also explored the role of micro-expressions due to their popularity within the academic and popular literature. Although we found a significant difference between truth tellers and liars in the number of micro-expressions, there was a floor effect: both truth tellers and liars produced almost no micro-expressions during the interview. This goes against Ekman’s assumption (Ekman and Friesen 1969; Ekman 2001) but corroborates other work which found that micro-expressions are ineffective for detecting lies (Burgoon 2018; Porter and ten Brinke 2008). Furthermore, we also explored whether participants showed micro-expressions before they started answering the interviewer. This was based on the assumption that the emotional activation for liars is stronger in that timeframe than during the interview, when they can use language to persuade the interviewer that they are telling the truth. To the best of our knowledge, this had never been tested before. Our results showed that there was indeed an association between veracity and the presence of micro-expressions during the latency time, which may provide the basis for new experiments investigating this aspect. However, this was only true for opinion statements. This result can be interpreted on the basis of the emotions involved in the two statements. It is likely that the participants experienced higher emotional activation when discussing a sensitive topic such as euthanasia than when describing a past, memorable event. This makes sense, especially when considering that all participants scored high on the strength of their feelings toward euthanasia. Yet, we did not directly assess the participants’ emotional arousal, and this should be examined in future studies.
Furthermore, it is paramount to note that although there was an association between the presence (or absence) of micro-expressions during the latency time and veracity, truth tellers also showed such expressions, meaning that assessing interviewees’ veracity based on the presence of micro-expressions during the latency time is inadvisable. Future research should also explore this question.
Our experiment was not free from limitations. As previously stated, it elicited only short answers from the interviewees. Future studies should adapt our design by applying specific interviewing techniques to encourage interviewees to be more forthcoming. Also, we coded verbal cues that can be present only when the interviewee is talkative. Yet, as in our case, participants might be unwilling to talk; hence, future studies should also explore new verbal and nonverbal cues that can be applied in such contexts, which is particularly relevant for practitioners (Vrij et al. 2023).

Author Contributions

Conceptualization, N.P.; Methodology, N.P. and F.D.N.; Formal analysis, N.P. and F.D.N.; Investigation, F.D.N.; Data curation, F.D.N.; Writing—original draft, N.P. and F.D.N.; Writing—review & editing, N.P. and F.D.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and the guidelines provided by the Italian Association of Psychology, and was approved by the Institutional Review Board (or Ethics Committee) of the University of Bergamo (protocol code 05/2022, approved on 14 September 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data will be made available by the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bianchi, Renzo, and Danijela Mirkovic. 2020. Is Machiavellianism associated with depression? A cluster-analytic study. Personality and Individual Differences 152: 109594. [Google Scholar] [CrossRef]
  2. Boduszek, Daniel, Agata Debowska, Nicole Sherretts, Dominic Willmott, Mike Boulton, Krzysztof Kielkiewicz, Katarzyna Popiolek, and Philip Hyland. 2021. Are Prisoners More Psychopathic than Non-forensic Populations? Profiling Psychopathic Traits among Prisoners, Community Adults, University Students, and Adolescents. Deviant Behavior 42: 232–44. [Google Scholar] [CrossRef]
  3. Bogaard, Glynis, Ewout H. Meijer, Aldert Vrij, and Galit Nahari. 2023. Detecting deception using comparable truth baselines. Psychology, Crime & Law 29: 567–83. [Google Scholar] [CrossRef]
  4. Bond, Charles F., Jr., and Bella M. DePaulo. 2006. Accuracy of deception judgments. Personality and Social Psychology Review 10: 214–34. [Google Scholar] [CrossRef] [PubMed]
  5. Bond, Charles F., Jr., and Bella M. DePaulo. 2008. Individual differences in judging deception: Accuracy and bias. Psychological Bulletin 134: 477. [Google Scholar] [CrossRef]
  6. Burgoon, Judee K. 2018. Microexpressions Are Not the Best Way to Catch a Liar. Frontiers in Psychology 9: 1672. [Google Scholar] [CrossRef]
  7. Calderon, Sofia, Erik Mac Giolla, Karl Ask, and Pär Anders Granhag. 2018. Drawing what lies ahead: False intentions are more abstractly depicted than true intentions. Applied Cognitive Psychology 32: 518–22. [Google Scholar] [CrossRef] [PubMed]
  8. Caso, Letizia, Fridanna Maricchiolo, Stefano Livi, Aldert Vrij, and Nicola Palena. 2018. Factors affecting Observers’ Accuracy when Assessing Credibility: The Effect of the Interaction between Media, Senders’ Competence and Veracity. The Spanish Journal of Psychology 21: E49. [Google Scholar] [CrossRef]
  9. Caso, Letizia, Nicola Palena, Aldert Vrij, and Augusto Gnisci. 2019a. Observers’ performance at evaluating truthfulness when provided with Comparable Truth or Small Talk Baselines. Psychiatry, Psychology and Law 26: 571–79. [Google Scholar] [CrossRef]
  10. Caso, Letizia, Nicola Palena, Elga Carlessi, and Aldert Vrij. 2019b. Police accuracy in truth/lie detection when judging baseline interviews. Psychiatry, Psychology and Law 26: 841–50. [Google Scholar] [CrossRef]
  11. Chiorri, Carlo, Fabrizio Bracco, Tommaso Piccinno, Cinzia Modafferi, and Valeria Battini. 2015. Psychometric properties of a revised version of the Ten Item Personality Inventory. European Journal of Psychological Assessment 31: 109–19. [Google Scholar] [CrossRef]
  12. DePaulo, Bella M., Deborah A. Kashy, Susan E. Kirkendol, Melissa M. Wyer, and Jennifer A. Epstein. 1996. Lying in everyday life. Journal of Personality and Social Psychology 70: 979–95. [Google Scholar] [CrossRef]
  13. DePaulo, Bella M., James J. Lindsay, Brian E. Malone, Laura Muhlenbruck, Kelly Charlton, and Harris Cooper. 2003. Cues to deception. Psychological Bulletin 129: 74–118. [Google Scholar] [CrossRef] [PubMed]
  14. Ekman, Paul. 2001. Telling Lies. Clues to Deceit in the Marketplace, Politics, and Marriage. New York: Norton. [Google Scholar]
  15. Ekman, Paul, and Wallace V. Friesen. 1969. Nonverbal leakage and clues to deception. Psychiatry 32: 88–106. [Google Scholar] [CrossRef] [PubMed]
  16. Ewens, Sarah, Aldert Vrij, Minhwan Jang, and Eunkyung Jo. 2014. Drop the small talk when establishing baseline behaviour in interviews. Journal of Investigative Psychology and Offender Profiling 11: 244–52. [Google Scholar] [CrossRef]
  17. Faul, Franz, Edgar Erdfelder, Albert-Georg Lang, and Axel Buchner. 2007. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods 39: 175–91. [Google Scholar] [CrossRef] [PubMed]
  18. Frederick, David A., Tracy L. Tylka, Rachel F. Rodgers, Jamie-Lee Pennesi, Lexie Convertino, Michael C. Parent, Tiffany A. Brown, Emilio J. Compte, Catherine P. Cook-Cottone, Canice E. Crerand, and et al. 2022. Pathways from sociocultural and objectification constructs to body satisfaction among women: The U.S. Body Project I. Body Image 41: 195–208. [Google Scholar] [CrossRef] [PubMed]
  19. Frederick, Shane. 2005. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19: 25–42. [Google Scholar] [CrossRef]
  20. Hoaglin, David C., and Boris Iglewicz. 1987. Fine-Tuning Some Resistant Rules for Outlier Labeling. Journal of the American Statistical Association 82: 1147–49. [Google Scholar] [CrossRef]
  21. Johnson, Marcia K., and Carol L. Raye. 1981. Reality monitoring. Psychological Review 88: 67–85. [Google Scholar] [CrossRef]
  22. Kline, Rex B. 2023. Principles and Practice of Structural Equation Modeling. New York: Guilford Publications. [Google Scholar]
  23. Leal, Sharon, Aldert Vrij, Lara Warmelink, Zarah Vernham, and Ronald P. Fisher. 2015. You cannot hide your telephone lies: Providing a model statement as an aid to detect deception in insurance telephone calls. Legal and Criminological Psychology 20: 129–46. [Google Scholar] [CrossRef]
  24. Leal, Sharon, Aldert Vrij, Samantha Mann, and Ronald P. Fisher. 2010. Detecting true and false opinions: The Devil’s Advocate approach as a lie detection aid. Acta Psychologica 134: 323–29. [Google Scholar] [CrossRef]
  25. Levine, Timothy R. 2014. Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection. Journal of Language and Social Psychology 33: 378–92. [Google Scholar] [CrossRef]
  26. Levine, Timothy R. 2022. Content, context, cues, and demeanor in deception detection. Frontiers in Psychology 13: 988040. [Google Scholar] [CrossRef]
  27. Miller, Angela L. 2009. Cognitive Processes Associated with Creativity: Scale Development and Validation. Ph.D. thesis, Ball State University, Muncie, IN, USA. [Google Scholar]
  28. Miller, Angie L. 2014. A Self-Report Measure of Cognitive Processes Associated with Creativity. Creativity Research Journal 26: 203–18. [Google Scholar] [CrossRef]
  29. Nahari, Galit, Aldert Vrij, and Ronald P. Fisher. 2014. Exploiting liars’ verbal strategies by examining the verifiability of details. Legal and Criminological Psychology 19: 227–39. [Google Scholar] [CrossRef]
  30. Palena, Nicola, and Letizia Caso. 2021. Investigative Interviewing Research: Ideas and Methodological Suggestions for New Research Perspectives. Frontiers in Psychology 12: 715028. [Google Scholar] [CrossRef]
  31. Palena, Nicola, Letizia Caso, Aldert Vrij, and Galit Nahari. 2021a. The Verifiability Approach: A Meta-Analysis. Journal of Applied Research in Memory and Cognition 10: 155–66. [Google Scholar] [CrossRef]
  32. Palena, Nicola, Letizia Caso, Aldert Vrij, and Robin Orthey. 2018. Detecting deception through small talk and comparable truth baselines. Journal of Investigative Psychology and Offender Profiling 15: 124–32. [Google Scholar] [CrossRef]
  33. Palena, Nicola, Letizia Caso, Lucrezia Cavagnis, and Andrea Greco. 2021b. Profiling the Interrogee: Applying the Person-Centered Approach in Investigative Interviewing Research. Frontiers in Psychology 12: 722893. [Google Scholar] [CrossRef] [PubMed]
  34. Palena, Nicola, Letizia Caso, Lucrezia Cavagnis, Andrea Greco, and Aldert Vrij. 2022. Exploring the relationship between personality, morality and lying: A study based on the person-centred approach. Current Psychology 42: 20502–14. [Google Scholar] [CrossRef]
  35. Porter, Cody Normitta, Aldert Vrij, Sharon Leal, Zarah Vernham, Giacomo Salvanelli, and Niall McIntyre. 2018. Using Specific Model Statements to Elicit Information and Cues to Deceit in Information-Gathering Interviews. Journal of Applied Research in Memory and Cognition 7: 132–42. [Google Scholar] [CrossRef]
  36. Porter, Stephen, and Leanne ten Brinke. 2008. Reading between the Lies: Identifying Concealed and Falsified Emotions in Universal Facial Expressions. Psychological Science 19: 508–14. [Google Scholar] [CrossRef]
  37. Shrout, Patrick E., and Joseph L. Fleiss. 1979. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin 86: 420–28. [Google Scholar] [CrossRef]
  38. Sooniste, Tuule, Pär Anders Granhag, Melanie Knieps, and Aldert Vrij. 2013. True and false intentions: Asking about the past to detect lies about the future. Psychology, Crime & Law 19: 673–85. [Google Scholar]
  39. The Global Deception Research Team. 2006. A world of lies. Journal of Cross-Cultural Psychology 37: 60–74. [Google Scholar] [CrossRef]
  40. Tomas, Frédéric, Olivier Dodier, and Samuel Demarchi. 2021. Baselining affects the production of deceptive narratives. Applied Cognitive Psychology 35: 300–7. [Google Scholar] [CrossRef]
  41. Verigin, Brianna L., Ewout H. Meijer, and Aldert Vrij. 2020. A within-statement baseline comparison for detecting lies. Psychiatry, Psychology and Law 28: 94–103. [Google Scholar] [CrossRef]
  42. Verschuere, Bruno, Chu-Chien Lin, Sara Huismann, Bennett Kleinberg, Marleen Willemse, Emily Chong Jia Mei, Thierry van Goor, Leonie H. S. Löwy, Obed Kwame Appiah, and Ewout Meijer. 2023. The use-the-best heuristic facilitates deception detection. Nature Human Behaviour 7: 718–28. [Google Scholar] [CrossRef]
  43. Vrij, Aldert. 2008. Detecting Lies and Deceit: Pitfalls and Opportunities, 2nd ed. Chichester: John Wiley and Sons. [Google Scholar]
  44. Vrij, Aldert. 2015. Verbal lie detection tools: Statement validity analysis, reality monitoring and scientific content analysis. In Detecting Deception: Current Challenges and Cognitive Approaches. Edited by Pär Anders Granhag, Aldert Vrij and Bruno Verschuere. Chichester: John Wiley and Sons, pp. 3–35. [Google Scholar]
  45. Vrij, Aldert. 2016. Baselining as a Lie Detection Method. Applied Cognitive Psychology 30: 1112–19. [Google Scholar] [CrossRef]
  46. Vrij, Aldert, and Pär Anders Granhag. 2012. Eliciting cues to deception and truth: What matters are the questions asked. Journal of Applied Research in Memory and Cognition 1: 110–17. [Google Scholar] [CrossRef]
  47. Vrij, Aldert, Haneen Deeb, Sharon Leal, Pär-Anders Granhag, and Ronald P. Fisher. 2021a. Plausibility: A verbal cue to veracity worth examining? European Journal of Psychology Applied to Legal Context 13: 47–53. [Google Scholar] [CrossRef]
  48. Vrij, Aldert, Maria Hartwig, and Pär Anders Granhag. 2019. Reading Lies: Nonverbal Communication and Deception. Annual Review of Psychology 70: 295–317. [Google Scholar] [CrossRef] [PubMed]
  49. Vrij, Aldert, Nicola Palena, Sharon Leal, and Letizia Caso. 2021b. The Relationship between Complications, Common Knowledge Details and Self-Handicapping Strategies and Veracity: A Meta-Analysis. The European Journal of Psychology Applied to Legal Context 13: 55–77. [Google Scholar] [CrossRef]
  50. Vrij, Aldert, Pär Anders Granhag, and Samantha Mann. 2010. Good Liars. The Journal of Psychiatry & Law 38: 77–98. [Google Scholar] [CrossRef]
  51. Vrij, Aldert, Ronald P. Fisher, and Sharon Leal. 2023. How researchers can make verbal lie detection more attractive for practitioners. Psychiatry, Psychology and Law 30: 383–96. [Google Scholar] [CrossRef]
  52. Vrij, Aldert, Sharon Leal, Samantha Mann, Gary Dalton, Eunkyung Jo, Alla Shaboltas, Maria Khaleeva, Juliana Granskaya, and Kate Houston. 2017. Using the model statement to elicit information and cues to deceit in interpreter-based interviews. Acta Psychologica 177: 44–53. [Google Scholar] [CrossRef]
  53. Vrij, Aldert, Sharon Leal, Samantha Mann, Ronald P. Fisher, Gary Dalton, Eunkyung Jo, Alla Shaboltas, Maria Khaleeva, Juliana Granskaya, and Kate Houston. 2018. Using unexpected questions to elicit information and cues to deceit in interpreter-based interviews. Applied Cognitive Psychology 32: 94–104. [Google Scholar] [CrossRef]
Figure 1. Patterns of scores for the two clusters.
Table 1. Estimated marginal means and standard errors for specific details according to statement content (complete dataset).
Est. Marginal Means (SE) | Event | Opinion
Sensory | 5.92 (0.15) | 4.14 (0.21)
Spatial | 3.73 (0.18) | 1.14 (0.05)
Temporal | 3.18 (0.15) | 1.12 (0.03)
Affective | 5.06 (0.16) | 4.20 (0.14)
Verifiable | 2.10 (0.13) | 1.32 (0.08)
Complications | 1.54 (0.12) | 2.80 (0.13)
Self-handicapping | 2.89 (0.19) | 1.57 (0.12)
Common knowledge | 1.48 (0.10) | 3.51 (0.12)
Table 2. Descriptives for specific details split by the independent variables (complete dataset).
Content | Veracity | Cluster Membership | Outcome Variable | M (SD)
Event | Truth tellers | 1 | Sensory | 5.47 (1.65)
Spatial | 3.21 (1.62)
Temporal | 2.79 (1.23)
Affective | 5.00 (1.60)
Verifiable | 2.11 (1.45)
Complications | 1.16 (0.50)
Self-handicapping | 2.79 (1.40)
Common knowledge | 1.26 (0.56)
2 | Sensory | 6.14 (1.06)
Spatial | 3.81 (1.66)
Temporal | 3.48 (1.63)
Affective | 5.57 (1.25)
Verifiable | 2.24 (1.09)
Complications | 1.62 (0.92)
Self-handicapping | 2.71 (1.82)
Common knowledge | 1.57 (1.12)
Liars | 1 | Sensory | 6.18 (1.07)
Spatial | 3.94 (1.39)
Temporal | 3.18 (1.01)
Affective | 5.00 (1.58)
Verifiable | 2.29 (1.40)
Complications | 1.76 (1.15)
Self-handicapping | 3.18 (1.51)
Common knowledge | 1.29 (0.85)
2 | Sensory | 5.87 (1.42)
Spatial | 3.96 (1.58)
Temporal | 3.26 (1.18)
Affective | 4.65 (1.34)
Verifiable | 1.78 (0.80)
Complications | 1.61 (1.44)
Self-handicapping | 2.87 (1.87)
Common knowledge | 1.78 (1.00)
Opinion | Truth tellers | 1 | Sensory | 4.53 (2.06)
Spatial | 1.21 (0.42)
Temporal | 1.26 (0.45)
Affective | 4.47 (1.43)
Verifiable | 1.37 (0.46)
Complications | 2.84 (1.01)
Self-handicapping | 1.84 (1.34)
Common knowledge | 3.26 (1.19)
2 | Sensory | 4.05 (1.86)
Spatial | 1.00 (0.00)
Temporal | 1.10 (0.30)
Affective | 4.10 (1.22)
Verifiable | 1.29 (0.78)
Complications | 3.05 (1.07)
Self-handicapping | 1.62 (0.92)
Common knowledge | 3.81 (0.87)
Liars | 1 | Sensory | 4.06 (2.08)
Spatial | 1.18 (0.53)
Temporal | 1.12 (0.33)
Affective | 3.88 (1.17)
Verifiable | 1.24 (0.56)
Complications | 2.53 (1.12)
Self-handicapping | 1.41 (0.94)
Common knowledge | 3.41 (1.06)
2 | Sensory | 3.91 (1.50)
Spatial | 1.17 (0.49)
Temporal | 1.00 (0.00)
Affective | 4.35 (1.11)
Verifiable | 1.39 (0.58)
Complications | 2.78 (1.24)
Self-handicapping | 1.39 (0.99)
Common knowledge | 3.57 (1.12)
Table 3. Descriptives for global impression cues split by the independent variables (complete dataset).
Content | Veracity | Cluster Membership | Outcome Variable | M (SD)
Event | Truth tellers | 1 | Clarity | 5.68 (1.20)
Realism | 6.05 (0.91)
Plausibility | 6.16 (0.90)
Immediacy | 5.84 (1.01)
Detail | 5.21 (1.62)
Vagueness | 2.53 (1.68)
2 | Clarity | 5.76 (0.94)
Realism | 6.00 (0.71)
Plausibility | 6.00 (0.71)
Immediacy | 5.90 (0.89)
Detail | 5.38 (1.20)
Vagueness | 2.05 (1.07)
Liars | 1 | Clarity | 6.06 (0.90)
Realism | 6.06 (0.90)
Plausibility | 6.12 (0.86)
Immediacy | 6.18 (0.81)
Detail | 5.82 (1.07)
Vagueness | 1.82 (0.88)
2 | Clarity | 5.70 (1.02)
Realism | 5.74 (0.96)
Plausibility | 5.70 (0.97)
Immediacy | 5.74 (0.86)
Detail | 5.22 (1.41)
Vagueness | 2.22 (1.38)
Opinion | Truth tellers | 1 | Clarity | 5.95 (1.18)
Realism | 5.89 (1.37)
Plausibility | 5.89 (1.38)
Immediacy | 5.89 (1.37)
Detail | 5.74 (1.33)
Vagueness | 1.68 (1.42)
2 | Clarity | 5.57 (1.25)
Realism | 5.67 (1.24)
Plausibility | 5.76 (1.22)
Immediacy | 5.62 (1.28)
Detail | 5.24 (1.34)
Vagueness | 1.57 (0.81)
Liars | 1 | Clarity | 5.71 (0.69)
Realism | 5.65 (0.70)
Plausibility | 5.65 (0.70)
Immediacy | 5.65 (0.70)
Detail | 5.29 (0.85)
Vagueness | 2.00 (1.22)
2 | Clarity | 5.65 (0.98)
Realism | 5.57 (1.16)
Plausibility | 5.52 (1.16)
Immediacy | 5.48 (1.08)
Detail | 5.57 (0.99)
Vagueness | 1.78 (1.09)
Table 4. Estimated marginal means and standard errors for specific details according to statement content (outliers excluded).

| Est. Marginal Means (SE) | Event | Opinion |
|---|---|---|
| Sensory | 5.71 (0.19) | 3.95 (0.25) |
| Spatial | 3.46 (0.24) | 1.00 (0.00) |
| Temporal | 3.15 (0.22) | 1.00 (0.00) |
| Affective | 4.61 (0.19) | 3.91 (0.15) |
| Verifiable | 1.96 (0.17) | 1.00 (0.00) |
| Complications | 1.45 (0.12) | 2.65 (0.17) |
| Self-handicapping | 2.58 (0.23) | 1.40 (0.13) |
| Common knowledge | 1.54 (0.16) | 3.56 (0.17) |
Table 5. Descriptives for specific details split by the independent variables (outliers excluded).

| Content | Veracity | Cluster Membership | Outcome Variable | M (SD) |
|---|---|---|---|---|
| Event | Truth tellers | 1 | Sensory | 5.00 (1.70) |
| | | | Spatial | 2.30 (1.16) |
| | | | Temporal | 2.70 (1.49) |
| | | | Affective | 4.40 (1.36) |
| | | | Verifiable | 2.10 (1.37) |
| | | | Complications | 1.10 (0.32) |
| | | | Self-handicapping | 2.30 (1.06) |
| | | | Common knowledge | 1.20 (0.63) |
| | | 2 | Sensory | 5.92 (1.12) |
| | | | Spatial | 3.77 (1.96) |
| | | | Temporal | 3.38 (1.98) |
| | | | Affective | 5.62 (1.19) |
| | | | Verifiable | 2.15 (1.28) |
| | | | Complications | 1.46 (0.78) |
| | | | Self-handicapping | 2.31 (1.49) |
| | | | Common knowledge | 1.69 (1.32) |
| | Liars | 1 | Sensory | 6.00 (1.12) |
| | | | Spatial | 3.78 (1.20) |
| | | | Temporal | 3.11 (0.60) |
| | | | Affective | 4.33 (1.50) |
| | | | Verifiable | 2.00 (0.87) |
| | | | Complications | 2.00 (1.22) |
| | | | Self-handicapping | 3.22 (1.56) |
| | | | Common knowledge | 1.33 (1.00) |
| | | 2 | Sensory | 5.92 (1.00) |
| | | | Spatial | 4.00 (1.54) |
| | | | Temporal | 3.42 (1.08) |
| | | | Affective | 4.08 (1.16) |
| | | | Verifiable | 1.58 (0.79) |
| | | | Complications | 1.25 (0.62) |
| | | | Self-handicapping | 2.50 (1.73) |
| | | | Common knowledge | 1.92 (1.08) |
| Opinion | Truth tellers | 1 | Sensory | 4.20 (1.55) |
| | | | Spatial | 1.00 (0.00) |
| | | | Temporal | 1.00 (0.00) |
| | | | Affective | 4.10 (0.99) |
| | | | Verifiable | 1.00 (0.00) |
| | | | Complications | 2.60 (0.97) |
| | | | Self-handicapping | 1.40 (0.70) |
| | | | Common knowledge | 3.50 (1.51) |
| | | 2 | Sensory | 3.92 (1.93) |
| | | | Spatial | 1.00 (0.00) |
| | | | Temporal | 1.00 (0.00) |
| | | | Affective | 3.85 (0.99) |
| | | | Verifiable | 1.00 (0.00) |
| | | | Complications | 2.77 (1.09) |
| | | | Self-handicapping | 1.54 (0.88) |
| | | | Common knowledge | 3.92 (0.86) |
| | Liars | 1 | Sensory | 3.78 (1.86) |
| | | | Spatial | 1.00 (0.00) |
| | | | Temporal | 1.00 (0.00) |
| | | | Affective | 3.44 (0.88) |
| | | | Verifiable | 1.00 (0.00) |
| | | | Complications | 2.22 (1.30) |
| | | | Self-handicapping | 1.33 (1.00) |
| | | | Common knowledge | 3.22 (0.97) |
| | | 2 | Sensory | 3.92 (1.08) |
| | | | Spatial | 1.00 (0.00) |
| | | | Temporal | 1.00 (0.00) |
| | | | Affective | 4.25 (1.06) |
| | | | Verifiable | 1.00 (0.00) |
| | | | Complications | 3.00 (1.13) |
| | | | Self-handicapping | 1.33 (0.78) |
| | | | Common knowledge | 3.58 (1.08) |
Table 6. Descriptives for global impression cues split by the independent variables (outliers excluded).

| Content | Veracity | Cluster Membership | Outcome Variable | M (SD) |
|---|---|---|---|---|
| Event | Truth tellers | 1 | Clarity | 5.20 (1.23) |
| | | | Realism | 5.80 (0.79) |
| | | | Plausibility | 6.00 (0.82) |
| | | | Immediacy | 5.50 (0.97) |
| | | | Detail | 4.60 (1.84) |
| | | | Vagueness | 3.20 (1.99) |
| | | 2 | Clarity | 5.54 (0.97) |
| | | | Realism | 5.77 (0.73) |
| | | | Plausibility | 5.77 (0.73) |
| | | | Immediacy | 5.62 (0.96) |
| | | | Detail | 5.00 (1.22) |
| | | | Vagueness | 2.46 (1.13) |
| | Liars | 1 | Clarity | 5.89 (0.93) |
| | | | Realism | 5.89 (0.93) |
| | | | Plausibility | 6.00 (0.87) |
| | | | Immediacy | 6.11 (0.78) |
| | | | Detail | 5.67 (1.12) |
| | | | Vagueness | 2.11 (0.93) |
| | | 2 | Clarity | 5.58 (1.08) |
| | | | Realism | 5.75 (0.75) |
| | | | Plausibility | 5.67 (0.78) |
| | | | Immediacy | 5.67 (0.78) |
| | | | Detail | 5.08 (1.38) |
| | | | Vagueness | 2.33 (1.50) |
| Opinion | Truth tellers | 1 | Clarity | 5.90 (0.74) |
| | | | Realism | 5.90 (0.74) |
| | | | Plausibility | 5.90 (0.74) |
| | | | Immediacy | 5.90 (0.74) |
| | | | Detail | 5.60 (0.52) |
| | | | Vagueness | 1.70 (0.67) |
| | | 2 | Clarity | 5.62 (0.65) |
| | | | Realism | 5.77 (0.60) |
| | | | Plausibility | 5.85 (0.55) |
| | | | Immediacy | 5.62 (0.77) |
| | | | Detail | 5.08 (0.86) |
| | | | Vagueness | 1.85 (0.90) |
| | Liars | 1 | Clarity | 5.56 (0.53) |
| | | | Realism | 5.44 (0.53) |
| | | | Plausibility | 5.44 (0.53) |
| | | | Immediacy | 5.44 (0.53) |
| | | | Detail | 5.22 (0.44) |
| | | | Vagueness | 1.78 (0.44) |
| | | 2 | Clarity | 5.67 (0.49) |
| | | | Realism | 5.58 (0.51) |
| | | | Plausibility | 5.58 (0.51) |
| | | | Immediacy | 5.42 (0.67) |
| | | | Detail | 5.50 (0.52) |
| | | | Vagueness | 1.75 (0.62) |
Table 7. Descriptives for the variable “micro-expressions” split by the independent variables.

| Content | Veracity | Cluster Membership | M (SD) |
|---|---|---|---|
| Event | Truth tellers | 1 | 1.00 (0.00) |
| | | 2 | 1.10 (0.30) |
| | Liars | 1 | 1.18 (0.39) |
| | | 2 | 1.17 (0.39) |
| Opinion | Truth tellers | 1 | 1.26 (0.56) |
| | | 2 | 1.14 (0.36) |
| | Liars | 1 | 1.12 (0.33) |
| | | 2 | 1.04 (0.21) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Palena, N.; De Napoli, F. Beware, Not Everyone Lies the Same Way! Investigating the Effects of Interviewees’ Profiles and Lie Content on Verbal Cues. Soc. Sci. 2024, 13, 85. https://doi.org/10.3390/socsci13020085

