
Verbal Lie Detection: Its Past, Present and Future

Department of Psychology, University of Portsmouth, Portsmouth PO1 2DY, UK
Department of Psychology, University of Gothenburg, 405 30 Gothenburg, Sweden
Department of Criminology, Bar Ilan University, Ramat Gan 5290002, Israel
Department of Criminology, Ashkelon Academic College, Ashkelon 78211, Israel
School of Psychology, Brain Research and Imaging Centre, University of Plymouth, Plymouth PL4 8AA, UK
Department of Psychology, Florida International University, Miami, FL 33199, USA
Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(12), 1644;
Submission received: 1 November 2022 / Revised: 22 November 2022 / Accepted: 27 November 2022 / Published: 1 December 2022
(This article belongs to the Special Issue Cognitive Approaches to Deception Research)


This article provides an overview of verbal lie detection research. This type of research began in the 1970s with an examination of the relationship between deception and specific words. We briefly review this initial research. In the late 1980s, Criteria-Based Content Analysis (CBCA) emerged, a veracity assessment tool containing a list of verbal criteria. This was followed by Reality Monitoring (RM) and Scientific Content Analysis (SCAN), two other veracity assessment tools that contain lists of verbal criteria. We discuss their contents, theoretical rationales, and ability to identify truths and lies. We also discuss similarities and differences between CBCA, RM, and SCAN. In the mid-2000s, ‘Interviewing to detect deception’ emerged, with the goal of developing specific interview protocols aimed at enhancing or eliciting verbal veracity cues. We outline the four most widely researched interview protocols to date: the Strategic Use of Evidence (SUE), Verifiability Approach (VA), Cognitive Credibility Assessment (CCA), and Reality Interviewing (RI). We briefly discuss the workings of these protocols, their theoretical rationales and empirical support, as well as the similarities and differences between them. We conclude this article by elaborating on how neuroscientists can inform and improve verbal lie detection.

1. Introduction

The four most common ways to detect deception are via observing nonverbal behaviour, analysing speech content, measuring physiological responses, or measuring brain activity. In terms of research, physiological lie detection came first in the 1930s [1], followed by nonverbal lie detection in the 1950s [2], verbal lie detection in the 1970s [3], and brain activity lie detection in the 1980s [4,5,6].
This article focusses on verbal lie detection, which we think is particularly valuable. Lie detection through measuring physiological reactions or brain activity typically requires hooking interviewees up to machines, making these methods relatively intrusive and difficult to use. When considering verbal and nonverbal lie detection, only verbal lie detection may lead to conclusive evidence that can be used in courts (e.g., contradicting facts), and investigators elicit longer statements from interviewees when applying verbal rather than nonverbal lie detection [7] because longer statements are required for verbal lie detection [8]. This information-eliciting aspect of verbal lie detection is beneficial because eliciting information is an important aim of interviewing [9].

2. Initial Verbal Lie Detection Research

Initial verbal lie detection research started in the late 1970s [3,10] and focused on individual words such as (i) self-references (‘I’ or ‘We’), (ii) negative emotions (‘hate’, ‘worried’, ‘nervous’ or ‘sad’), (iii) generalising terms (‘always’, ‘never’), (iv) lexical diversity (percentage of distinct words divided by total number of words), and (v) exclusive words (‘without’, ‘except’ or ‘but’) [11,12]. This type of lie detection is still popular because such words can be detected automatically with computer programs [11]. A meta-analysis [11] showed that lie tellers used more generalising terms than truth tellers (gu’s = −0.37), but this was based on only four studies, which means it should be interpreted with caution. Exclusive words were also related to deception and examined enough times (n = 20) to obtain reliable results: lie tellers used more exclusive words than truth tellers (gu = 0.31). The other three cues (self-references, negative emotions, and lexical diversity) were examined at least 22 times but none of them were related to deception (gu’s ranged from −0.18 to 0.14). In other words, verbal lie detection research examining individual words revealed some veracity indicators.
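Word-level cues such as these lend themselves to automatic counting, which is one reason this approach has remained popular. The sketch below shows how exclusive words, generalising terms, and lexical diversity might be computed for a single statement; the word lists are illustrative fragments of ours, not the validated dictionaries (such as LIWC) that the cited studies rely on.

```python
import re

# Illustrative (not exhaustive) word lists; real studies use validated
# dictionaries such as LIWC rather than these hand-picked examples.
EXCLUSIVE_WORDS = {"without", "except", "but"}
GENERALISING_TERMS = {"always", "never"}

def word_cues(statement: str) -> dict:
    """Count simple word-level cues in a statement."""
    words = re.findall(r"[a-z']+", statement.lower())
    total = len(words)
    return {
        "total_words": total,
        "exclusive_words": sum(w in EXCLUSIVE_WORDS for w in words),
        "generalising_terms": sum(w in GENERALISING_TERMS for w in words),
        # lexical diversity: distinct words divided by total words
        "lexical_diversity": len(set(words)) / total if total else 0.0,
    }

print(word_cues("I always lock the door but I never check it"))
```

Research designs would aggregate such per-statement counts across groups of truth tellers and lie tellers before computing effect sizes.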

3. From the 1980s Onwards: Clusters of Verbal Cues

From the 1950s onwards, scholars from Sweden [13] and Germany [14,15] examined verbal criteria thought to be related to the credibility of child witnesses’ testimonies in trials for sexual offences. It is often difficult to determine the facts in an alleged sexual abuse case. Frequently, the alleged victim and the defendant give contradictory testimony and there is no independent evidence available to determine an objective version of events. This makes the perceived credibility of the alleged victim and defendant important. The alleged victim is in a disadvantageous position if s/he is a child, because adults tend to mistrust children’s statements [16].
The German scholars Günter Köhnken and Max Steller refined the available criteria and integrated them into a formal list of 19 verbal criteria, which they published in English in the late 1980s [17,18]. In a procedure called Criteria-Based Content Analysis (CBCA), experts assess the presence of each of the 19 criteria in a written statement. The 19 CBCA criteria are all cues to truthfulness and are thought to be more frequently present in truthful than deceptive statements [19]. CBCA is part of a wider veracity assessment procedure called Statement Validity Assessment (SVA) that consists of four stages: (i) a case-file analysis to gain insight into the case; (ii) a semi-structured interview to obtain a statement from the interviewee; (iii) CBCA that systematically assesses the quality of a statement; and (iv) an evaluation of the CBCA outcome via a set of questions (Validity Checklist) [20,21,22].
The first 13 CBCA criteria are cognitive criteria. According to the CBCA rationale, these criteria are more likely to occur in truthful than in deceptive statements because lie tellers typically find it too difficult to fabricate them [23]. The cognitive criteria include reporting information in non-chronological order (Criterion 2, unstructured production); the total amount of information reported (Criterion 3, total details); mentioning where (location) and when (time) events took place (Criterion 4, contextual embeddings); and reporting experiences that were unexpected (Criterion 7, unexpected complications). Lie tellers are less likely than truth tellers to report the remaining criteria for motivational reasons: lie tellers believe that including these criteria in their statement makes them sound suspicious [23]. An example of such a suspicious-sounding criterion is making corrections without being prompted by the interviewer (Criterion 14, Spontaneous corrections).
Different ways are used to code the presence of CBCA criteria. Sometimes 3-point Likert scales are used (a criterion is never, occasionally, or often present); sometimes 7-point Likert scales are used (a criterion is [1] not at all present to [7] very often present); and sometimes the number of times each criterion occurred (frequency of occurrence) is counted [12]. Whether using different coding schemes affects the diagnostic value of CBCA has, to our knowledge, never been examined.
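Whichever coding scheme is used, research practice combines the per-criterion codes into a single total score with equal weight per criterion (as noted later in this article, real-life practice often weights criteria unequally). A minimal sketch of this summation, with hypothetical codes of our own invention, might look as follows:

```python
# Minimal sketch (our illustration): summing per-criterion codes into a
# total CBCA score, giving each criterion equal weight as in research use.
# The codes may come from a 3-point or 7-point Likert scale or from
# frequency-of-occurrence counts, depending on the coding scheme adopted.
def total_cbca_score(codes: dict) -> int:
    """codes maps criterion number (1-19) to that criterion's code."""
    assert all(1 <= criterion <= 19 for criterion in codes)
    return sum(codes.values())

# Hypothetical 7-point Likert codes for five criteria of one statement
codes = {2: 5, 3: 7, 4: 6, 7: 1, 14: 2}
print(total_cbca_score(codes))  # 21
```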
CBCA has been widely researched. One meta-analysis [24] included 39 studies analysing adult statements and another meta-analysis [25] included 22 studies analysing child statements. Truth tellers obtained higher total CBCA scores (summation of the individual criteria) than lie tellers in adults (d = 0.55) and in children (d = 0.78). The total CBCA scores discriminated truth tellers and lie tellers better than each of the individual criteria in both adults and children. The finding that examining clusters of cues yields better lie detection results than examining individual cues is a typical finding in deception research [26,27]. In both adults’ and children’s statements, the cognitive criteria were better veracity indicators (average d-scores were d = 0.29 in adults and d = 0.42 in children) than the motivational criteria (average d-scores were d = 0.13 in adults and d = 0.19 in children).
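The d-scores reported above are Cohen's d effect sizes: standardised differences between the truth tellers' and lie tellers' mean scores. As a reference for interpreting them, a pooled-standard-deviation version of Cohen's d can be computed as follows; the group scores here are hypothetical and for illustration only.

```python
from statistics import mean, stdev

def cohens_d(truth_scores, lie_scores):
    """Cohen's d with a pooled sample standard deviation (illustrative)."""
    n1, n2 = len(truth_scores), len(lie_scores)
    s1, s2 = stdev(truth_scores), stdev(lie_scores)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(truth_scores) - mean(lie_scores)) / pooled

# Hypothetical total CBCA scores for two small groups
truths = [14, 16, 15, 17, 18]
lies = [12, 13, 14, 12, 15]
print(round(cohens_d(truths, lies), 2))  # 1.93
```

A positive d thus indicates that truth tellers scored higher than lie tellers, with d = 0.55 (the adult total-score result) corresponding to a medium-sized difference by conventional benchmarks.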
CBCA has become a game-changer in verbal lie detection in several ways. First, CBCA experts examine a cluster of cues rather than the earlier practice of examining individual cues. Second, CBCA is a standardised veracity assessment method, unlike the unstandardised verbal (or nonverbal) lie detection methods that were used before. (After the introduction of CBCA, a standardised protocol to analyse nonverbal behaviours emerged: the Behavior Analysis Interview (BAI). Although the initial results were promising [28], they have not been replicated since [29,30,31].) Standardisation means that CBCA experts always examine the same 19 verbal cues in every single statement they analyse. In contrast, until the arrival of CBCA, verbal and nonverbal lie detection was carried out in an ad hoc manner: experts did not have fixed lists of cues they examined, nor could they tell beforehand which cues they would attend to. A prime example of this is the leakage hypothesis, popular in nonverbal lie detection [32,33,34]. Lie tellers are supposed to leak nonverbal cues to deception, but what these leaking nonverbal cues are remains undefined. The (unspecified) cues could literally be any behaviour. Experts did not predict beforehand which cues may reveal leakage, and when analysing footage, different experts may highlight different cues.
A second verbal veracity assessment tool based on a cluster of verbal criteria emerged in the early 1990s: Reality Monitoring (RM) [35,36,37]. The RM lie detection tool is based on the work of Marcia Johnson and Carol Raye, who in 1981 published their seminal paper about memory characteristics [38] (see also [39,40]). They argued that memories of real experiences differ from memories of imagined events. Memories of real experiences are obtained through perceptual processes. Memories of real experiences are therefore likely to contain sensory information—details of vision, sound, smell, taste, or touch. These experiences took place in space and time and therefore contain contextual information, which consists of spatial details (details about where the event took place but also where objects and people were situated in relation to each other) and temporal details (details about the time order of events and the duration of events). In contrast, memories of imagined events are derived from an internal source and are therefore likely to contain cognitive operations, such as thoughts and reasoning (‘I must have had my coat on, as it was very cold that night’).
In RM deception research, it is assumed that truths are recollections of experienced events whereas lies are recollections of imagined (simulated) events [41,42,43]. RM has also been widely researched and the most recent meta-analysis included 40 studies [44]. Truth tellers obtained a higher RM score (total score of the individual criteria) than lie tellers (d = 0.55). Regarding the individual criteria, their relationship with deception was always weaker than the relationship between deception and the total RM score. These relationships ranged between d = 0.25 for spatial information to d = 0.51 for temporal information.
A final verbal veracity assessment tool that examines a cluster of verbal criteria is Scientific Content Analysis or SCAN [45]. A definitive list of criteria does not exist, and different SCAN users use different combinations of about 12 criteria [46,47,48]. They include cues of truthfulness (cues that truth tellers are thought to report more than lie tellers) and cues of deceit (cues that lie tellers are thought to report more than truth tellers). Examples of cues of truthfulness are denials of allegations and the use of pronouns (‘I’, ‘my’, ‘we’, ‘us’). Examples of cues of deceit are text bridges which imply missing information (words such as ‘finally’, ‘later on’, and ‘sometimes after’) and change in language (describing in a single statement a car as ‘the car’, ‘the vehicle’, and ‘the dark coloured car’).
No theoretical rationale is given for why truth tellers and lie tellers are expected to differ on the SCAN criteria, and little research has been carried out on SCAN. The developer, Avinoam Sapir, never reported any formal empirical test of the tool, but SCAN supporters often refer to a study by Driscoll [47] which showed very good results (about 85% total accuracy). This was a field study in which the statements of real criminal suspects were analysed. The problem was that the ground truth was mostly unknown; that is, it was not known which suspects were telling the truth and which suspects were lying [12]. Experimental laboratory research into SCAN in which the ground truth is known has not shown any evidence that SCAN works as a lie detection tool [46,49,50].

4. Comparing CBCA, RM and SCAN

Table 1 summarises the aspects of the CBCA, RM, and SCAN tools that we believe are important when comparing the three tools. All three use a cluster of variables to assess veracity, but only CBCA and RM do this in a standardised way. That is, in evaluating each statement, different CBCA and RM experts use the same list of criteria and use the same procedure in assessing veracity based on these criteria (by creating total CBCA and RM scores). In contrast, SCAN is not standardised. An official list of SCAN criteria does not exist, although there seems to be agreement amongst SCAN users about which 12 criteria to use. No standard procedure exists, however, for how to assess veracity based on these 12 criteria. SCAN experts do not create a total score but highlight the criteria they believe are indicative of truth or lie in each statement. The criteria they highlight depend on the statement and on the SCAN expert [51]. Unsurprisingly, when assessing statements, inter-rater agreement among SCAN experts is typically low [48].
Total CBCA and RM scores appear to be diagnostic veracity indicators. When comparing these two methods—which is only possible in adult statements because of the absence of RM data in child statements—they appear to be equally diagnostic. Furthermore, CBCA and RM have been examined across many labs, which contributes to the robustness of the results. The CBCA meta-analysis with child statements included more than 15 different labs, whereas the CBCA meta-analysis with adult statements and the RM meta-analysis each included more than 20 different labs. SCAN does not appear to be a valid veracity assessment tool. Although SCAN is not widely researched, SCAN research has been carried out in several labs, predominantly by Bogaard et al. [46,49,52] but also by others [47,48,50,53].
Almost all psychology theory and research originated in Western, educated, industrialised, rich, and democratic (WEIRD) nations and lie detection is not an exception. Verbal deception research in non-WEIRD countries is scarce but relevant because cross-cultural differences in verbal cues to deceit exist [54,55,56]. We are not aware of CBCA and RM research carried out in non-WEIRD countries and the CBCA and RM meta-analyses published to date do not make a WEIRD vs non-WEIRD comparison. However, it has been suggested that some CBCA criteria may be sensitive to cultural differences [57].
Practitioners are more likely to use SCAN than either CBCA or RM. Practitioners across the world use this tool, including law enforcement personnel, military agencies, and secret services [50]. RM, to our knowledge, is not used by practitioners but is popular amongst researchers. SVA assessments (of which CBCA is part) are made by legal psychologists in alleged sexual abuse cases in several Western European countries and in Canada and the SVA verdicts are accepted as evidence in criminal courts in those countries [12,58].
SVA assessments are not identical to CBCA assessments [20]. The problem is that CBCA scores are affected by factors other than veracity. To give two examples: CBCA scores are affected by the age of the interviewee (older children provide higher quality statements than younger children and thus obtain higher CBCA scores) [59,60] and by the quality of the interview (a well-conducted interview elicits more information than a poorly conducted interview and thus results in higher CBCA scores) [61,62].
In experimental CBCA research, groups of truth tellers and lie tellers are compared and these groups are similar except for their veracity status. In other words, the external factors that could influence CBCA scores are controlled for. In real life, individual cases are examined, and CBCA experts need to take the impact of external factors into account, such as age and quality of the interview, when making veracity assessments. This is the purpose of the Validity Checklist procedure, the fourth phase of SVA. The Validity Checklist procedure is more subjective and less formalised than the CBCA procedure [18,19]. When examining the use of the Validity Checklist in Sweden, different experts sometimes drew different conclusions about the impact of Validity Checklist factors on children’s statements [63]. If two experts disagree about the veracity of a statement in a German criminal case, the most likely reason for the disagreement is that they have a different view about the impact of the Validity Checklist factors on that statement (Vrij [12] citing personal communication with Köhnken in 1997). In addition, in CBCA research, equal weight is given to each criterion when calculating a total CBCA score, whereas this is often not the case in real life [20]. In other words, although researchers treat CBCA as a quantitative method, this is not the case in real-life practice [20].
Both CBCA and SCAN training courses are multiple-day courses. CBCA is more difficult to learn and to use than SCAN because CBCA is part of the wider SVA procedure [12,58]. To make someone familiar with the entire SVA procedure, a 3-week training course has been recommended [23].
Researchers are more likely to examine RM than either CBCA or SCAN. At least two factors contribute to this. First, RM coding is relatively easy to learn and use [12,43]. That is, trainees feel comfortable about how to conduct RM coding after a few hours of training. Second, most research deals with interviewing lie tellers and truth tellers about alleged experiences. Those experiences are derived from sensory details (vision, sound, smell, taste, and touch) and since the experiences happened in time and space, they also include contextual details (time and space). The RM list of criteria includes these five sensory categories and two contextual details categories, which means that all possible details that describe experiences are covered by RM coding. In other words, RM gives a complete picture of what interviewees recall. The CBCA and (generally used) SCAN criteria include the two contextual details criteria, but the other criteria are not categorised based on type of sensory information. Instead, they are related to specific types of content. All sensory information that does not match these content criteria will be omitted in CBCA and SCAN coding. In other words, CBCA and SCAN do not give a complete picture of what an interviewee recalls.
According to the CBCA rationale, lie tellers provide less information than truth tellers because lie tellers are unable to fabricate all sorts of detail (the so-called cognitive criteria). This inability to fabricate details is still mentioned by deception researchers today. However, researchers now also mention an additional reason: lie tellers are unwilling to report information because they wish to keep their stories simple, as we explain in the next ‘Interviewing to detect deception’ section.
Note that an inability to report details and an unwillingness to do so lead to the same outcome: lie tellers report fewer details than truth tellers. It is difficult to determine what causes lie tellers to provide fewer details than truth tellers: their inability to report information, their unwillingness to do so, or a mixture of the two. Research has revealed that unwillingness to report information on its own can result in lie tellers reporting fewer details than truth tellers. In one experiment, truth tellers and lie tellers were sent on the same mission and were asked to report their experiences in a subsequent interview [64]. Lie tellers were instructed to lie about the reason for carrying out the mission, not about what they did. In the interview, no questions were asked about why the interviewees carried out their mission; they were just asked to tell what they did. Lie tellers could thus report their missions truthfully and did not have to fabricate any details. Yet they reported fewer details than truth tellers. This cannot be explained by the lie tellers’ inability to fabricate details. In another experiment, truth tellers and lie tellers carried out the same mission [65]. Truth tellers were instructed to report all details truthfully; lie tellers were instructed to report most details truthfully, except one aspect of the mission, which they were instructed to omit (omission lie). Lie tellers reported fewer details than truth tellers when comparing the parts of the mission that both truth tellers and lie tellers reported truthfully. This finding also cannot be explained by the lie tellers’ inability to fabricate details, but there was evidence that it was associated with the lie tellers’ unwillingness to report information. That is, lie tellers more than truth tellers reported keeping their stories simple, and this strategy was negatively associated with the number of details provided.
The RM rationale is that truth tellers report more sensory and contextual details than lie tellers because truth tellers have experienced the event, whereas lie tellers have only imagined the event. This seems to refer (implicitly) to the lie tellers’ inability to report information—they have not experienced sensory and contextual details, so they do not report them—rather than to their unwillingness to do so. However, instead of fabricating a story (result of imagination), lie tellers more often tell embedded lies in which large parts are true [66]. Lie tellers also report fewer sensory and contextual details than truth tellers in embedded lies, which is difficult to explain with the RM rationale. We believe that the merit of RM lies in the quality of detail it examines (sensory and contextual details) as researchers still widely examine such details to date.
We do not know the rationale behind SCAN because its users have never provided one. Since the SCAN instrument does not seem to differentiate truth tellers from lie tellers, we do not find it necessary to speculate on a possible rationale.
The RM and SCAN tools do not include interview protocols. SCAN takes this to the extreme. Interviewees are instructed to write down their activities during a specific period in as much detail as possible. Interviewers are not present when the interviewees write down their experiences, so that they cannot influence the statement. The intentions behind SCAN are good, because poor interviewing can impair the ability to detect deceit [7,8,67]. However, research has shown that verbal differences between truth tellers and lie tellers in unprompted recalls are faint and unreliable [68] and that the opportunity to distinguish between truth tellers and lie tellers in unprompted recalls is just above chance level [69]. The limitation of the SCAN procedure is that it ignores that truth tellers do not recall all information they know in an unprompted recall [70]. In addition, practitioners find obtaining written statements problematic because written statements are not common in police interviews [51]. CBCA is part of SVA, which includes an interview protocol (Phase 2). However, this interview protocol is not tailored to lie detection. It advises interviewing according to good-practice principles but does not discuss which specific questions or instructions are required to enhance the ability to detect deceit.
Inspired by a meta-analysis [68] showing that verbal cues to deceit in unprompted recalls are faint and unreliable, verbal lie detection researchers started to examine whether verbal cues could be elicited, or whether already existing verbal differences could be further enhanced, using specific interview protocols. Four interview protocols have emerged since then, and we will discuss them in the next section. They all assume that specific questions or specific instructions will lead to different verbal responses by truth tellers and lie tellers, which subsequently facilitates lie detection. The idea that the interview protocol matters in truth/lie detection is not new. It has been acknowledged in physiological lie detection from the moment it emerged in the 1930s. The debate in that research domain is heated [12,71]. It is not about the ability of the polygraph to distinguish between truth tellers and lie tellers, because all researchers involved in the debate use the same machine and examine the same physiological responses. The debate is about which interview protocol to use to obtain reliable differences between truth tellers and lie tellers [72,73]. A heated debate amongst verbal deception researchers like the one in the polygraph world is absent because the verbal interview protocols are not in competition with each other. Which protocol to use depends on the situation, as we explain in the next section.

5. Interviewing to Detect Deception

Verbal lie detection made the next step in its development in the mid-2000s when the first ‘interviewing to detect deception’ articles emerged [74,75]. Scholars started to design interview techniques aimed at enhancing or eliciting cues to truthfulness or deceit. The starting point of this line of research was that lie tellers and truth tellers enter interviews with different mental states [76]. Lie tellers have unique knowledge, which, if recognised by the interviewer, makes it obvious that they are lying. Lie tellers’ main concern is to ensure that the interviewer does not gain that knowledge. Innocent suspects face the opposite problem, fearing that the interviewer will not learn or believe what they know. These different mental states result in different strategies used by lie tellers and truth tellers in interviews to sound convincing [77,78,79].
One popular strategy used by lie tellers is to avoid providing self-incriminating information [76]. The reason for this strategy is obvious: providing incriminating information will give the lie away. A second popular strategy amongst lie tellers is to keep their stories simple [77]. They do this for several reasons. First, the more information they provide, the more likely they are to give investigators leads that give their lies away [76,80]. Second, lie tellers may expect to be interviewed repeatedly about an event, in which case they should not contradict what they have said before. Keeping their initial story simple reduces the likelihood of contradictions occurring in subsequent interviews. Third, although truth tellers believe that their innocence shines through [81,82], lie tellers often take their credibility less for granted than truth tellers [68]. Therefore, lie tellers are more inclined than truth tellers to engage in tasks other than storytelling, such as controlling their demeanour so that they appear honest to the interviewer [83] and monitoring the interviewer’s reactions to assess whether they appear to be convincing [84]. Keeping stories simple is one way to cope with the additional cognitive demand of this multi-tasking.
Truth tellers are inclined to be cooperative and forthcoming [76,77]. However, truth tellers never spontaneously report all information in the first instance [70]. They may have the wrong expectations about how much information they should report, they may lack the motivation to report all the information they can remember, or they may find it difficult to recall all relevant information stored in their memory.
Four interview protocols have been developed to date aimed at exploiting the different strategies truth tellers and lie tellers use: Cognitive Credibility Assessment [85,86,87], Reality Interviewing (RI, [88,89]), Strategic Use of Evidence (SUE, [90,91,92]), and the Verifiability Approach (VA, [93,94,95]). SUE and VA are evidence-based interview protocols: interviewers should possess independent evidence (SUE) or truth tellers must be able to report such independent evidence during the interview (VA) to use them. They focus on the ‘avoiding reporting incriminating evidence’ strategy lie tellers employ and are designed to exploit differences between lie tellers and truth tellers in being evasive or forthcoming in reporting critical evidence. They compare the statements with the (potentially) available evidence (e.g., statement-evidence inconsistency). CCA and RI are statement-quality-based interview protocols: They do not require independent evidence to be available but focus on the quality of the statement. They focus on lie tellers’ ‘keeping-stories-simple’ strategy and are designed to exploit differences between lie tellers and truth tellers in being reluctant or forthcoming when reporting information in general. They focus on the richness of details (e.g., complications).

5.1. Strategic Use of Evidence

According to the SUE rationale, lie tellers’ strategy to avoiding incriminating evidence results in using avoidance strategies (e.g., in a free recall to avoid mentioning that they were at a certain place at a certain time) or denial strategies (e.g., denying having been at a certain place at a certain time when directly asked) [76,90,96]. When investigators possess critical and possibly incriminating background information (independent evidence), they can exploit lie tellers’ avoidance and denial strategies by introducing the available evidence during the interview in a strategic manner. According to the Evidence Framing Matrix (EFM), the pieces of evidence vary in terms of strength of the source (from weak to strong) and degree of precision (from low to high). Reference [91] found that revealing the evidence in a stepwise manner moving from the most indirect form of framing (weak source/low specificity, e.g., ‘We have information telling us that you recently visited the central station’) to the most direct form of framing (strong source/high specificity, e.g., ‘We have CCTV footage showing that you collected a package from a deposit box at the central station, ground floor level, on the 24th of August at 7.30 p.m.’) to be most successful in obtaining veracity differences. When questions are asked in the most indirect form of framing, the answers of the avoidant/denying lie tellers will be less consistent with the available evidence (e.g., admitting visiting the central station but no admission yet about when and about collecting a package from the deposit box) than the answers of the forthcoming truth tellers (statement-evidence inconsistency [92]). In addition, when more direct forms of framing questions are asked, lie tellers may start to realise that the interviewer may hold incriminating evidence against them. Lie tellers are then inclined to change their statement and try to provide an innocent explanation for this evidence. 
As a result, lie tellers will show less overlap between different parts of a statement than truth tellers (within-statements inconsistency). See [96] for a recent overview of the SUE technique.
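The EFM's stepwise disclosure order can be sketched programmatically. The two dimensions (source strength, specificity) and the station example come from the text above; the data structure, ranking scheme, and exact phrasings below are our own illustration, not part of the SUE protocol itself.

```python
# Illustrative sketch of Evidence Framing Matrix (EFM) stepwise disclosure.
# Framing dimensions follow the SUE literature; phrasings are hypothetical.

EVIDENCE_FRAMINGS = [
    ("strong", "high", "We have CCTV footage showing that you collected a package "
                       "from a deposit box at the central station on 24 August at 7.30 p.m."),
    ("weak", "low", "We have information telling us that you recently visited "
                    "the central station."),
    ("strong", "low", "We have CCTV footage placing you at the central station."),
    ("weak", "high", "We have information that you were near the deposit boxes "
                     "at the central station on 24 August."),
]

def disclosure_order(framings):
    """Order framings from most indirect (weak source/low specificity)
    to most direct (strong source/high specificity), as in [91]."""
    rank = {"weak": 0, "strong": 1, "low": 0, "high": 1}
    return sorted(framings, key=lambda f: (rank[f[0]], rank[f[1]]))

for strength, specificity, phrasing in disclosure_order(EVIDENCE_FRAMINGS):
    print(f"[{strength} source / {specificity} specificity] {phrasing}")
```

The sort key simply treats each dimension as an ordered level, so the most direct framing (strong/high) is always disclosed last.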
The SUE technique has been widely researched, with over 20 publications known to us. A meta-analysis of eight SUE studies showed a very large veracity effect in statement-evidence consistency (d = 1.06), the only verbal cue examined in that meta-analysis [92].

5.2. Verifiability Approach

The idea behind the Verifiability Approach [94,95] is that lie tellers face a dilemma. The subjective belief amongst people is that deceptive statements lack detail [12]. Indeed, the richer an account is perceived to be in detail, the more likely it is to be believed [97]. Lie tellers are therefore motivated to provide many details in their attempts to come across as sincere. At the same time, however, lie tellers prefer to avoid mentioning too many details out of fear that investigators will check them, which could subsequently give the lie away [98]. A strategy that serves both goals is to provide many details but to avoid reporting details that can be verified. Verifiable details describe activities that were (i) carried out with or (ii) witnessed by named persons, (iii) recorded on CCTV or in photographs, or (iv) that left digital traces (e.g., phone calls) or physical traces (e.g., receipts).
VA has been frequently examined. A meta-analysis, including 14 studies, revealed that lie tellers indeed typically report fewer details that can be checked than truth tellers, g = 0.42 [93]. This effect becomes stronger when an Information Protocol is used: asking interviewees, where possible, to include details in their statement that the investigator can check. Truth tellers, more than lie tellers, report checkable details following such a request [93]. When an Information Protocol is employed, the difference between truth tellers and lie tellers in reporting verifiable details is large, g = 0.80.
Recently, Nisin and colleagues introduced the Context Embedded Perception (CEP) approach [99]. The core of CEP is that activities always take place at specific locations and at specific times. If lie tellers wish to avoid providing information that can be verified, they should avoid providing such contextual details. Initial research testing this CEP approach showed that the proportion of contextual details (contextual details / [contextual details + sensory details]) was indeed higher amongst truth tellers than amongst lie tellers [99].
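As a minimal illustration, the CEP proportion score can be computed from detail counts as follows. Only the formula comes from [99]; the counts used in the example are invented:

```python
def cep_proportion(contextual: int, sensory: int) -> float:
    """Proportion of contextual details used in the CEP approach:
    contextual / (contextual + sensory)."""
    total = contextual + sensory
    if total == 0:
        raise ValueError("statement contains no contextual or sensory details")
    return contextual / total

# Invented example counts: a statement that embeds activities in place and
# time (many contextual details) scores higher than one that avoids doing so.
truth_like = cep_proportion(contextual=12, sensory=8)   # 0.6
lie_like = cep_proportion(contextual=5, sensory=15)     # 0.25
```

In practice the detail counts come from trained human coders, not automated extraction; the function only captures the final ratio.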

5.3. Cognitive Credibility Assessment

One component of CCA is to help truth tellers to report as much information as possible by using specific interventions that facilitate reporting. Such so-called ‘encouraging interviewees to say more’ interventions should have a larger effect on truth tellers than on lie tellers because the truth tellers’ strategy to be forthcoming should result in more information than the lie tellers’ strategy to keep their stories simple. CCA encourages truth tellers to say more by addressing all three reasons mentioned earlier as to why truth tellers do not initially report all information they know, namely wrong expectations, lack of motivation, and inability to recall.
Raising expectations about how much information someone is expected to report can be achieved through a Model Statement, which is an example of a detailed account of an event [100]. Researchers typically present it in an audiotaped format, but other formats are equally effective [101]. Exposing interviewees to a Model Statement leads to more information than an instruction to ‘provide as many details as possible’, probably because the Model Statement is a concrete example, which is easier to follow than (abstract) instructions [102].
Raising motivation to provide information can be achieved through a supportive second interviewer: someone who nods and smiles during the interview. A supportive interviewer makes the interview setting more comfortable, which facilitates truth tellers' reporting [103].
Facilitating recall of (visual) experiences can be achieved by asking interviewees to sketch whilst they narrate [104,105]. Sketching is a visual output, which matches visual memory better than a verbal output does. Sketching also slows down the thinking process, which gives interviewees more time to think about the event.
Apart from the ‘encouraging interviewees to say more’ component, CCA has two more components: ‘imposing cognitive load’ and ‘asking unexpected questions’. The rationale behind imposing cognitive load is that research has shown that during interviews lie tellers experience more cognitive load than truth tellers [106,107,108]. Investigators can exploit this difference by making the interview setting more difficult. This should impair lie tellers more than truth tellers because the lie tellers’ cognitive resources are more depleted. Recent research supports this hypothesis [109]. Participants discussed their true opinions or false opinions about a societal issue (e.g., abortion, capital punishment, cannabis for personal use). In addition to this, half of the participants had to remember a car license plate at the same time. Larger speech differences between truth tellers and lie tellers occurred in this ‘secondary task’ condition than when they were just asked to discuss their opinion.
Lie tellers prepare themselves for interviews by thinking about answers they could give to possible questions. The weakness in their planning is that they can never know which questions will be asked. Investigators can exploit this by asking a mixture of expected and unexpected questions. For truth tellers, answering the two types of question does not differ in difficulty. In contrast, since lie tellers can give their prepared answers to the expected questions but need to fabricate answers to the unexpected questions, the unexpected questions are considerably more difficult for them to answer. Examples of unexpected questions are spatial questions [110] and questions about the planning of activities [111,112].
Recently, several CCA techniques and an adaptation of VA were combined into one interview protocol consisting of five phases [87]. The assumption was that truth tellers will report more additional details during each phase than lie tellers, due to truth tellers’ inclination to be forthcoming and lie tellers’ inclination to keep their stories simple. An initial free recall was followed by a Model Statement as an example for a second free recall. Then, a third free recall took place in Reverse Order (to impose cognitive load) followed by a fourth free recall in which interviewees spoke and sketched at the same time. In a final free recall, interviewees were asked to include, where possible, sources the investigator could check (named witnesses, CCTV footage, receipts etc.). Truth tellers reported more specific new details (complications and verifiable sources) during the subsequent phases than lie tellers. A complication is information that affects the storyteller and makes the recall more complicated (‘Initially we did not see each other because we were waiting at different entrances’). This criterion is an adaptation of CBCA-criterion 7: unexpected complications.
CCA has been widely researched to date. A meta-analysis of CCA research including 25 studies showed that using CCA techniques resulted in a moderate increase in differentiating between truth tellers and lie tellers (d = 0.42) compared with a standard condition [85]. A second meta-analysis included 23 studies in which observers attempted to detect deceit [113]. It revealed that observers were more accurate at distinguishing between truth tellers and lie tellers when a CCA technique was employed in the interview (60% accuracy) than when it was not employed (48% accuracy). CCA particularly increased accuracy rates when observers were knowledgeable about the verbal cues they should attend to. That is, knowledgeable observers obtained much higher accuracy (76%) than naïve observers (52%) [113].

5.4. Reality Interviewing

Reality Interviewing [88,89] is a standardised interview protocol consisting of five phases. It rests on the same assumption as CCA: truth tellers will report more additional details during each phase than lie tellers, due to truth tellers' inclination to be forthcoming and lie tellers' inclination to keep their stories simple. Stage 1 (initial free recall stage) is an initial request to interviewees to report all they can remember about the alleged event. The initial free recall is followed by mnemonics, interview techniques that facilitate enhanced memory recall [114]. These mnemonics are derived from the Cognitive Interview [115,116,117], an interview protocol designed to elicit more information from cooperative witnesses. In Stage 2 (mental reinstatement of the context stage), participants are asked to report the event again, but this time to think about and include all associated details such as sights, sounds, smells, emotions, thoughts at the time, or anything else they remember from the time of the event. In Stage 3 (other perspective stage), participants are asked to recall the alleged event from another perspective: 'If someone else had been present, what would they have seen?' In Stage 4 (reverse order stage), participants are asked to report the event in reverse chronological order. In Stage 5 (final free recall stage), participants are asked for a final time to describe the alleged event in as much detail as possible.
The Reality Interview also contains nine multiple-choice questions (each with two alternatives) which are embedded between the interview stages. For example, 'If a police officer had been present, would they have noticed something wrong?' These multiple-choice questions are included to make the participant think more deeply about the event. Answering 'No' makes it less likely that a follow-up question is asked and thus aligns with lie tellers' strategy to keep their stories simple.
Of the four interview protocols discussed in this section, RI is the least examined. All RI research to date examines the ability to detect truths and lies via the RI protocol as a whole, without examining the diagnostic value of each of the five phases. However, we know from deception research that the Reverse Order instruction can facilitate lie detection [75,118]. No meta-analysis examining the efficacy of RI is yet available, so the results should be interpreted with some caution, but the available studies showed that around 75% of truth tellers and lie tellers were correctly classified with the tool [119].

6. Comparing SUE, VA, CCA and RI

Table 2 summarises aspects of SUE, VA, CCA, and RI that we believe are important when comparing the four interview protocols. All protocols except SUE use the RM list of criteria (sensory and contextual details) as dependent variables. In VA, a distinction is made between details that are verifiable and details that are not. More recently, VA examines the proportion of contextual details in a statement (contextual details / [contextual details + sensory details]). CCA measures, apart from sensory and contextual details, complications (adapted from CBCA) and verifiable sources (adapted from VA). SUE is the only protocol that does not use RM coding, as it purely focuses on inconsistencies (within a statement or between the statement and available evidence).
All interview protocols are standardised in terms of procedure. An initial unprompted free recall is followed by specific questions and instructions. These questions and instructions are always the same in VA, CCA, and RI interviews but differ in SUE because they depend on the available evidence. However, the way to ask questions related to the evidence and how to strategically disclose the evidence is standardised in SUE (from weak source/low specificity questions to strong source/high specificity questions [91,96]), like the standardised procedure of adaptive testing [120].
All four interview protocols have received empirical support that they can successfully discriminate truth tellers from lie tellers, although only for RI is this support not based on a meta-analysis. Research into the four protocols is concentrated in a few labs (SUE: Granhag and Hartwig; VA: Nahari and Vrij; CCA: Vrij; and RI: Colwell). This is a limitation because we could have more faith in a verbal lie detection tool if researchers working in independent labs showed that the tool works.
It is difficult to compare the diagnostic value of the protocols, but comparing the effect sizes may give us an estimate. Since effect sizes for RI do not exist, we only compare SUE, VA, and CCA. The results show that support for SUE is the strongest and support for CCA the weakest. There appears to be a positive relationship between the independent evidence available (available in SUE vs. potentially available in VA vs. not available in CCA) and diagnostic value. One could argue that, if the correct interview protocol is used, lie detection is somewhat easier when a statement can be compared with evidence than when this is not possible. However, another explanation for the superior effect sizes for SUE compared with CCA is possible. SUE experiments are specifically designed for the technique: the interviewer possesses evidence that can be used in a SUE interview. CCA experiments are typically not specifically designed for CCA assessments. That is, interviewees report self-generated stories about an alleged trip they made or about a memorable event [121]. If these events only contained a few complications, CCA assessments are unlikely to reveal substantial veracity effects. If researchers were to construct events that include many complications truth tellers could report, CCA assessments could become more diagnostic.
Cross-cultural research testing the efficacy of the four interview protocols is scarce, and we know of only one such experiment. In [122], researchers tested the efficacy of the Model Statement technique in Russian and South Korean participants and found that it was effective in eliciting complications as a cue to truthfulness. The same finding was also obtained in UK participants [102], suggesting that the Model Statement technique works cross-culturally.
All four interview protocols are used by practitioners, including police and intelligence services, but their use is limited. There are at least three reasons for this. First, there is not much opportunity to learn the protocols because commercial training programs are not available. Second, for these protocols to work, interviewers should ask just a few open-ended questions and provide many opportunities for interviewees to talk. This is not how interviewers typically conduct interviews. Interviewers often interrupt interviewees [123,124] and ask many questions to which interviewees only give short answers [125,126]. The discrepancy between a typically conducted interview and an interview suited to the four interview protocols makes it difficult to teach the protocols to practitioners. Not only do practitioners need to learn how to use the four protocols, they also need to abandon their old habit of asking so many questions. Third, some of the instructions may limit the protocols' use. For example, in the United Kingdom, defence lawyers will object to the Other Perspective Stage because it invites suspects to discuss something hypothetical (what would another person have experienced).
The underlying rationale of all four interview protocols is that truth tellers are willing to be forthcoming. They also assume that truth tellers do not spontaneously report all information they know, and that specific questions or instructions are needed to achieve this. CCA and RI put more effort than SUE and VA into encouraging truth tellers to report all they know by using memory-enhancement techniques (e.g., sketching, context reinstatement). CCA goes one step further than RI. CCA also includes instructions to: (i) raise expectations amongst truth tellers about how much information they are expected to report (e.g., Model Statement) and (ii) motivate truth tellers to report more information (e.g., supportive interviewer). This suggests that CCA elicits more information than RI, which was indeed found in the only experiment to date in which the two protocols were compared [65].
The underlying rationale of SUE and VA is that lie tellers strategically avoid reporting incriminating information. Since evidence is available to the interviewer in SUE or potentially available in VA (e.g., truth tellers may be able to report evidence), these interview protocols are designed to exploit differences between truth tellers and lie tellers in being forthcoming or evasive in reporting critical evidence. The underlying rationale of CCA and RI is that lie tellers prefer to keep their stories simple. Since evidence is unavailable when CCA and RI are used, these protocols are designed to exploit differences between truth tellers and lie tellers in being forthcoming or reluctant when reporting information in general.
Which interview protocol to use depends on the available evidence. If interviewers possess independent evidence about the event under investigation (e.g., witnesses, CCTV footage, credit card receipts), SUE may be the preferred method. If evidence is potentially available (truth tellers will be able to provide evidence when asked to do so), VA may be the preferred method. If interviewers do not possess evidence and truth tellers are not able to provide such evidence, CCA and RI are the preferred methods. In police interviews, investigators often possess evidence. In other scenarios, the situation could be different. In situations where evidence is not available or cannot be revealed to the suspect, investigators can ask interviewees to provide such evidence. However, investigators can be reluctant to ask interviewees to provide evidence because interviewees could interpret this as an indication that the evidence against them is weak. If interviewees think that the evidence against them is weak, they may decide to 'shut down' [127,128], which will impair the interviewers' ability to gather information and to detect deceit. It is also possible that truth tellers cannot provide verifiable information about key elements of their activities. For example, a source sent on a mission could be able to provide verifiable evidence about the location he went to but may be unable to demonstrate what he did there (the key element) when CCTV cameras and witnesses are unavailable.
Note that the recommendation for when to use which tool could be more complex than suggested in the previous paragraph. For example, if the interviewer has little, and particularly weak, evidence regarding the event, but the event included many complications, CCA could be recommended rather than SUE.

7. The Future of Verbal Lie Detection

In recent years we have suggested several avenues for future verbal lie detection research [129,130]. These included researching verbal lie detection: (i) in cross-cultural settings, (ii) in different deception scenarios (vetting scenarios, lying about intentions, lying through omitting information), and (iii) when using interpreters. We recommended searching for new verbal cues, particularly those that indicate deceit (lie tellers report the cue more than truth tellers) rather than truthfulness (truth tellers report the cue more than lie tellers). We also suggested research examining the efficacy of countermeasures (can lie tellers sound like truth tellers when they are aware of how the interview protocols work?) and research aimed at further developing interview protocols that use within-subjects comparisons (comparing different responses made by the same interviewee in a single interview). For more information about these ideas, see [129,130].
In this section, we outline some ideas about how insight into truth tellers' and lie tellers' brain activity could help verbal lie detection. To date, neuroscience-based research on verbal lie detection has focused on measuring brain activity in response to simple answers given by individuals who were asked single questions [131]. This is a much more basic interview protocol than used in real life, where a single verbal response often raises suspicion in an interviewer and leads to follow-up questions that home in on the potential lie. This more interactive interviewing approach is almost impossible with neuroimaging, in part because the signal-to-noise ratio of ongoing brain signals for complex cognitive processes such as lying, especially in interactive settings, is much smaller than that of overt signals such as verbal responses. That is, the neural signature associated with a specific verbal response would be buried in neural noise produced by everything else that occurs in the interviewee's brain during that event and could not be used to guide the formulation of further questions. Reducing such noise would require combining information from several tens of similar events over extended periods of time. In addition, there is the practical problem of having to get access to and use MRI machines each time an interview is conducted. However, there are more ways in which the earlier discussed ideas about interviewing to detect deception could benefit from neuroscience techniques (e.g., functional magnetic resonance imaging, functional near-infrared spectroscopy, electroencephalography, transcranial magnetic stimulation). These ways are indirect. That is, they provide further insight into verbal lie detection, and this insight could further advance the development of verbal lie detection tools. Below are a few suggestions.
First, meta-analyses of brain imaging studies of deception have generally supported the idea that deception is associated with an increase in cognitive control compared with truth telling, as suggested by stronger ventrolateral, dorsolateral, and medial prefrontal cortical engagement [131,132,133]. Precise information about the activation patterns in these brain regions during tasks that require cognitive control could potentially be used to enhance verbal lie detection methods. Neuroimaging data could be used to try to maximise interference effects during an interview by identifying which secondary tasks generate brain activation patterns that overlap the most with those produced when generating lies. Indeed, there is evidence that the degree of overlap between the neural patterns elicited by two tasks predicts the degree of behavioural interference between them. For example, Reference [134] found that the more similar the patterns of brain activity elicited by two tasks (e.g., 2-back and tone counting), the lower the accuracy when the two tasks were performed simultaneously.
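The logic of selecting a secondary task by neural-pattern overlap can be sketched as follows. The activation vectors below are entirely hypothetical; in practice similarity would be computed over voxel-wise activation maps, and cosine similarity is only one of several possible overlap measures:

```python
import math

def pattern_overlap(a, b):
    """Cosine similarity between two activation vectors defined over the
    same set of brain regions (hypothetical illustration)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented activation vectors over the same four regions of interest.
lying_task = [0.8, 0.6, 0.7, 0.2]
two_back   = [0.7, 0.5, 0.6, 0.3]  # hypothetical secondary-task candidate A
tone_count = [0.1, 0.2, 0.1, 0.9]  # hypothetical secondary-task candidate B

# The candidate with greater overlap with lying would be predicted to
# interfere more with lie production when performed concurrently [134].
best_candidate = max([two_back, tone_count],
                     key=lambda v: pattern_overlap(lying_task, v))
```

Under this (invented) data, the 2-back-like candidate overlaps more with the lying pattern and would therefore be the preferred secondary task.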
Second, instead of indirectly interfering with brain regions involved in generating lies via a secondary task, a more direct approach to potentially boost lie detection would be to transiently deactivate brain regions involved in producing lies using non-invasive brain stimulation methods such as transcranial magnetic stimulation [135]. For example, one could use Theta Burst Stimulation [136] to temporarily disrupt function in one or more brain regions that routinely show up in brain imaging studies as being activated when lying and then administer an interviewing-to-detect-deception protocol. Brain stimulation could greatly amplify the verbal veracity differences because lie tellers should find it much more difficult (or impossible) to generate lies. Of course, special attention to neuroethical issues is needed when using these methods in a lie detection context.
Third, brain activity could also be compared when interviewees answer expected and unexpected questions, assuming that a sufficiently large number of such questions can be asked during the interview. Unexpected questions should surprise truth tellers and lie tellers in equal measure. However, it should be more difficult for lie tellers to answer the unexpected questions than the expected questions, because they can give their planned answers to the expected questions but must produce spontaneous answers to the unexpected questions. This difference in difficulty could become apparent in lie tellers’ brain activity, as already documented when comparing brain activation patterns during memorised and spontaneous lying [137]. In contrast, for truth tellers, the difference in difficulty in answering the expected and unexpected questions should be less pronounced than in lie tellers, which could also become apparent in their brain activity.
In addition, when comparing the brain activity of truth tellers and lie tellers in answering the unexpected questions, there should be greater brain activation in brain networks involved in episodic memory retrieval in truth tellers (because they will think back about the event), whereas in lie tellers, there should be greater engagement of brain networks involved in creative cognition (because they will make up an answer) [138].
Fourth, brain activation data could also be useful with the Model Statement technique. Listening to a Model Statement makes both truth tellers and lie tellers realise that they are expected to provide additional details. When realising this, truth tellers could continue listening to the Model Statement because the additional information they are going to report is already available to them. In contrast, lie tellers must start thinking about possible extra details they must report. This will shift attention away from listening to the Model Statement because they will be unable to think about what to say next and listen to the Model Statement at the same time. Measuring brain activity while interviewees are engaged in listening to a Model Statement could test this hypothesis. We expect that brain areas related to listening and verbal decoding should be more engaged in truth tellers than in lie tellers because of truth tellers’ enhanced listening to the Model Statement [139]. Brain regions involved in creative cognition should be more engaged in lie tellers than in truth tellers because of lie tellers’ need to generate additional details [138].
Fifth, cognitive neuroscientists could compare truth tellers' and lie tellers' brain activity when they report an event that supposedly happened a long time ago (i.e., years ago). Truth tellers should retrieve remote autobiographical memories (from when the events of interest took place), whereas lie tellers should retrieve more recent autobiographical memories (from when they planned what to say during the interview). These two types of memories elicit different patterns of activation in brain regions such as the ventromedial prefrontal cortex and the hippocampus [140], and these differences could be used to help decide whether the interviewee is lying or telling the truth. A similar pattern of results could emerge when interviewees are repeatedly interviewed about the same event. In those situations, truth tellers tend to think about the event again [141]. In contrast, lie tellers are typically concerned about consistency [142] and will think about what they said in the previous interview [141]. Repeated interviewing should therefore, in truth tellers, activate brain areas associated with remote autobiographical memories (when the event occurred) and, in lie tellers, brain areas associated with more recent autobiographical memories (e.g., the time of their first interview). The same finding could also emerge when brain activity is examined in a reverse order interview, in which an initial free recall is followed by a request to report the event in reverse order. Truth tellers are expected to think about the event again when recalling their experiences in reverse order, whereas lie tellers will be inclined to think about what they just said so that they can repeat themselves in reverse order.
Sixth, brain activity could also be examined using the sketching method. The request to sketch whilst recalling the event should encourage truth tellers to visualise their experiences and hence engage brain regions involved in visual mental imagery and episodic memory retrieval [143,144]. In contrast, lie tellers reporting something they have not experienced but only fabricated, often on the fly, may show increased engagement of brain regions involved in semantic memory representation and reactivation [145]. Sometimes lie tellers will use content from real experiences to design their lies. These could be detected if the lie teller uses an existing visual memory of a real event that they experienced much earlier or much later than the alleged event, as we described in the previous paragraph.
Although these neuroscience-based methods look promising, it is important to keep in mind that a critical issue for their application is whether they provide reliable information at the level of the individual person. In other words, even if certain neural differences are measurable when averaging data over a group, they may be too small to be diagnostic for a single individual, at least with current technology and paradigms. Determining this will require systematic, interdisciplinary collaboration between neuroscientists, verbal lie detection researchers, and practitioners.

Author Contributions

Writing—original draft, A.V.; Writing—review & editing, P.A.G., T.A., G.G., S.L. and R.P.F. All authors have read and agreed to the published version of the manuscript.


Funding

The time the first author spent working on this article was funded by the Centre for Research and Evidence on Security Threats (ESRC Award: ES/N009614/1).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Larson, J.A. Lying and Its Detection: A Study of Deception and Deception Tests; University of Chicago Press: Chicago, IL, USA, 1932.
2. Reid, J.E.; Arther, R.O. Behavior symptoms of lie-detector subjects. J. Crim. Law Criminol. Police Sci. 1953, 44, 104–108.
3. Knapp, M.L.; Hart, R.P.; Dennis, H.S. An exploration of deception as a communication construct. Hum. Commun. Res. 1974, 1, 15–29.
4. Farwell, L.A.; Donchin, E. The 'brain detector': P300 in the detection of deception. Psychophysiology 1986, 23, 434.
5. Farwell, L.A.; Donchin, E. Event-related brain potentials in interrogative polygraphy: Analysis using bootstrapping. Psychophysiology 1988, 25, 445.
6. Rosenfeld, J.P.; Cantwell, B.; Nasman, V.T.; Wojdac, V.; Ivanov, S.; Mazzeri, L. A modified, event-related potential-based guilty knowledge test. Int. J. Neurosci. 1988, 24, 157–161.
7. Vrij, A.; Fisher, R.P.; Leal, S. How researchers can make verbal lie detection more attractive for practitioners. Psychiatry Psychol. Law 2022.
8. Vrij, A.; Mann, S.; Kristen, S.; Fisher, R. Cues to deception and ability to detect lies as a function of police interview styles. Law Hum. Behav. 2007, 31, 499–518.
9. Loftus, E.F. Intelligence gathering post-9/11. Am. Psychol. 2011, 66, 532–541.
10. Kraut, R.E. Verbal and nonverbal cues in the perception of lying. J. Pers. Soc. Psychol. 1978, 36, 380–391.
11. Hauch, V.; Blandón-Gitlin, I.; Masip, J.; Sporer, S.L. Are computers effective lie detectors? A meta-analysis of linguistic cues to deception. Personal. Soc. Psychol. Rev. 2015, 19, 307–342.
12. Vrij, A. Detecting Lies and Deceit: Pitfalls and Opportunities, 2nd ed.; John Wiley and Sons: Chichester, UK, 2014.
13. Trankell, A. Vittnespsykologins Arbetsmetoder; Liber: Stockholm, Sweden, 1963.
14. Arntzen, F. Psychologie der Zeugenaussage; Hogrefe: Göttingen, Germany, 1970.
15. Undeutsch, U. Beurteilung der Glaubhaftigkeit von Aussagen. In Handbuch der Psychologie Vol. 11: Forensische Psychologie; Undeutsch, U., Ed.; Hogrefe: Göttingen, Germany, 1967; pp. 26–181.
16. Ceci, S.J.; Bruck, M. Jeopardy in the Courtroom; American Psychological Association: Washington, DC, USA, 1995.
17. Köhnken, G.; Steller, M. The evaluation of the credibility of child witness statements in the German procedural system. In The Child Witness: Do the Courts Abuse Children? Davies, G., Drinkwater, J., Eds.; (Issues in Criminological and Legal Psychology, No. 13); British Psychological Society: Leicester, UK, 1988; pp. 37–45.
18. Steller, M.; Köhnken, G. Criteria-Based Content Analysis. In Psychological Methods in Criminal Investigation and Evidence; Raskin, D.C., Ed.; Springer: New York, NY, USA, 1989; pp. 217–245.
19. Steller, M. Recent developments in statement analysis. In Credibility Assessment; Yuille, J.C., Ed.; Kluwer: Deventer, The Netherlands, 1989; pp. 135–154.
20. Köhnken, G.; Manzanero, A.L.; Scott, T. Statement validity assessment: Myths and limitations. Anu. Psicol. Jurídica 2015, 25, 13–19.
21. Raskin, D.C.; Esplin, P.W. Statement Validity Assessment: Interview procedures and content analysis of children's statements of sexual abuse. Behav. Assess. 1991, 13, 265–291.
22. Volbert, R.; Steller, M. Is this testimony truthful, fabricated, or based on false memory? Credibility assessment 25 years after Steller and Köhnken. Eur. Psychol. 2014, 19, 207–220.
23. Köhnken, G. Statement Validity Analysis and the 'detection of the truth'. In Deception Detection in Forensic Contexts; Granhag, P.A., Strömwall, L.A., Eds.; Cambridge University Press: Cambridge, UK, 2004; pp. 41–63.
  23. Köhnken, G. Statement Validity Analysis and the ‘detection of the truth’. In Deception Detection in Forensic Contexts; Granhag, P.A., Strömwall, L.A., Eds.; Cambridge University Press: Cambridge, UK, 2004; pp. 41–63. [Google Scholar] [CrossRef]
  24. Amado, B.G.; Arce, R.; Fariña, F.; Vilarino, M. Criteria-Based Content Analysis (CBCA) reality criteria in adults: A meta-analytic review. Int. J. Clin. Health Psychol. 2016, 16, 201–210. [Google Scholar] [CrossRef]
  25. Amado, B.G.; Arce, R.; Fariña, F. Undeutsch hypothesis and Criteria Based Content Analysis: A meta-analytic review. Eur. J. Psychol. Appl. Leg. Context 2015, 7, 3–12. [Google Scholar]
  26. DePaulo, B.M.; Morris, W.L. Discerning lies from truths: Behavioural cues to deception and the indirect pathway of intuition. In Deception Detection in Forensic Contexts; Granhag, P.A., Strömwall, L.A., Eds.; Cambridge University Press: Cambridge, UK, 2004; pp. 15–40. [Google Scholar]
  27. Hartwig, M.; Bond, C.F. Lie detection from multiple cues: A meta-analysis. Applied Cognitive Psychology 2014, 28, 661–676. [Google Scholar] [CrossRef]
  28. Horvath, F.; Jayne, B.; Buckley, J. Differentiation of truthful and deceptive criminal suspects in behavioral analysis interviews. J. Forensic Sci. 1994, 39, 793–807. [Google Scholar] [CrossRef]
  29. Masip, J.; Herrero, C. What would you say if you were guilty? Suspects’ strategies during a hypothetical Behavior Analysis Interview concerning a serious crime. Appl. Cogn. Psychol. 2013, 27, 60–70. [Google Scholar] [CrossRef]
  30. Masip, J.; Herrero, C.; Garrido, E.; Barba, A. Is the Behaviour Analysis Interview just common sense? Appl. Cogn. Psychol. 2010, 25, 593–604. [Google Scholar] [CrossRef]
  31. Vrij, A.; Mann, S.; Fisher, R. An empirical test of the Behaviour Analysis Interview. Law Hum. Behav. 2012, 30, 329–345. [Google Scholar] [CrossRef]
  32. Ekman, P. Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage; (Reprinted in 1992 1985, 2001 and 2009); W. W. Norton: New York, NY, USA, 1985. [Google Scholar] [CrossRef]
  33. Ekman, P.; Friesen, W.V. Nonverbal leakage and clues to deception. Psychiatry 1969, 32, 88–106. [Google Scholar] [CrossRef]
  34. Vrij, A.; Hartwig, M.; Granhag, P.A. Reading lies: Nonverbal communication and deception. Annu. Rev. Psychol. 2019, 70, 295–317. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Alonso-Quecuty, M.L. Deception detection and Reality Monitoring: A new answer to an old question? In Psychology and Law: International Perspectives; Lösel, F., Bender, D., Bliesener, T., Eds.; Walter de Gruyter: Berlin, Germany, 1992. [Google Scholar]
  36. Alonso-Quecuty, M.L. Detecting fact from fallacy in child and adult witness accounts. In Psychology, Law, and Criminal Justice: International Developments in Research and Practice; Davies, G., Lloyd-Bostock, S., McMurran, M., Wilson, C., Eds.; Walter de Gruyter: Berlin, Germany, 1996; pp. 74–80. [Google Scholar]
  37. Höfer, E.; Akehurst, L.; Metzger, G. Reality monitoring: A chance for further development of CBCA? In Proceedings of the Annual meeting of the European Association on Psychology and Law, Siena, Italy, 28–31 August 1996. [Google Scholar]
  38. Johnson, M.K.; Raye, C.L. Reality Monitoring. Psychol. Rev. 1981, 88, 67. [Google Scholar] [CrossRef]
  39. Johnson, M.K.; Raye, C.L. False memories and confabulation. Trends Cogn. Sci. 1998, 2, 137–145. [Google Scholar] [CrossRef] [PubMed]
  40. Johnson, M.K.; Hashtroudi, S.; Lindsay, D.S. Source monitoring. Psychol. Bull. 1993, 114, 3. [Google Scholar] [CrossRef]
  41. Masip, J.; Sporer, S.; Garrido, E.; Herrero, C. The detection of deception with the reality monitoring approach: A review of the empirical evidence. Psychol. Crime Law 2005, 11, 99–122. [Google Scholar] [CrossRef]
  42. Sporer, S.L. The less travelled road to truth: Verbal cues in deception detection in accounts of fabricated and self-experienced events. Appl. Cogn. Psychol. 1997, 11, 373–397. [Google Scholar] [CrossRef]
  43. Sporer, S.L. Reality monitoring and detection of deception. In Deception Detection in Forensic Contexts; Granhag, P.A., Strömwall, L.A., Eds.; Cambridge University Press: Cambridge, UK, 2004; pp. 64–102. [Google Scholar]
  44. Gancedo, Y.; Fariña, F.; Seijo, D.; Vilariño, M.; Arce, R. Reality monitoring: A meta-analytical review for forensic practice. Eur. J. Psychol. Appl. Leg. Context 2021, 13, 99–110. [Google Scholar] [CrossRef]
  45. Sapir, A. The LSI Course on Scientific Content Analysis (SCAN); Laboratory for Scientific Interrogation: Phoenix, ZA, USA, 1987. [Google Scholar]
  46. Bogaard, G.; Meijer, E.H.; Vrij, A.; Broers, N.J.; Merckelbach, H. SCAN is largely driven by 12 criteria: Results from sexual abuse statements. Psychol. Crime Law 2014, 20, 430–449. [Google Scholar] [CrossRef] [Green Version]
  47. Driscoll, L.N. A validity assessment of written statements from suspects in criminal investigations using the SCAN technique. Police Stud. 1994, 17, 77–88. [Google Scholar]
  48. Smith, N. Reading between the Lines: An Evaluation of the Scientific Content Analysis Technique (SCAN); Police research series paper; UK Home Office, Research, Development and Statistics Directorate: London, UK, 2001. [Google Scholar]
  49. Bogaard, G.; Meijer, E.H.; Vrij, A.; Merckelbach, H. Scientific Content Analysis (SCAN) cannot distinguish between truthful and fabricated accounts of a negative event. Front. Psychol. 2016, 7, 243. [Google Scholar] [CrossRef] [Green Version]
  50. Nahari, G.; Vrij, A.; Fisher, R.P. Does the truth come out in the writing? SCAN as a lie detection tool. Law Hum. Behav. 2012, 36, 68. [Google Scholar] [CrossRef]
  51. Goormans, I.; Mergaerts, L.; Vandeviver, C. SCANning for truth. Scholars’ and practitioners’ perceptions on the use(fulness) of Scientific Content Analysis in detecting deception during police interviews. Psychol. Crime Law 2022. [Google Scholar] [CrossRef]
  52. Bogaard, G.; Meijer, E.H.; Vrij, A.; Broers, N.J.; Merckelbach, H. Contextual bias in verbal credibility assessment: Criteria-Based content analysis, Reality Monitoring and Scientific Content Analysis. Appl. Cogn. Psychol. 2014, 28, 79–90. [Google Scholar] [CrossRef] [Green Version]
  53. Vanderhallen, M.; Jaspaert, E.; Vervaeke, G. Scan as an investigative tool. Police Pract. Res. 2015, 17, 279–293. [Google Scholar] [CrossRef]
  54. Leal, S.; Vrij, A.; Vernham, Z.; Dalton, G.; Jupe, L.; Harvey, A.; Nahari, G. Cross-cultural verbal deception. Leg. Criminol. Psychol. 2018, 23, 192–213. [Google Scholar] [CrossRef] [Green Version]
  55. Taylor, P.J.; Larner, S.; Conchie, S.M.; Menacere, T. Culture moderates changes in linguistic self-presentation and detail provision when deceiving others. R. Soc. Open Sci. 2017, 4, 170128. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Taylor, P.J.; Larner, S.; Conchie, S.M.; van der Zee, S. Cross-cultural deception detection. In Detecting Deception: Current Challenges and Cognitive Approaches; Granhag, P.A., Vrij, A., Verschuere, B., Eds.; John Wiley & Sons: Chichester, UK, 2014. [Google Scholar]
  57. Cacuci, S.A.; Bull, R.; Huang, C.Y.; Visu-Petra, L. Criteria-Based Content Analysis in child sexual abuse cases: A cross-cultural perspective. Child Abus. Rev. 2021, 30, 520–535. [Google Scholar] [CrossRef]
  58. Vrij, A. Criteria-Based Content Analysis: A qualitative review of the first 37 studies. Psychol. Public Policy Law 2005, 11, 3–41. [Google Scholar] [CrossRef]
  59. Buck, J.A.; Warren, A.R.; Betman, S.; Brigham, J.C. Age differences in Criteria-Based Content Analysis scores in typical child sexual abuse interviews. Appl. Dev. Psychol. 2002, 23, 267–283. [Google Scholar] [CrossRef]
  60. Wells, G.L.; Loftus, E.F. Commentary: Is this child fabricating? Reactions to a new assessment technique. In The Suggestibility of Children’s Recollections; Doris, J., Ed.; American Psycholo gical Association: Washington, DC, USA, 1991; pp. 168–171. [Google Scholar]
  61. Hershkowitz, I.; Fisher, S.; Lamb, M.E.; Horowitz, D. Improving credibility assessment in child sexual abuse allegations: The role of the NICHD investigative interview protocol. Child Abus. Negl. 2007, 31, 99–110. [Google Scholar] [CrossRef]
  62. Hershkowitz, I.; Lamb, M.E.; Sternberg, K.J.; Esplin, P.W. The relationships among interviewer utterance type, CBCA scores and the richness of children’s responses. Leg. Criminol. Psychol. 1997, 2, 169–176. [Google Scholar] [CrossRef]
  63. Gumpert, C.H.; Lindblad, F. Expert testimony on child sexual abuse: A qualitative study of the Swedish approach to statement analysis. Expert Evid. 1999, 7, 279–314. [Google Scholar] [CrossRef]
  64. Vrij, A.; Mann, S.; Jundi, S.; Hillman, J.; Hope, L. Detection of concealment in an information-gathering interview. Appl. Cogn. Psychol. 2014, 28, 860–866. [Google Scholar] [CrossRef] [Green Version]
  65. Leal, A.; Vrij, A.; Deeb, H.; Fisher, R.P. Interviewing to detect omission lies. Appl. Cogn. Psychol. 2022. [Google Scholar] [CrossRef]
  66. Leins, D.; Fisher, R.P.; Ross, S.J. Exploring liars’ strategies for creating deceptive reports. Leg. Criminol. Psychol. 2013, 18, 141–151. [Google Scholar] [CrossRef]
  67. Vrij, A.; Meissner, C.A.; Fisher, R.P.; Kassin, S.M.; Morgan, A., III; Kleinman, S. Psychological perspectives on interrogation. Perspect. Psychol. Sci. 2017, 12, 927–955. [Google Scholar] [CrossRef] [Green Version]
  68. DePaulo, B.M.; Lindsay, J.L.; Malone, B.E.; Muhlenbruck, L.; Charlton, K.; Cooper, H. Cues to deception. Psychol. Bull. 2003, 129, 74–118. [Google Scholar] [CrossRef]
  69. Bond, C.F.; DePaulo, B.M. Accuracy of deception judgements. Personal. Soc. Psychol. Rev. 2006, 10, 214–234. [Google Scholar] [CrossRef]
  70. Vrij, A.; Hope, L.; Fisher, R.P. Eliciting reliable information in investigative interviews. Policy Insights Behav. Brain Sci. 2014, 1, 129–136. [Google Scholar] [CrossRef] [Green Version]
  71. Kleiner, M. Handbook of Polygraph Testing; Academic Press: San Diego, CA, USA, 2002. [Google Scholar]
  72. Honts, C.R.; Thurber, S.; Handler, M. A comprehensive meta-analysis of the comparison question test. Appl. Cogn. Psychol. 2021, 35, 411–427. [Google Scholar] [CrossRef]
  73. Iacono, W.G.; Ben-Shakhar, G. Current status of forensic lie detection with the comparison question test: An update of the 2003 National Academy of Sciences report on polygraph testing. Law Hum. Behav. 2019, 43, 86–98. [Google Scholar] [CrossRef] [PubMed]
  74. Hartwig, M.; Granhag, P.A.; Strömwall, L.; Vrij, A. Detecting deception via strategic disclosure of evidence. Law Hum. Behav. 2005, 29, 469–484. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Vrij, A.; Mann, S.; Fisher, R.; Leal, S.; Milne, B.; Bull, R. Increasing cognitive load to facilitate lie detection: The benefit of recalling an event in reverse order. Law Hum. Behav. 2008, 32, 253–265. [Google Scholar] [CrossRef] [PubMed]
  76. Granhag, P.A.; Hartwig, M. A new theoretical perspective on deception detection: On the psychology of instrumental mind-reading. Psychol. Crime Law 2008, 14, 189–200. [Google Scholar] [CrossRef]
  77. Hartwig, M.; Granhag, P.A.; Strömwall, L. Guilty and innocent suspects’ strategies during police interrogations. Psychol. Crime Law 2007, 13, 213–227. [Google Scholar] [CrossRef]
  78. Hartwig, M.; Granhag, P.A.; Strömwall, L.; Doering, N. Impression and information management: On the strategic self-regulation of innocent and guilty suspects. Open Criminol. J. 2010, 3, 10–16. [Google Scholar] [CrossRef]
  79. Vrij, A.; Mann, S.; Leal, S.; Granhag, P.A. Getting into the minds of pairs of liars and truth tellers: An examination of their strategies. Open Criminol. J. 2010, 3, 17. [Google Scholar] [CrossRef] [Green Version]
  80. Nahari, G. The applicability of the Verifiability Approach to the real world. In Detecting Concealed Information and Deception: Verbal, Behavioral, and Biological Methods; Rosenfeld, P., Ed.; Academic Press: San Diego, CA, USA, 2018; pp. 329–350. [Google Scholar] [CrossRef]
  81. Gilovich, T.; Savitsky, K.; Medvec, V.H. The illusion of transparency: Biased assessments of others’ ability to read one’s emotional states. J. Personal. Soc. Psychol. 1998, 75, 332–346. [Google Scholar] [CrossRef]
  82. Kassin, S.M.; Appleby, S.C.; Torkildson-Perillo, J. Interviewing suspects: Practice, science, and future directions. Leg. Criminol. Psychol. 2010, 15, 39–55. [Google Scholar] [CrossRef] [Green Version]
  83. DePaulo, B.M.; Kirkendol, S.E. The motivational impairment effect in the communication of deception. In Credibility Assessment; Yuille, J.C., Ed.; Kluwer: Dordrecht, The Netherlands, 1989; pp. 51–70. [Google Scholar]
  84. Buller, D.B.; Burgoon, J.K. Interpersonal deception theory. Commun. Theory 1996, 6, 203. [Google Scholar] [CrossRef]
  85. Vrij, A.; Fisher, R.; Blank, H. A cognitive approach to lie detection: A meta-analysis. Leg. Criminol. Psychol. 2017, 22, 1–21. [Google Scholar] [CrossRef] [Green Version]
  86. Vrij, A.; Fisher, R.; Blank, H.; Leal, S.; Mann, S. A cognitive approach to elicit nonverbal and verbal cues of deceit. In Cheating, Corruption, and Concealment: The Roots of Dishonest Behavior; van Prooijen, J.W., van Lange, P.A.M., Eds.; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  87. Vrij, A.; Mann, S.; Leal, S.; Fisher, R.P. Combining verbal veracity assessment techniques to distinguish truth tellers from lie tellers. Eur. J. Psychol. Appl. Leg. Context 2021, 13, 9–19. [Google Scholar] [CrossRef]
  88. Bogaard, G.; Colwell, K.; Crans, S. Using the Reality Interview improves the accuracy of the Criteria-Based Content Analysis and Reality Monitoring. Appl. Cogn. Psychol. 2019, 33, 1018–1031. [Google Scholar] [CrossRef] [Green Version]
  89. Colwell, K.; Hiscock-Anisman, C.K.; Memon, A.; Taylor, L.; Prewett, J. Assessment Criteria Indicative of Deception (ACID): An integrated system of investigative interviewing and detecting deception. J. Investig. Psychol. Offender Profiling 2007, 4, 167–180. [Google Scholar] [CrossRef]
  90. Granhag, P.A.; Hartwig, M. The Strategic Use of Evidence (SUE) technique: A conceptual overview. In Deception Detection: Current Challenges and New Approaches; Granhag, P.A., Vrij, A., Verschuere, B., Eds.; Wiley: Chichester, UK, 2015; pp. 231–251. [Google Scholar]
  91. Granhag, P.A.; Strömwall, L.A.; Willén, R.; Hartwig, M. Eliciting cues to deception by tactical disclosure of evidence: The first test of the Evidence Framing Matrix. Leg. Criminol. Psychol. 2013, 18, 341–355. [Google Scholar] [CrossRef]
  92. Hartwig, M.; Granhag, P.A.; Luke, T. Strategic use of evidence during investigative interviews: The state of the science. In Credibility Assessment: Scientific Research and Applications; Raskin, D.C., Honts, C.R., Kircher, J.C., Eds.; Academic Press: Oxford, UK, 2014; pp. 1–36. [Google Scholar]
  93. Palena, N.; Caso, L.; Vrij, A.; Nahari, G. The Verifiability Approach: A meta-analysis. J. Appl. Res. Mem. Cogn. 2021, 10, 155–166. [Google Scholar] [CrossRef]
  94. Nahari, G. Verifiability approach: Applications in different judgmental settings. In The Palgrave Handbook of Deceptive Communication; Docan-Morgan, T., Ed.; Palgrave Macmillan: New York, NY, USA, 2019; pp. 213–225. [Google Scholar] [CrossRef]
  95. Vrij, A.; Nahari, G. The Verifiability Approach. In Evidence-Based Investigative Interviewing; Dickinson, J.J., Compo, N.S., Carol, R.N., Schwartz, B.L., McCauley, M.R., Eds.; Routledge Press: New York, NY, USA, 2019; pp. 116–133. [Google Scholar] [CrossRef]
  96. Hartwig, M.; Granhag, P.A. Strategic use of evidence (SUE): A review of the technique and its principles. In Interviewing and Interrogation: A Review of Research and Practice Since World War II; Oxburgh, G., Myklebust, T., Fallon, M., Hartwig, M., Eds.; Torkel Opsahl Academic Epublisher: Brussels, Belgium, 2022. [Google Scholar]
  97. Bell, B.E.; Loftus, E.F. Trivial persuasion in the courtroom: The power of (a few) minor details. J. Personal. Soc. Psychol. 1989, 56, 669–679. [Google Scholar] [CrossRef]
  98. Nahari, G.; Vrij, A.; Fisher, R.P. Exploiting liars’ verbal strategies by examining the verifiability of details. Leg. Criminol. Psychol. 2014, 19, 227–239. [Google Scholar] [CrossRef]
  99. Nisin, Z.; Nahari, G.; Goldsmith, M. Lies divorced from context: Evidence for Context Embedded Perception (CEP) as a feasible measure for deception detection. Psychol. Crime Law 2022. [Google Scholar] [CrossRef]
  100. Leal, S.; Vrij, A.; Warmelink, L.; Vernham, Z.; Fisher, R. You cannot hide your telephone lies: Providing a model statement as an aid to detect deception in insurance telephone calls. Leg. Criminol. Psychol. 2015, 20, 129–146. [Google Scholar] [CrossRef]
  101. Leal, S.; Vrij, A.; Hudson, C.; Capuozzo, P.; Deeb, H. The effectiveness of different Model Statement variants for eliciting information and cues to deceit. Leg. Criminol. Psychol. 2022, 27, 247–263. [Google Scholar] [CrossRef]
  102. Vrij, A.; Leal, S.; Fisher, R.P. Verbal deception and the Model Statement as a lie detection tool. Front. Psychiatry 2018, 9, 492. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  103. Mann, S.; Vrij, A.; Shaw, D.; Leal, S.; Ewens, S.; Hillman, J.; Granhag, P.A.; Fisher, R.P. Two heads are better than one? How to effectively use two interviewers to elicit cues to deception. Leg. Criminol. Psychol. 2013, 18, 324–340. [Google Scholar] [CrossRef]
  104. Vrij, A.; Leal, S.; Fisher, R.P.; Mann, S.; Dalton, G.; Jo, E.; Shaboltas, A.; Khaleeva, M.; Granskaya, J.; Houston, K. Sketching as a technique to elicit information and cues to deceit in interpreter-based interviews. J. Appl. Res. Mem. Cogn. 2018, 7, 303–313. [Google Scholar] [CrossRef] [Green Version]
  105. Vrij, A.; Mann, S.; Leal, S.; Fisher, R.P.; Deeb, H. Sketching while narrating as a tool to detect deceit. Appl. Cogn. Psychol. 2020, 34, 628–642. [Google Scholar] [CrossRef]
  106. Christ, S.E.; Van Essen, D.C.; Watson, J.M.; Brubaker, L.E.; McDermott, K.B. The Contributions of Prefrontal Cortex and Executive Control to Deception: Evidence from Activation Likelihood Estimate Meta-analyses. Cereb. Cortex 2009, 19, 1557–1566. [Google Scholar] [CrossRef] [Green Version]
  107. Suchotzki, K.; Verschuere, B.; Van Bockstaele, B.; Ben-Shakhar, G.; Crombez, G. Lying takes time: A meta-analysis on reaction time measures of deception. Psychol. Bull. 2017, 143, 428–453. [Google Scholar] [CrossRef] [Green Version]
  108. Van ‘t Veer, A.; Stel, M.; van Beest, I. Limited capacity to lie: Cognitive load interferes with being dishonest. Judgm. Decis. Mak. 2014, 9, 199–206. [Google Scholar] [CrossRef]
  109. Vrij, A.; Deeb, H.; Leal, S.; Fisher, R.P. The effects of a secondary task on true and false opinion statements. Int. J. Psychol. Behav. Anal. 2022, 7, 185. [Google Scholar] [CrossRef]
  110. Vrij, A.; Leal, S.; Granhag, P.A.; Mann, S.; Fisher, R.P.; Hillman, J.; Sperry, K. Outsmarting the liars: The benefit of asking unanticipated questions. Law Hum. Behav. 2009, 33, 159–166. [Google Scholar] [CrossRef]
  111. Knieps, M.; Granhag, P.A.; Vrij, A. Back to the future: Asking about mental images to discriminate between true and false intentions. J. Psychol. Interdiscip. Appl. 2013, 147, 619–640. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  112. Knieps, M.; Granhag, P.A.; Vrij, A. Repeated visits to the future: Asking about mental images to discriminate between true and false intentions. Int. J. Adv. Psychol. 2013, 2, 93–102. [Google Scholar]
  113. Mac Giolla, E.; Luke, T. Does the cognitive approach to lie detection improve the accuracy of human observers? Appl. Cogn. Psychol. 2020, 35, 385–392. [Google Scholar] [CrossRef]
  114. Colwell, K.; Hiscock-Anisman, C.K.; Fede, J. Assessment Criteria Indicative of Deception: An example of the new paradigm of differential recall enhancement. In Applied Issues in Investigative Interviewing; Cooper, B.S., Griesel, D., Ternes, M., Eds.; Springer: New York, NY, USA, 2013; pp. 259–292. [Google Scholar] [CrossRef]
  115. Fisher, R.P.; Geiselman, R.E. Memory Enhancing Techniques for Investigative Interviewing: The Cognitive Interview; Charles, C., Ed.; Thomas Publisher: Springfield, IL, USA, 1992. [Google Scholar]
  116. Memon, A.; Meissner, C.A.; Fraser, J. The cognitive interview: A meta-analytic review and study space analysis of the past 25 years. Psychol. Public Policy Law 2010, 16, 340–372. [Google Scholar] [CrossRef]
  117. Satin, G.E.; Fisher, R.P. Investigative utility of the Cognitive Interview: Describing and finding perpetrators. Law Hum. Behav. 2019, 43, 491–506. [Google Scholar] [CrossRef] [PubMed]
  118. Evans, J.R.; Michael, S.W.; Meissner, C.A.; Brandon, S.E. Validating a new assessment method for deception detection: Introducing a Psychologically Based Credibility Assessment Tool. J. Appl. Res. Mem. Cogn. 2013, 2, 33–41. [Google Scholar] [CrossRef]
  119. Vrij, A. Verbal lie detection tools from an applied perspective. In Detecting Concealed Information and Deception: Recent Developments; Rosenfeld, J.P., Ed.; Academic Press: San Diego, CA, USA, 2018; pp. 297–321. [Google Scholar] [CrossRef]
  120. Meijer, R.R.; Nering, M.L. Computerized adaptive testing: Overview and introduction. Appl. Psychol. Meas. 1999, 23, 187–194. [Google Scholar] [CrossRef]
  121. Vrij, A.; Palena, N.; Leal, S.; Casa, L. The relationship between complications, common knowledge details and self-handicapping strategies and veracity: A Meta-analysis. Eur. J. Psychol. Appl. Leg. Context 2021, 13, 55–77. [Google Scholar] [CrossRef]
  122. Vrij, A.; Leal, S.; Mann, S.; Shaboltas, A.; Khaleeva, M.; Granskaya, J.; Jo, E. Using the Model Statement Technique as a Lie Detection Tool: A Cross- cultural comparison. Psychol. Russ. State Art 2019, 12, 19–33. [Google Scholar] [CrossRef]
  123. Fisher, R.P.; Geiselman, R.E.; Raymond, D.S. Critical analysis of police interviewing techniques. J. Police Sci. Adm. 1987, 15, 177–185. [Google Scholar]
  124. Snook, B.; Keating, K. A field study of adult witness interviewing practices in a Canadian police organization. Leg. Criminol. Psychol. 2011, 16, 160–172. [Google Scholar] [CrossRef]
  125. Snook, B.; Luther, K.; Quinlan, H.; Milne, R. LET ‘EM TALK! A field study of police questioning practices of suspects and accused persons. Crim. Justice Behav. 2012, 39, 1328–1339. [Google Scholar] [CrossRef] [Green Version]
  126. Vrij, A.; Leal, S.; Mann, S.; Vernham, Z.; Brankaert, F. Translating theory into practice: Evaluating a cognitive lie detection training workshop. J. Appl. Res. Mem. Cogn. 2015, 4, 110–120. [Google Scholar] [CrossRef] [Green Version]
  127. May, L.; Raible, Y.; Gewehr, E.; Zimmermann, J.; Volbert, R. How often and why do guilty and innocent suspects confess, deny, or remain silent in police interviews? J. Police Crim. Psychol. 2022. [Google Scholar] [CrossRef]
  128. Moston, S.; Engelberg, T. The effects of evidence on the outcome of interviews with criminal suspects. Police Pract. Res. 2011, 12, 518–526. [Google Scholar] [CrossRef] [Green Version]
  129. Nahari, G.; Ashkenazi, T.; Fisher, R.P.; Granhag, P.A.; Hershkovitz, I.; Masip, J.; Meijer, E.; Nisin, Z.; Sarid, N.; Taylor, P.J.; et al. Language of Lies: Urgent issues and prospects in verbal lie detection research. Leg. Criminol. Psychol. 2019, 24, 1–23. [Google Scholar] [CrossRef]
  130. Vrij, A.; Granhag, P.A.; Leal, S.; Fisher, R.P.; Kleinman, S.M.; Ashkenazi, T. The present and future of verbal lie detection. In The Oxford Handbook of Psychology and Law; Matteo, D., Scherr, K.C., Eds.; Part of Oxford library of Psychology; Oxford University Press: Oxford, UK, 2023; pp. 565–581. ISBN 9780197649138. [Google Scholar]
  131. Lisofsky, N.; Kazzer, P.; Heekeren, H.R.; Prehn, K. Investigating socio-cognitive processes in deception: A quantitative meta-analysis of neuroimaging studies. Neuropsychologia 2014, 61, 113–122. [Google Scholar] [CrossRef] [PubMed]
  132. Delgado-Herrera, M.; Reyes-Aguilar, A.; Giordano, M. What deception tasks used in the lab really do: Systematic review and meta-analysis of ecological validity of fMRI deception tasks. Neuroscience 2021, 468, 88–109. [Google Scholar] [CrossRef]
  133. Farah, M.J.; Hutchinson, J.B.; Phelps, E.A.; Wagner, A.D. Functional MRI-based lie detection: Scientific and societal challenges. Nat. Rev. Neurosci. 2014, 15, 123–131. [Google Scholar] [CrossRef] [PubMed]
  134. Nijboer, M.; Borst, J.; van Rijn, H.; Taatgen, N. Single-task fMRI overlap predicts concurrent multitasking interference. Neuroimage 2014, 100, 60–74. [Google Scholar] [CrossRef]
  135. Ganis, G. Investigating Deception and Deception Detection with Brain Stimulation Methods. In Detecting Deception: Current Challenges and Cognitive Approaches; Granhag, P.A., Vrij, A., Verschuere, B., Eds.; Wiley: Chichester, UK, 2014; pp. 253–268. [Google Scholar] [CrossRef]
  136. Lowe, C.J.; Manocchio, F.; Safati, A.B.; Hall, P.A. The effects of theta burst stimulation (TBS) targeting the prefrontal cortex on executive functioning: A systematic review and meta-analysis. Neuropsychologia 2018, 111, 344–359. [Google Scholar] [CrossRef] [PubMed]
  137. Ganis, G.; Kosslyn, S.M.; Stose, S.; Thompson, W.L.; Yurgelun-Todd, D.A. Neural correlates of different types of deception: An fMRI investigation. Cereb. Cortex 2003, 13, 830–836. [Google Scholar] [CrossRef] [Green Version]
  138. Beaty, R.E.; Kenett, Y.N.; Christensen, A.P.; Rosenberg, M.D.; Benedek, M.; Chen, Q.; Fink, A.; Qiu, J.; Kwapil, T.R.; Kane, M.J.; et al. Robust prediction of individual creative ability from brain functional connectivity. Proc. Natl. Acad. Sci. USA 2018, 115, 1087–1092. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  139. Vanthornhout, J.; Decruy, L.; Francart, T. Effect of task and attention on neural tracking of speech. Front. Neurosci. 2019, 13, 977. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  140. Bonnici, H.M.; Maguire, E.A. Two years later—Revisiting autobiographical memory representations in vmPFC and hippocampus. Neuropsychologia 2018, 110, 159–169. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  141. Granhag, P.A.; Strömwall, L.A. Repeated interrogations–Stretching the deception detection paradigm. Expert Evid. 1999, 7, 163–174. [Google Scholar] [CrossRef]
  142. Vredeveldt, A.; van Koppen, P.J.; Granhag, P.A. The inconsistent suspect: A systematic review of different types of consistency in truth tellers and liars. In Investigative Interviewing; Bull, R., Ed.; Springer Science and Business Media: New York, NY, USA, 2014; pp. 183–207. [Google Scholar] [CrossRef]
  143. Kosslyn, S.M.; Ganis, G.; Thompson, W.L. Neural Foundations of Imagery. Nat. Rev. Neurosci. 2001, 2, 635–642. [Google Scholar] [CrossRef]
  144. Pearson, J.; Naselaris, T.; Holmes, E.A.; Kosslyn, S.M. Mental Imagery: Functional Mechanisms and Clinical Applications. Trends Cogn. Sci. 2015, 19, 590–602. [Google Scholar] [CrossRef] [Green Version]
  145. Binder, J.R.; Desai, R.H. The neurobiology of semantic memory. Trends Cogn. Sci. 2011, 15, 527–536. [Google Scholar] [CrossRef]
Table 1. A comparison between the CBCA, RM, and SCAN tools.

| | CBCA | RM | SCAN |
| --- | --- | --- | --- |
| Do the methods use clusters of variables? | Yes | Yes | Yes |
| Is the method standardised? | Yes | Yes | No |
| Is there empirical support for the method? | Yes, d = 0.78 in children and d = 0.55 in adults | Yes, d = 0.55 (adults only) | No |
| Is the method widely researched across labs? | Yes | Yes | Yes |
| Is the method tested in non-WEIRD cultures? | Probably not often | Probably not often | Probably not often |
| Is it used by practitioners? | Yes | No | Yes |
| Is the method easy to learn and use for practitioners? | No | Yes | No |
| Is an underlying rationale provided for why the method should work? | Yes. Lie tellers are unable and unwilling to report as many details as truth tellers | Yes. Truth tellers' memory differs from lie tellers' memory | No |
| Does the method involve an interview protocol to elicit differences between truth tellers and lie tellers? | No | No | No |

Note: WEIRD = Western, educated, industrialised, rich, and democratic.
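The effect sizes reported in Tables 1 and 2 (Cohen's d, Hedges' g) express the standardised mean difference between truth tellers' and lie tellers' scores on a verbal criterion. As a minimal sketch of what a value such as d = 0.78 means, the snippet below computes Cohen's d with a pooled standard deviation; the group means, SDs, and sample sizes are purely hypothetical illustration, not data from any study cited here.

```python
import math

def cohens_d(m1: float, s1: float, n1: int,
             m2: float, s2: float, n2: int) -> float:
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical example: truth tellers report on average 10 details
# (SD = 4, n = 30) and lie tellers 7 details (SD = 4, n = 30).
d = cohens_d(10, 4, 30, 7, 4, 30)
print(round(d, 2))  # 0.75, conventionally read as a medium-to-large effect
```

By convention, d around 0.2 is read as a small effect, 0.5 as medium, and 0.8 as large, which is why the tables treat values such as d = 0.55 and d = 0.78 as meaningful support.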
Table 2. A comparison between SUE, VA, CCA, and RI.

| | SUE | VA | CCA | RI |
| --- | --- | --- | --- | --- |
| Do the methods use clusters of variables? | No | Yes | Yes | Yes |
| Is the method standardised? | Yes, in terms of procedure | Yes, in terms of procedure and questions asked | Yes, in terms of procedure and questions asked | Yes, in terms of procedure and questions asked |
| Is there empirical support for the method? | d = 1.06 (statement–evidence consistency) | g = 0.42 (verifiable details); g = 0.80 (verifiable details after the Information Protocol is implemented) | d = 0.42 (comparing CCA conditions with control conditions); observers: 48% accuracy in standard interviews and 60% in CCA interviews (76% if observers are knowledgeable about cues) | 75% accuracy |
| Is the method widely researched across labs? | Dominated by Granhag, Hartwig, and colleagues | Dominated by Nahari, Vrij, and colleagues | Dominated by Vrij and colleagues | Dominated by Colwell and colleagues |
| Is independent evidence necessary? | Yes | Yes | No | No |
| Is the method tested in non-WEIRD cultures? | No | No | Yes, part of it | No |
| Is it used by practitioners? | Yes | Yes | Yes | Yes |
| Is the method easy to learn and use for practitioners? | No | No | No | No |
| Is an underlying rationale provided for why the method should work? | Truth tellers: be forthcoming; lie tellers: avoid reporting incriminating evidence | Truth tellers: be forthcoming; lie tellers: avoid reporting incriminating evidence | Truth tellers: be forthcoming; lie tellers: keep stories simple | Truth tellers: be forthcoming; lie tellers: keep stories simple |
| Can it be used all the time? | Only when evidence is available | Only when evidence is potentially available | Yes | Yes |
| When should it be used? | When evidence is available | When evidence is potentially available | When no evidence can be obtained | When no evidence can be obtained |

Note: WEIRD = Western, educated, industrialised, rich, and democratic.
Vrij, A.; Granhag, P.A.; Ashkenazi, T.; Ganis, G.; Leal, S.; Fisher, R.P. Verbal Lie Detection: Its Past, Present and Future. Brain Sci. 2022, 12, 1644.