Article

The Dual Functions of Adaptors

by
Renia Lopez-Ozieblo
Department of English and Communication, Faculty of Humanities, The Hong Kong Polytechnic University, Hong Kong, China
Languages 2025, 10(9), 231; https://doi.org/10.3390/languages10090231
Submission received: 30 June 2025 / Revised: 30 August 2025 / Accepted: 4 September 2025 / Published: 10 September 2025
(This article belongs to the Special Issue Non-representational Gestures: Types, Use, and Functions)

Abstract

Adaptors, self-touching movements that supposedly lack communicative significance, have often been overlooked by researchers focusing on co-speech gestures. A significant complication in their study arises from the somewhat ambiguous definition of adaptors. Examples of these movements include self-manipulations like scratching a leg or bringing a hand to the mouth or head, as well as fidgeting, nervous tics, and micro hand or finger movements. Research rooted in psychology indicates a link between adaptors and negative emotional states. However, psycholinguistic approaches suggest that these movements might be related to the communicative task. This study analyzes adaptors in forty Cantonese speakers of English as a second language in monologues and dialogues in face-to-face and online contexts, revealing that adaptors serve functions beyond emotional expression. Our data indicate that adaptors might have cognitive functions. We also identify micro-movements, flutter-like adaptors or “flutters” for short, that may have interactive functions conveying engagement. These findings challenge the traditional view of adaptors as purely non-communicative. Participants’ self-reports corroborate these interpretations, highlighting the complexity and individual variability in adaptor use. This study advocates for the inclusion of adaptors in gesture analysis, which may enrich understanding of gesture–speech integration and cognitive and emotional processes in communication.

1. Introduction

When speaking, either during interactions or monologues, most speakers will produce a range of hand and arm movements, some of which are obviously directly related to the speech, either to its semantic content—such as when drawing in the air the shape of an object—or to the discourse, for example when indicating the inevitability of an event by turning both palms up towards the interlocutor, maybe with a shrug. These are labeled speech-gestures, and include referential (related to the semantic content) and pragmatic gestures (related to the discourse) (Lopez-Ozieblo, 2020). However, most speakers also produce other movements that have traditionally been excluded from the speech-gesture category as they are thought to be unconscious responses to bodily needs or environmental demands, including coping with anxiety and stress, and devoid of meaning (Ekman & Friesen, 1969, 1972). These are commonly referred to as adaptors (or adapters in American English) and have been defined as “acts of contact and/or manipulation with a part of the speaker’s body, or with objects, or with other persons” (Maricchiolo et al., 2012, p. 416).
There are a number of assumptions in this definition that need to be explored further, specifically whether adaptors are “devoid of meaning” and whether all manipulations of a body part or objects are the same. Another aspect to explore is their assumed negative affective nature. Their relationship with emotions has mostly been studied from a psychological perspective that attaches a negative connotation to their production, an assumption that should also be questioned.
Gesture studies often leave out adaptors in their analysis of speakers’ gestures as these are not considered to be co-speech gestures (e.g., Aslan et al., 2024; Guo et al., 2024; Sahlender & ten Hagen, 2023). This study, on the other hand, focuses on them and explores whether all gestures labeled adaptors are the same, or whether some might be related to cognitive processing while others might have a different function more closely associated with the interaction. If we can confirm this hypothesis, we will have a strong case for including adaptors in gesture analysis, which may prompt a re-examination of previous studies with ambiguous results.

2. Literature Review

Speech-gestures, for the purposes of this study, are communicative movements of the fingers, hands, and arms that occur with speech. Initial categorizations of gesture followed Ekman and Friesen’s (1969) classification: emblems, codified gestures not needing speech to be understood (e.g., the thumbs up sign); illustrators, movements usually co-occurring with speech, used to describe or add emphasis (e.g., drawing a shape in the air or turning the palms up towards the interlocutor to indicate inevitability); regulators, gestures that control the flow of the communication (e.g., extending an open palm up towards an interlocutor to give them the turn); affect displays (e.g., tense hand movements); and adaptors. Many current categorizations simplify these and follow Kendon’s later categorization (for a detailed explanation of the various categories see Kendon, 2004 and McNeill, 2008). Recently, however, scholars have been advocating for a categorization based on the function of the gesture, either referential or pragmatic (Graziano & Gullberg, 2024). Referential gestures illustrate speech-related semantic concepts while pragmatic gestures include those with interactive, metadiscursive, and cognitive functions (for a detailed description, see Lopez-Ozieblo, 2020). However, adaptors continue to be a category apart from speech-gestures. These unconscious actions are thought to be devoid of overt communicative intent and cover self-manipulation movements like touching, rubbing, or scratching (Ekman & Friesen, 1969).

2.1. Methodological Considerations with Adaptors

One of the difficulties in studying adaptors is their ambiguous definition (Kendon, 2004) and nomenclature. Other terms for adaptors include “body-focused movements” (Freedman & Hoffman, 1967; Freedman, 1972), “self-touching movements” (Kimura, 1976; Harrigan, 1985), “manipulative gestures” (Edelmann & Hampson, 1979), “non-signaling gestures” (Waxer, 1977), “self-manipulators” (Rosenfeld, 1966), and “speech non-linked gestures (SNG)” (Maricchiolo et al., 2012). Generally, all of these terms refer to movements involving manipulation of one’s own body parts or external objects, which are considered peripheral or unrelated to the primary task or ongoing activity (Mehrabian & Friedman, 1986). Movements to achieve practical goals, such as removing one’s glasses or cleaning them, are not considered speech gestures or adaptors—these are instrumental actions that involve the task-oriented manipulation of objects (Harrigan et al., 1987).
Adaptors can be categorized based on their movement targets into self-adaptors and hetero-adaptors (Ekman & Friesen, 1969; Maricchiolo et al., 2012). Self-adaptors involve unconscious touching of one’s own body, typically stimulating the skin surface, often using the hands—for example, touching the hair or scratching an arm (Freedman, 1972). Hetero-adaptors involve contact with external entities and are further divided into object-addressed adaptors, which include handling inanimate objects like clothing or a hair tie, and person-addressed adaptors, involving touching another individual, such as placing a hand on someone’s arm (Ekman & Friesen, 1969; Maricchiolo et al., 2012). Freedman (1972) introduced an additional subclassification of adaptors: discrete and continuous body-touching movements. Discrete adaptors are brief, lasting less than three seconds, typically directed towards the face or head, while continuous adaptors involve longer, repetitive actions like scratching or rubbing hands, also referred to as fidgeting when appearing to be repetitive and restless movements such as rubbing the hands together or tugging at one’s sleeve cuffs (Mehrabian & Friedman, 1986).
One ambiguity lies in the iconicity of some discrete adaptors, which could be considered somewhat conventionalized, having culturally agreed-upon meanings, or somewhat iconic in interpretation, such as bringing a hand to the mouth or head while searching for a word or when thinking. It is not always clear in the literature whether these are categorized as adaptors, emblems—conventionalized gestures (Kendon, 2004)—or speech-related gestures of a thinking nature, also referred to as “thinking gestures” (Kosmala, 2024). Many of these could be interpreted as referential or even pragmatic (indicating turn-holding). While a hand to the head such as the one shown in Figure 1 could easily be interpreted as a thinking gesture when accompanied by “hmm, let me think”, there are other, more ambiguous ones that could also have the same functionality (see Figure 2).
The other ambiguity lies in the nature of continuous adaptors including fidgeting behaviors—which are thought to arise from physical unease or anxiety (Mahmoud et al., 2013)—as well as nervous tics. Fidgeting is characterized by sustained, spontaneous, involuntary actions that involve manipulating one’s body or nearby objects and occurs alongside another activity without being directly linked to it (Carriere et al., 2013; Mehrabian & Friedman, 1986; Seli et al., 2014). These actions tend to be repetitive, such as playing with jewelry or hair, or opening and closing the two index fingers with the other fingers interlaced (Figure 3).
A discrete adaptor usually has one distinct hand-to-other body part point of contact that is easy to isolate (Figure 1), while continuous adaptors or fidgeting will be a series of sustained movements with repeated points of contact that could last for a whole speaking turn and even a whole speaking session (Figure 3). Cienki (2025) observed these adaptors in interpreters and described them as “micro-movements of tension and relaxation of the fingers, sometimes as one hand was gripping the other. […] Most of these self-adapters [sic] were sustained in nature over varying lengths of time” (p. 39). The question is how to report these gestures. In studies focusing on gesture–speech unity, gesture production is most often reported in relation to the corresponding speech, usually as gestures per hundred words (e.g., Canarslan & Chu, 2024; Clough & Duff, 2020; Wray et al., 2017). However, when reporting continuous adaptors, frequency measures based on the number of words spoken might not convey their use as effectively as their overall duration.

2.2. Functions

Adaptors have been linked to negative affective states, serving as inadvertent expressions of internal emotional states, signaling emotional arousal (Ekman & Friesen, 1969, 1972). This perspective finds support in earlier clinical and psychoanalytic research by Boomer and Dittmann (1962) and Mahl (1968), who suggested that self-touching hand movements may unveil unconscious thoughts and emotions, such as stress or anxiety.
Adaptors of a fidgeting nature are frequently associated with attention-related phenomena such as vigilance, boredom, and cognitive load (Ricciardi et al., 2019). Fidgeting is thought to act as a compensatory behavior when a person becomes disengaged from a task (Mehrabian & Friedman, 1986; Seli et al., 2014), introducing variability through aimless movements during dull tasks (Ricciardi et al., 2019).
Aside from associating them with boredom, studies have delved into adaptors as reflections of other human psychological conditions, particularly stress and anxiety (Ekman & Friesen, 1969, 1972; Germana, 1969; LeCompte, 1981; Lin et al., 2021; Mahl, 1968; Scherer et al., 2013). Studies have highlighted a heightened occurrence of adaptors in anxiety-inducing environments (Germana, 1969; LeCompte, 1981). Lin et al. (2021) introduced a model integrating adaptor cues for the analysis of full-body interview videos alongside self-reported distress labels. Their study demonstrated the model’s effectiveness in forecasting depression and anxiety, emphasizing the potential significance of adaptors as early indicators of negative affect.
However, not all findings consistently support this relationship. Testing the reactions of 31 subjects to anxiety-inducing stimuli compared to neutral conditions, Heaven and McBrayer (2000) did not find differences in adaptors between the conditions. There were gender-related differences (men engaged in significantly more self-touching behaviors overall) and function-related ones (participants displayed more adaptors while answering questions than when passively listening).
Nicoladis et al. (2022) found individual differences in speakers’ gestures, identifying a relationship between the frequency of adaptors and representational gestures, suggesting that idiosyncratic factors affect adaptor production. Personality might also influence how an individual gestures, or how they are affected by stress. Sensitivity to stress, low self-confidence, and other negative emotions like anxiety, sadness, and irritability are measured under the neuroticism trait. Positive correlations between neuroticism levels and the production of adaptors have been reported in a number of psycholinguistic and psychological studies (Cuñado Yuste, 2017; Ekman & Friesen, 1972; Mehrabian & Friedman, 1986; Pang et al., 2022). However, Campbell and Rushton (1978) and Spille et al. (2022) did not find any correlation, while Pang et al. (2022) further noted that the correlation was with state anxiety (that related to the task) rather than with trait anxiety (related to personality). Lopez-Ozieblo (2024) did not find any correlations with adaptors either, although when sustained repetitive micro-movements were separated and their production reported in terms of their duration, the study found a significant although low-effect positive correlation between these—referred to as “flutter-like adaptors” (“flutters” for short)—and neuroticism. A significant negative correlation was also found between flutters and age. These contradictory results could be related to procedural issues: differences in tasks, what is counted as an adaptor, and how it is measured (per word/minute/occurrence).
Rather than signaling stress, another theory maintains that adaptors are strategic gestures helping speakers in “self-soothing and maintaining one’s mental focus” (Cienki, 2024, p. 231). Recent clinical studies have reported correlations between anxiety and adaptors, suggesting that adaptors might not signify anxiety but instead represent strategies employed by speakers to regulate stress, thus reducing negative emotions (Mohiyeddini & Semple, 2013; Mohiyeddini et al., 2015; Pang et al., 2022). Despite their focus on anxiety, Ekman and Friesen (1969) also proposed that adaptors in interactions may convey information to observers, potentially affecting ongoing communication dynamics. In addition to stress, semantic and syntactic structures in speech might also play a pivotal role in the production of adaptors (Freedman, 1977). Cienki (2024), studying interpreters’ adaptors divided into self- and other-object adaptors, reported almost half of the gestures observed to be adaptors (mostly self-adaptors), often in alternation with pragmatic gestures. Cienki describes this cycle as reflecting the inner and outer dialogue of the interpreters, with the large proportion of adaptors helping them to concentrate and ease the associated stress of the task. In a second study, Cienki (2025) also reported a majority of pragmatic gestures and adaptors in interpreters, further suggesting these aided in the presentation of ideas and overall speech production.
Barroso et al. (1980) and Harrigan (1985) also noted that adaptors can function as substitutes for communication difficulties arising from interference in attention focus, thought organization, or verbal expression. Freedman (1972) had previously reported that individuals grappling with verbal expression tended to exhibit continuous hand-to-hand activities. Consequently, it would seem that adaptors might also serve cognitive functions. Yet, a limited number of studies have explored this relationship.
Other studies suggest that the perception of adaptors, and their production, varies with factors such as social skills, conversational context, and cultural differences (Koda & Mori, 2014; Ishioh & Koda, 2016), presumably also by task (e.g., monologues vs. dialogues). At a communicative level, Chartrand and Bargh (1999) observed that unconscious imitation of interlocutors’ adaptors can enhance interaction smoothness and connectedness. A neurological study by Chui et al. (2018) offers further support: comparing the N400 signatures of adaptors, iconic gestures, and emblems, they found that the processing of adaptors is not effortless. They concluded that adaptors are processed by interlocutors regardless of their form and meaning, therefore they are likely to have some communicative function and are intended to align with the speech, contradicting Genova (1974), who posited that interlocutors pay little attention to adaptors as these are not speech-related gestures.
At the other end of the “boredom continuum”, Barroso et al. (1980) noted that self-touching movements were more likely to occur in distracting environments and situations with high attentional demands, including topic shifts, suggesting they might serve as a cognitive strategy to maintain focus. Harrigan (1985) delved deeper into this perspective by examining self-stimulation movements during medical interviews and observed a link between adaptors and conversational dynamics. Adaptors were found to increase during speech disfluencies and turn-taking instances, indicating their role in sustaining focus and signaling readiness to interact.
Żywiczyński et al. (2017), in a study analyzing conversations between dyads, corroborated Harrigan’s findings, observing an increase in adaptors during transitions. G. Li (2023) also identified that more adaptors are produced close to turn transitional borders in speech (clause beginning and end). Adaptors seem to intensify during changes from inactivity to activity in response to new stimuli and shifts in narrative content, hinting that adaptors may play a role not only to signal turn-taking, but also changes within the narrative (Germana, 1969), possibly serving as indicators of brain activation and readiness to provide information.
More recent research by Skogmyr Marian and Pekarek Doehler (2022) has confirmed the prevalence of adaptors during solitary word searches, with specific attention to self-touching movements such as face scratching and finger tapping. These “thinking gestures” are suggested to visually represent the cognitive effort expended during word retrieval, aiding speakers in maintaining control during search sequences and mitigating interference from co-participants. While Stam (2001) identified that participants often used iconic gestures during disfluencies to indicate they were thinking, Kosmala (2024) and Skogmyr Marian and Pekarek Doehler (2022) reported these as adaptors, particularly discrete face-touching actions. In both instances, we believe these gestures to be similar to that shown in Figure 1, signaling to the interlocutor “I’m thinking”. These often occurred alongside gaze aversion, which itself indicates the retrieval of information from memory (Morency et al., 2006).
The literature reveals that adaptors, often treated as signs of negative emotions, might occupy a more complex role in human interaction. While early research primarily linked adaptors to affective states such as stress and anxiety, more recent studies suggest they also serve important cognitive and communicative functions. Moreover, neurological and observational evidence indicates that adaptors are perceived and processed by interlocutors, contributing to the dynamics of conversation and interactional alignment.
However, inconsistencies in definitions, measurement approaches, and contextual factors continue to challenge a unified understanding of adaptors. The present study seeks to address some of these issues. Based on previous observations (Lopez-Ozieblo, 2024) and drawing on Cienki’s (2025) identification of micro-gestures, we separated flutter-like adaptors (hereafter “flutters”) from other adaptors (hereafter “other adaptors”) to acknowledge their distinct characteristics. Flutters are defined here as brief, subtle finger or hand micro-movements that are often ambiguous in function but clearly synchronized with the prosody of speech. In contrast, other adaptors include continuous, repetitive actions like scratching or rubbing, as well as discrete movements such as pushing glasses up a nose or bringing a hand to the mouth (Ekman & Friesen, 1969; Freedman, 1972). Flutters appear to occupy a different role from that of other adaptors, potentially linked to speech rhythm, rather than merely serving as unconscious self-regulatory behaviors as reported in past studies of adaptors. Flutters were measured in terms of their duration, while other adaptors were measured in terms of their frequency per hundred words. The hypothesis was that if they address different functions, we would see differences in the production of flutters and other adaptors by task and context.
To examine the potential relationship between flutters and other adaptors with stress, we manipulated the experimental conditions by employing two distinct tasks and assessed gestures at two separate time points. As context-related studies comparing online vs. face-to-face (F2F) interactions suggest a decrease in anxiety online (Caplan, 2007; Pierce, 2009), we hypothesize that if flutters and other adaptors are related to stress, there would be fewer gestures in online interactions than in F2F ones. As familiarity is thought to reduce task stress (Skehan et al., 2012), we hypothesize that there would be fewer flutters and other adaptors in subsequent sessions as participants become familiar with the task protocol.
If, on the other hand, flutters and other adaptors are related to the interlocutor and affected by the dynamics of the interaction, we also hypothesize that there might be a difference in gesture production in monologues vs. dialogues. Our hypothesis was that there would be more flutters and other adaptors in dialogues as there was an interactive element, not present in monologues, where flutters and other adaptors would be used to mark turn-taking as well as turn-holding (thinking gestures) (Barroso et al., 1980; Harrigan, 1985; Żywiczyński et al., 2017). In addition, participants had noted in the post-treatment interviews that they found the dialogues more stressful than the monologues, which might contribute to gesture production. If these gestures are also used to mark transitional boundaries in speech (G. Li, 2023), we also hypothesized that, in dialogues, there would be more flutters and other adaptors during speaking rather than listening roles. If confirmed, this would corroborate the interactional dynamic function of these gestures (rather than answering bodily needs, which would presumably occur when someone is speaking as well as listening).
Therefore, our research questions on flutter and other adaptors production were (1) whether the context, online vs. face-to-face, affected their production; (2) whether production differed between monologues and dialogues; (3) whether a reduction in stress with task familiarity reduced their production; and (4) whether production differed with speaking and listening roles in dialogues.

3. Materials and Methods

To answer these questions, we extracted data from an existing corpus of adaptors produced by Cantonese speakers of English as a second language. Participants were involved in up to 6 sessions over 3 years, although not all participants attended all sessions. This study is based on the quantitative data collected from 40 of the participants divided into three different data sets, which were selected to answer the research questions.

3.1. Participants

Participants answered a call to participate in a three-year-long study. Some were able to participate face-to-face, while others started online due to the COVID-19 pandemic. For this study, we randomly selected 10 online pairs and all of the F2F pairs (9 in total) who had completed the second session. One participant in one of the F2F pairs was found not to be a Cantonese mother tongue (L1) speaker, and this pair was replaced by another who had participated in session 1. A second pair from session 1 was added to make ten (the tasks were similar). The 40 participants also narrated a cartoon individually; however, technical issues were identified in the data of one of the online participants, and this was excluded from the analysis.
All participants reported being Cantonese L1 speakers who had started to learn English at kindergarten or shortly after. Some had attended English-medium-of-instruction schools, and others Cantonese-medium ones. They were all either university students or recent graduates from higher institutions in Hong Kong. Proficiency was evaluated by three independent certified proficiency evaluators, all with Hong Kong experience in oral proficiency testing of English as a second language.
The participants were told that the study explored their communicative behaviors, but no mention of gestures was made until after the last session, when debriefing interviews were carried out to explain the purpose of the study. The study was approved by the Institutional Review Board of the Hong Kong Polytechnic University (project numbers HSEARS20170227008 (1 to 5) and HSEARS20250128006). Participants’ consent was obtained at the beginning of the project. The participants were paid for their contribution.

3.2. Procedure

Key to this study was being able to see the participants’ hands, which was not always easy in online contexts. To solve this issue, the participants were asked to sit on their beds cross-legged about 1.5 m from the camera to capture most of their bodies. In F2F sessions, the participants sat side-by-side on yoga mats, also cross-legged, facing the researcher (see Figure 4). We know of at least one other study where participants were sitting on the floor (H. Li, 2025). Although this is not a common approach, the participants seemed to get used to this position. If it impacted gesture production, it should have had the same effect across conditions. Although the F2F participants wore masks, it is unlikely that this substantially influenced their gesture production patterns relative to online behavior: given the prolonged and widespread use of masks prior to and during the years of the study, the participants were highly accustomed to mask-wearing. We believe that any gestural patterns established while wearing masks would have been generalized to unmasked interactions, as the motor routines involved in gesture production would largely remain consistent (however, this is untested).
The sessions were led by the author with a second researcher listening via a Zoom recording (even when F2F) but with the video turned off. The sessions followed the protocol of the Cambridge B2 Oral proficiency test. They began with two participants together, who were asked some warm-up questions and then had to describe a photograph and discuss a topic for 5 min. In the discussion, participants had to consider a question (e.g., How important are these things for keeping fit and healthy?) and rank in order of importance a series of suggestions (e.g., sleeping 8 h every day; eating at regular times). This data was analyzed under the condition of “dialogues”. After the discussion, the pairs were split up and each individual was asked to watch half an episode of a Tweety and Sylvester TV cartoon. The participants were told they could watch the cartoon as many times as necessary in order to be able to narrate it in English in as much detail as possible. They were not allowed to take or use notes when narrating the story. This data was analyzed under the condition of “monologues”.
Oral proficiency was evaluated based on the monologues, following a multi-level proficiency scale, based on the Common European Framework of Reference for Languages (Council of Europe, 2001) and the Cambridge scales, that allowed evaluators to score from A1 to C2 levels using a 1 to 120 scale. Parity checks were carried out on 10% of the data using the Many-Facet Rasch Model (MFRM), an extension of the Rasch model (Myford & Wolfe, 2003). Facet calculations for inter-rater and intra-rater consistencies showed these to be above 85%. The average proficiency level was 85.9, on a scale of 1 to 120, which corresponds to a low C1 level, with the proficiency range varying between 69 (low B2) and 107 (low C2).

3.3. Analysis

All of the sessions were video recorded, the speech was transcribed, and the gestures were annotated and coded as flutters, other adaptors, or other co-speech gestures using PRAAT and ELAN (Sloetjes, 2017). This study reports only on the so-called non-speech gestures. Other adaptors accounted for about 10% of all of the gestures produced by the participants (excluding flutters), and flutter duration accounted for an additional 15% of the time spent gesturing (see Table 1). The data was transcribed and analyzed by one rater and then a second one (the author) checked all of the speech transcriptions (no disagreements found) and 80% of the gestures, disagreeing with an average of 5% of gesture transcriptions. Most of these were related to a handful of individuals and were resolved after discussions between the two raters, or in discussions between the second rater and a third one.
As indicated in the literature review, the ambiguous nature of adaptors has made their analysis difficult. To make our study replicable, we specify below the movements annotated as flutters and other adaptors.
Other adaptors (regardless of their timing):
  • Pushing glasses up (most common in our corpus, as the majority of our speakers wore glasses) (Figure 5a).
  • Brushing away or smoothing hair (this can be a longer gesture if slow) (Figure 5b).
  • Touching ear/nose/face mask (Figure 4).
  • Hand to neck/chin/forehead as if thinking (the hand can remain in place for longer than 3 s) (Figure 1).
  • Hand to wrist/elbow usually with a change in posture (Figure 5c).
  • Pulling sleeve/trousers cuffs (Figure 5d).
  • Distinct continuous gestures, potentially to answer environmental or bodily needs, such as scratching or rubbing hands together or a hand against another part of the body/clothing (Figure 2).
  • Handling items not related to speech, like fidgeting with a tissue/back of a mat/back of a shoe/cushion (Figure 5e).
Flutters: We differentiated these from the distinct scratching or rubbing and object-fidgeting movements mentioned above and called these “flutters” (as the movement of the fingers often resembles the wings of a fluttering butterfly). We categorized as flutters sustained, repeated micro-movements of one or two fingers, usually when the hands are together, that seem to correspond to the prosody of the speech (Figure 3). These are also visible during listening times when the interlocutor is talking. They often occur with head movements, suggesting they might be speech related.
We suspect flutters to be of a different nature than the movements under other adaptors. Another reason for treating them as a separate category is that their sustained, repeated nature makes it impractical to report them as distinct occurrences per word; instead, it seems more appropriate to report their duration as a function of speech duration.
The frequency of other adaptors was measured per 100 words (including interrupted and repeated words as well as fillers). Flutters were measured in terms of their duration against the duration of speech to capture their sustained nature. Gestures produced during speaking were measured according to the data associated with the speaker (number of words; duration of speech), while listening gestures were measured using data associated with the interlocutor (the speaker at the time).
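As a minimal sketch of the two measures described above (with hypothetical counts and durations; the function names are ours, not the study's), the rates can be computed as:

```python
def adaptor_rate(n_adaptors: int, n_words: int) -> float:
    """Other adaptors per 100 spoken words (word count includes
    interrupted and repeated words as well as fillers)."""
    return 100 * n_adaptors / n_words

def flutter_rate(flutter_seconds: float, speech_seconds: float) -> float:
    """Flutter duration as a proportion of speech duration,
    capturing the sustained nature of these micro-movements."""
    return flutter_seconds / speech_seconds

# Hypothetical speaker: 7 adaptors over 250 words,
# and 18 s of fluttering during 120 s of speech
print(adaptor_rate(7, 250))   # 2.8 adaptors per 100 words
print(flutter_rate(18, 120))  # 0.15
```

For listening gestures, the same functions would be applied using the interlocutor's word count and speech duration, as described above.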
Various R packages were used to analyze and report on the findings. As most of the distributions were found not to be normal, non-parametric tests were carried out to test differences between independent pairs, using Mann–Whitney or Wilcoxon signed-rank statistical tests for dependent pairs. When multiple tests were made, a Holm–Bonferroni correction was implemented.
Participant 5 was excluded from paired comparisons that included data from the online monologues. Noting that more women participated in the dialogues (27) and monologues (26) than men (13), we first checked the data for possible differences by sex. A Mann–Whitney U test was performed, and none of the variables tested—flutters and adaptors in monologues or dialogues—was found to differ significantly by sex (see Table A1 in Appendix A). Proficiency was also compared between the online and F2F conditions, as was speech rate as another indication of proficiency; neither differed significantly between conditions (see Table A1 in Appendix A).
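The Holm–Bonferroni step-down procedure applied to the multiple comparisons can be sketched as follows (a generic illustration in Python, not the R code used in the study):

```python
def holm_bonferroni(p_values):
    """Holm–Bonferroni step-down adjustment: the i-th smallest p-value
    (0-indexed) is multiplied by (m - i), capped at 1, and adjusted
    values are kept monotone non-decreasing."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for step, i in enumerate(order):
        adj = min(1.0, (m - step) * p_values[i])
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

# Three hypothetical p-values from one family of tests:
# 0.01 is tripled, 0.03 is doubled, and 0.04 inherits the larger
# adjusted value of 0.06 through the monotonicity step.
print(holm_bonferroni([0.01, 0.04, 0.03]))
```

The step-down scheme is less conservative than a plain Bonferroni correction while still controlling the family-wise error rate, which is why it is the usual choice when several related comparisons are run, as here.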

4. Results

The results are described below (see Table A1 in Appendix A) and illustrated by task and context in Figure 6. To answer the first research question—whether the context, online vs. face-to-face, affected the production of flutters and other adaptors—the data was grouped by task (monologue vs. dialogue) and then compared. To answer the fourth research question, the dialogue data was also compared by speaking and listening roles. Holm–Bonferroni corrections were applied to account for the multiple testing.

4.1. Context Differences in Monologues: F2F vs. Online

Mann–Whitney U tests compared behaviors between monologues F2F (N = 20) and online (N = 19). The participants produced more other adaptors per 100 words in F2F monologues (M = 3.40, SD = 2.19) than in online monologues (M = 2.25, SD = 1.86). Although this difference was not significant (U = 123, p = 0.061), the effect size suggested a small-to-medium positive association (rank-biserial r = 0.301, 95% CI [0.03, 0.63]).
Flutter rates during F2F monologues (M = 0.16, SD = 0.18) were higher than in online monologues (M = 0.10, SD = 0.11), but this difference was not statistically significant (U = 141, p = 0.173), with a small effect size (r = 0.22, 95% CI [0.0091, 0.51]).

4.2. Context Differences in Dialogues: F2F vs. Online

Mann–Whitney U tests also compared F2F (N = 20) and online (N = 20) dialogues. No significant difference was found in other adaptors when speaking between F2F (M = 2.07, SD = 1.99) and online (M = 1.19, SD = 1.10, U = 140, p = 0.107). However, other adaptors when listening were significantly higher in F2F (M = 1.90, SD = 1.86) than online (M = 0.65, SD = 0.74, U = 105, p = 0.010, adjusted p = 0.03, r = 0.26, 95% CI [0.02, 0.52]). For flutter duration, online dialogues showed significantly higher flutter rates during speaking (M = 0.23, SD = 0.16) than F2F dialogues (M = 0.12, SD = 0.12, U = 304, p = 0.004, adjusted p = 0.01, r = 0.44, 95% CI [0.15, 0.71]). A similar pattern emerged for flutter rates during listening, although once the corrections were applied, the difference was no longer significant (online: M = 0.42, SD = 0.27; F2F: M = 0.24, SD = 0.17, U = 275, p = 0.043, adjusted p = 0.09, r = 0.32, 95% CI [0.04, 0.6]).

4.3. Task Differences: Dialogues vs. Monologues

To answer the second research question, whether there were task-related differences, Wilcoxon signed-rank tests compared nonverbal behaviors between dialogues (N = 39) and monologues (N = 39) in the same pairs; data from one individual had to be excluded. The results indicated that the participants produced significantly fewer other adaptors per 100 words while speaking in dialogues (M = 1.67, SD = 1.65) than in monologues (M = 2.84, SD = 2.10, W = 198, p = 0.007, adjusted p = 0.037), with a moderate negative rank-biserial correlation (r = 0.43, 95% CI [−0.02, 0.5918]). Conversely, the duration of flutters over speaking time was longer in dialogues (M = 0.16, SD = 0.12) compared to monologues (M = 0.13, SD = 0.15), but this difference was not found to be significant.
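For the paired comparisons reported in this and the following subsections, the Wilcoxon signed-rank statistic sums the ranks of the positive paired differences. A minimal illustration with hypothetical per-speaker rates (our own sketch, not the study's R implementation):

```python
def wilcoxon_w(xs, ys):
    """Wilcoxon signed-rank statistic W+: rank the absolute paired
    differences (zeros dropped, tied magnitudes mid-ranked) and
    sum the ranks belonging to positive differences."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]  # drop zero differences
    by_magnitude = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(by_magnitude):
        j = i
        # group ties: consecutive entries with the same absolute difference
        while j < len(by_magnitude) and abs(diffs[by_magnitude[j]]) == abs(diffs[by_magnitude[i]]):
            j += 1
        mid_rank = (i + j + 1) / 2  # average of the 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[by_magnitude[k]] = mid_rank
        i = j
    return sum(r for r, d in zip(ranks, diffs) if d > 0)

# Hypothetical adaptor rates for four speakers: dialogue vs. monologue
print(wilcoxon_w([2.5, 3.1, 1.0, 4.2], [1.5, 3.0, 1.2, 2.2]))  # 8.0
```

The statistic is then compared against its null distribution (or a normal approximation for larger samples) to obtain the p-values reported here.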

4.4. Interaction of Medium and Task

To account for the potential effect of the context, the task data was subdivided into online and F2F conditions and Wilcoxon signed-rank tests were performed again. For online tasks, flutter rates in monologues (N = 19) were lower (M = 0.10, SD = 0.11) than in dialogues (N = 19; M = 0.20, SD = 0.10, W = 166, p = 0.004, adjusted p = 0.027, r = 0.65, 95% CI [−1, −0.27]). No difference in other adaptors was observed (adjusted p = 0.29). In F2F settings, monologues (N = 20) elicited more other adaptors (M = 3.40, SD = 2.19) than dialogues (N = 20; M = 2.07, SD = 1.99, W = 50, p = 0.040, r = 0.45, 95% CI [0.00, 0.08]), but when the Holm–Bonferroni correction was applied, the p-value rose to 0.17. No difference in flutter duration emerged, either (adjusted p = 0.467).

4.5. Task Familiarity

To answer the third research question, whether task familiarity affected other adaptor and flutter production, a Wilcoxon signed-rank test was carried out comparing data from two sessions of online monologues (N = 19 each). Other adaptors in the monologues decreased from session 1 (M = 2.25, SD = 1.86) to session 2 (M = 1.86, SD = 2.50), though not significantly (V = 126, p = 0.082, r = −0.42, 95% CI [−0.79, 0.00]). Flutter rates increased in session 2 (M = 0.19, SD = 0.28) compared to session 1 (M = 0.10, SD = 0.11), but this trend was not significant, either (V = 54, p = 0.10, r = 0.368, 95% CI [−0.053, 0.684]).

5. Discussion

This study reveals a fundamental distinction between flutters and other adaptors that challenges conventional classifications of these so-called non-speech gestures. Where previous research often conflated these movements under broad categories like adaptors (Cienki, 2024, 2025) or fidgeting (Frances, 1979), the current findings demonstrate that they serve markedly different functions in communication, with flutters being potentially closer to discursive speech-gestures than other adaptors. Our data suggest flutters represent a distinct category of movement—perhaps semantic or discursive gestures that have been partially inhibited. This interpretation is based on the different patterns observed across tasks and communicative contexts (online vs. face-to-face). Where other adaptors showed stability across speaking roles, suggesting a primarily cognitive role, flutters increased markedly during online dialogues, suggesting they may reflect the strain of such interactions. The micro-nature of flutters makes their precise form difficult to categorize—a finger movement might represent an aborted representational gesture or a prosodic beat—but their consistent patterning across conditions calls previous assumptions into question.
The online environment proved especially revealing for understanding flutters and other adaptors. Contrary to expectations derived from computer-mediated communication research suggesting reduced anxiety online (Caplan, 2007; Pierce, 2009)—confirmed by our participants during the interviews—the participants produced significantly longer flutter durations during online dialogues compared to face-to-face interactions, while other adaptor rates remained consistent across modalities. However, recent studies comparing anxiety in online and F2F interactions in a variety of scenarios have not found significant differences between the two conditions (Maeda, 2023; Rafieifar et al., 2024). It is possible that as we become used to online interactions, the psychological difference of the context diminishes—as reported by the more recent studies—in which case, there should be no difference in anxiety-related gesture production online vs. F2F. Indeed, anxiety does not seem to be the main factor triggering flutter and other adaptor production. Our results point to different triggers leading to their production and potentially different functionalities.
Session familiarity effects showed other adaptors decreasing with task familiarity while flutters increased, confirming that anxiety might not be directly related to their production. Although neither pattern was significant, the results suggest that flutters and other adaptors might be of a different nature. That flutters/other adaptors signal anxiety (Germana, 1969; LeCompte, 1981; Scherer et al., 2013) is not supported by the null effect obtained for task anxiety across sessions, confirming Heaven and McBrayer’s (2000) similar null findings under different anxiety conditions. The alternative is that other adaptors reflect task-specific cognitive demands that diminish with practice, while flutters might reflect motivational arousal and overall engagement with the interlocutor, for example by mimicking gesture patterns in the interlocutor (Chartrand & Bargh, 1999), rather than boredom (Seli et al., 2014; Ricciardi et al., 2019).
The comparison between monologues and dialogues further clarifies this potential functional divide. Contrary to initial hypotheses, monologues elicited more other adaptors than dialogues overall, a pattern particularly pronounced in face-to-face settings. The monologues in this study were based on existing information that participants had to “interpret” from a visual input into a textual one, while during the dialogues they were developing the idea as well as communicating it. We argue that in our monologues, speakers had to convey someone else’s ideas: rather than generating new ideas, they had to reconstitute them from the previous visual input with little aid from the interlocutor. Dialogues, on the other hand, are developed jointly between the speaker and the interlocutor, who provides regular feedback through verbal and non-verbal backchannels, easing the cognitive load of the speaker (Cienki, 2025).
Theoretical implications emerge clearly from these patterns. Flutters appear to be more sensitive to communicative challenges related to interactions, serving as indicators of interactive difficulty or attention maintenance depending on context. Other adaptors, by contrast, seem to serve primarily cognitive self-regulatory functions, particularly during demanding monologic tasks requiring information reconstitution, while also playing a secondary role in interactional synchrony during face-to-face dialogues (as they are present during listening roles). This dual-function framework helps reconcile seemingly contradictory findings by acknowledging that these behaviors can serve different purposes depending on situational demands. If some adaptors are related to cognitive functions, like other speech-gestures, this would also help explain their relationship with representational gestures (Nicoladis et al., 2022).
Thus, the existing classification of adaptors as non-speech gestures might need to be revised. In a functional categorization, many of these gestures might fall under the pragmatic category (Lopez-Ozieblo, 2020) with “interactive” (managing the floor) or “metadiscursive” functions (reflecting how the information is being reconstituted). Some adaptors might be providing information about the speaker’s emotional state, their stance, or modal meaning, indicating their affective state or judgment in relation to what is being said. These are categorized as pragmatic gestures under the “cognitive” category, which refers to gestures that highlight the relationship between the context and the speaker’s attitude or perspective. Future research should look at the emotional valence of adaptors and their value to the interlocutor. Additional research is also necessary to identify specific semi-conventionalized gestures (such as that in Figure 1 to indicate “I am thinking”), which might need to be reclassified as emblems or iconic representational gestures, depending on the context and speech.
From a practical perspective, since participants in both the face-to-face and online conditions could observe each other’s hand movements, the question arises as to why differences in flutter gestures emerged between the two conditions. The rate of other adaptors was similar in both, supporting their internal cognitive function rather than a communicative one, but the duration of flutters was longer online than F2F during both speaking and listening.
This visibility paradox suggests that psychological immediacy (Mehrabian, 1966) might matter just as much as physical visibility for certain gesture types, particularly those serving interactive functions, such as providing backchannel cues. One possibility is that flutters may serve as compensatory behaviors for the increased interactional strain characteristic of digital communication, where reduced feedback signals and latency issues disrupt normal turn-taking rhythms (see Lopez-Ozieblo & Kosmala, 2025). Flutters’ dual appearance as both signs of communicative interaction (in online dialogues) and focus maintenance (in interpreters’ micro-movements; Cienki, 2025) may represent endpoints on a continuum from compensatory behaviors during disengagement (Seli et al., 2014) to energy channeling during intense concentration (Barroso et al., 1980).
Although not tested here, individual differences provide additional insight into these mechanisms. The differing nature of flutters and other adaptors might explain contradictory results in neuroticism adaptor studies (e.g., Campbell & Rushton, 1978; Cuñado Yuste, 2017). The modest but significant correlation between neuroticism and flutter duration found by Lopez-Ozieblo (2024) might be explained by the behavior of anxious individuals who suppress their gestures to hide their anxiousness but produce micro-movements as cognitive load accumulates. The observed age-related decrease in flutters may reflect either the well-documented decline in neuroticism across adulthood (McCrae et al., 1999) or an increase in gesture control developed through professional socialization.
The dual-function model proposed here—positioning other adaptors as cognitive regulators and flutters as interactive/attentional signals—has broad applicability across fields such as communication training, assessment, and gesture studies. In communication training, recognizing flutters as potential indicators of interaction difficulty could enhance remote communication protocols, while awareness of other adaptors may aid speech preparation strategies in second language teaching or clinical scenarios. In assessment scenarios, flutter patterns could supplement engagement metrics in education and telehealth. Both flutters and other adaptors might be valuable cues for avatars in virtual reality (VR) scenarios to “read” the player’s state of mind (e.g., engaged, frustrated) and provide directions accordingly. In gesture studies, the model provides an alternative to duration-based categories (Freedman, 1972), focusing instead on a functional classification that accounts for synchrony (e.g., prosody-aligned micro-movements) and context.
Several limitations ought to be considered when interpreting these results. To begin with, our participants were second language speakers who might not display the same gestural behavior as fully fluent speakers. In these speakers, the production of certain gestures is likely increased by the additional cognitive load of having to communicate in a second language. This should have benefited our study by making patterns more marked.
Other limitations should also be noted. The modest sample sizes for some comparisons, particularly in the session analyses and modality-by-task examinations, may have obscured smaller effects; we have erred on the side of caution by applying corrections whenever possible. The monologue and dialogue tasks differed substantially in cognitive demands, making direct comparisons challenging. Some of our participants were familiar with each other, and this might have affected their interactions. Future research should incorporate physiological measures, which we were not able to collect, to better distinguish arousal from anxiety effects.

6. Conclusions

This study fundamentally repositions flutters and other adaptors as complementary rather than overlapping phenomena—flutters negotiating interactive demands and attention maintenance, and other adaptors anchoring the speaker cognitively during demanding verbal tasks. By demonstrating their distinct response patterns across contexts, modalities, and individual differences, we offer a refined theoretical lens for understanding how these subtle yet significant behaviors mediate our cognitive and interactive experiences in human communication.
These findings carry important implications for both theory and practice. Theoretically, they argue for moving beyond duration-based gesture classifications (Freedman, 1972) toward functional models that account for contextual and task differences. The three-second rule for classifying adaptors proves particularly inadequate for capturing the continuum from discrete “thinking gestures” to continuous, prosody-aligned movements and flutters. Practically, recognizing flutters as potential indicators of interaction difficulty could inform communication training, particularly in online contexts.
Patterns of other adaptors might similarly help assess cognitive load in educational or clinical settings. The dissociation between these behaviors also suggests the need for more nuanced coding systems in interaction research that account for their distinct functional roles.
Further studies (underway) need to delve into the differences between listening and speaking adaptors; into the potential role of “thinking gestures” as iconic or emblems rather than adaptor gestures; and into the other noted functions of flutters/other adaptors to mark segment boundaries and turn taking.
This study does not aim to provide an absolute categorization of every self-touching movement as either cognitive or interactive and exclude movements related to bodily or environmental needs or signals of stress or anxiety. Rather, it seeks to highlight categories that can be meaningfully linked to the communicative act, shedding light on their potential functions.

Funding

The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (project no. PolyU 15600320) and by the Faculty of Humanities of the Hong Kong Polytechnic University, Faculty Reserve funding project no. A0048603.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of The Hong Kong Polytechnic University (protocol code HSEARS20170227008-1; -2; -3; -4; -5, and HSEARS20250128006, approved on 22-Mar-2018, 03-Sep-2019, 10-Dec-2020; 24-Jun-2022; 24-Jul-2023, and 28-Jan-2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available in OSF at https://osf.io/q9cx5/files/osfstorage.

Acknowledgments

This study would not have been possible without the help of the participants in the various studies conducted. Thank you, your contribution has been invaluable. Thank you to our transcribers, technical team, and annotators, Gladys Dung and James Britton. Thank you also to all the student helpers who assisted in this project: Jessy, H.C. and Chan Hoi Ying, and to Loulou Kosmala for her input based on our work on interactive gestures.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1. Summary of results.

Differences by sex (Mann–Whitney U tests).

| Measure | Group | N | Mean | SD | Median | IQR_Lower | IQR_Upper | Mann–Whitney U | p-value | Effect size | CI_Lower | CI_Upper |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flutters in dialogues (speaking role) | Women | 27 | 0.171 | 0.149 | 0.137 | 0.081 | 0.218 | 181 | 0.887 | 0.0251 | 0.0054 | 0.37 |
| | Men | 13 | 0.177 | 0.160 | 0.103 | 0.060 | 0.265 | | | | | |
| Other adaptors in dialogues (speaking role) | Women | 27 | 1.796 | 1.926 | 1.225 | 0.449 | 2.571 | 185 | 0.795 | 0.0434 | 0.0045 | 0.34 |
| | Men | 13 | 1.280 | 0.757 | 1.075 | 0.621 | 1.966 | | | | | |
| Flutters in monologues | Women | 26 | 0.138 | 0.176 | 0.073 | 0.014 | 0.179 | 148.5 | 0.551 | 0.0978 | 0.0047 | 0.42 |
| | Men | 13 | 0.120 | 0.089 | 0.106 | 0.044 | 0.201 | | | | | |
| Other adaptors in monologues | Women | 26 | 2.985 | 2.324 | 2.667 | 1.167 | 4.154 | 176 | 0.848 | 0.0330 | 0.005 | 0.35 |
| | Men | 13 | 2.547 | 1.586 | 2.857 | 1.768 | 3.476 | | | | | |

Proficiency and speech rate by context (Mann–Whitney U tests).

| Measure | Context | N | Mean | SD | Median | IQR_Lower | IQR_Upper | Mann–Whitney U | p-value | Effect size | CI_Lower | CI_Upper |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Proficiency | Online | 20 | 85.927 | 8.213 | 85.600 | 78.533 | 90.000 | 201.5 | 0.978 | 0.00641639 | 0.0044 | 0.35 |
| | F2F | 20 | 86.027 | 10.060 | 83.033 | 79.317 | 92.100 | | | | | |
| Speech rate in dialogues | Online | 20 | 137.716 | 25.363 | 139.162 | 130.671 | 151.843 | 229 | 0.445 | 0.124 | 0.006 | 0.44 |
| | F2F | 20 | 136.647 | 28.633 | 127.304 | 120.381 | 152.549 | | | | | |
| Speech rate in monologues | Online | 19 | 133.072 | 20.987 | 132.171 | 127.369 | 147.670 | 193 | 0.945 | 0.013 | 0.005 | 0.37 |
| | F2F | 20 | 136.521 | 29.684 | 132.503 | 115.822 | 146.476 | | | | | |

Online monologues by session (Wilcoxon signed-rank tests).

| Measure | Session | N | Mean | SD | Median | IQR_Lower | IQR_Upper | Wilcoxon signed-rank | p-value | Effect size | CI_Lower | CI_Upper |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flutters | Session 1 | 19 | 0.099 | 0.114 | 0.072 | 0.020 | 0.132 | 54 | 0.103 | 0.368 | −0.053 | 0.684 |
| | Session 2 | 19 | 0.193 | 0.286 | 0.095 | 0.057 | 0.204 | | | | | |
| Other adaptors | Session 1 | 19 | 2.249 | 1.863 | 1.835 | 0.769 | 2.911 | 126 | 0.082 | −0.421 | −0.789 | 0 |
| | Session 2 | 19 | 1.861 | 2.460 | 0.800 | 0.252 | 2.122 | | | | | |

Other adaptors per 100 words and flutter duration, by task (Mann–Whitney U tests).

| Measure | Context | N | Mean | SD | Median | IQR_Lower | IQR_Upper | Mann–Whitney U | p-value | Adjusted p | Effect size | CI_Lower |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Other adaptors in monologues | Online | 19 | 2.249 | 1.863 | 1.835 | 0.769 | 2.911 | 123 | 0.0612 | 0.1836 | 0.301 | 0.03 |
| | F2F | 20 | 3.400 | 2.194 | 3.085 | 2.357 | 4.511 | | | | | |
| Other adaptors in dialogues | Online | 20 | 1.190 | 1.100 | 0.967 | 0.418 | 1.582 | 140 | 0.107 | 0.214 | 0.257 | 0.01 |
| | F2F | 20 | 2.067 | 1.987 | 1.598 | 0.723 | 2.505 | | | | | |
| Flutters in monologues | Online | 19 | 0.099 | 0.114 | 0.072 | 0.020 | 0.132 | 141 | 0.173 | 0.214 | 0.220 | 0.0091 |
| | F2F | 20 | 0.164 | 0.177 | 0.118 | 0.022 | 0.211 | | | | | |
| Flutters in dialogues | Online | 20 | 0.227 | 0.161 | 0.165 | 0.125 | 0.286 | 304 | 0.004 | 0.01708 | 0.445 | 0.15 |
| | F2F | 20 | 0.119 | 0.120 | 0.061 | 0.035 | 0.179 | | | | | |

Other adaptors per 100 words and flutter duration, by role in dialogues (Mann–Whitney U tests; medians and IQRs).

| Measure | Context | N | Median | IQR_Lower | IQR_Upper | Mann–Whitney U | p-value | Adjusted p | Effect size | CI_Lower |
|---|---|---|---|---|---|---|---|---|---|---|
| Other adaptors while speaking | Online | 20 | 0.967 | 0.418 | 1.58 | 140 | 0.107 | 0.107 | 0.257 | 0.02 |
| | F2F | 20 | 1.598 | 0.723 | 2.505 | | | | | |
| Other adaptors while listening | Online | 20 | 0.488 | 0 | 0.952 | 105 | 0.010 | 0.0309 | 0.408 | 0.12 |
| | F2F | 20 | 1.569 | 0.544 | 2.295 | | | | | |
| Flutters while speaking | Online | 20 | 0.165 | 0.125 | 0.286 | 304 | 0.004 | 0.01708 | 0.445 | 0.15 |
| | F2F | 20 | 0.061 | 0.035 | 0.179 | | | | | |
| Flutters while listening | Online | 20 | 0.377 | 0.193 | 0.649 | 275 | 0.043 | 0.086 | 0.321 | 0.04 |
| | F2F | 20 | 0.242 | 0.081 | 0.329 | | | | | |

All data, by task (Wilcoxon signed-rank tests).

| Measure | Task | N | Mean | SD | Median | IQR_Lower | IQR_Upper | Wilcoxon signed-rank | p-value | Adjusted p | Effect size (r) | CI_Lower |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flutter duration per speech duration | Dialogue | 39 | 0.158 | 0.121 | 0.133 | 0.061 | 0.228 | 509 | 0.098 | 0.295 | 0.265 | −0.590 |
| | Monologue | 39 | 0.132 | 0.151 | 0.078 | 0.021 | 0.194 | | | | | |
| Other adaptors per 100 words | Dialogue | 39 | 1.670 | 1.646 | 1.213 | 0.569 | 2.108 | 198 | 0.008 | 0.038 | 0.428 | −0.026 |
| | Monologue | 39 | 2.839 | 2.096 | 2.679 | 1.176 | 3.501 | | | | | |

Production by task and context (Wilcoxon signed-rank tests).

| Measure | Task | N | Mean | SD | Median | IQR_Lower | IQR_Upper | Wilcoxon signed-rank | p-value | Adjusted p | Effect size (r) | CI_Lower |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flutter duration per speech duration, online | Dialogue | 19 | 0.200 | 0.109 | 0.159 | 0.124 | 0.258 | 166 | 0.005 | 0.027 | 0.651 | −1 |
| | Monologue | 19 | 0.099 | 0.114 | 0.072 | 0.020 | 0.132 | | | | | |
| Other adaptors per 100 words, online | Dialogue | 19 | 1.252 | 1.093 | 1.205 | 0.449 | 1.587 | 55 | 0.112 | 0.295 | 0.365 | −0.263 |
| | Monologue | 19 | 2.249 | 1.863 | 1.835 | 0.769 | 2.911 | | | | | |
| Flutter duration per speech duration, F2F | Dialogue | 20 | 0.119 | 0.120 | 0.061 | 0.035 | 0.179 | 85 | 0.467 | 0.467 | 0.163 | −0.3 |
| | Monologue | 20 | 0.164 | 0.177 | 0.118 | 0.022 | 0.211 | | | | | |
| Other adaptors per 100 words, F2F | Dialogue | 20 | 2.067 | 1.987 | 1.598 | 0.723 | 2.505 | 50 | 0.042 | 0.168 | 0.455 | 0 |
| | Monologue | 20 | 3.400 | 2.194 | 3.085 | 2.357 | 4.511 | | | | | |

References

  1. Aslan, Z., Özer, D., & Göksun, T. (2024). Exploring emotions through co-speech gestures: The caveats and new directions. Emotion Review, 16(4), 265–275. [Google Scholar] [CrossRef]
  2. Barroso, F., Freedman, N., & Grand, S. (1980). Self-touching, performance, and attentional processes. Perceptual and Motor Skills, 50(Suppl. 3), 1083–1089. [Google Scholar] [CrossRef]
  3. Boomer, D. S., & Dittmann, A. T. (1962). Hesitation pauses and juncture pauses in speech. Language and Speech, 5(4), 215–220. [Google Scholar] [CrossRef]
  4. Campbell, A., & Rushton, J. P. (1978). Bodily communication and personality. British Journal of Social and Clinical Psychology, 17(1), 31–36. [Google Scholar] [CrossRef]
  5. Canarslan, F., & Chu, M. (2024). Individual differences in representational gesture production are associated with cognitive and empathy skills. Quarterly Journal of Experimental Psychology, 78(1), 17470218241245831. [Google Scholar] [CrossRef]
  6. Caplan, S. E. (2007). Relations among loneliness, social anxiety, and problematic Internet use. CyberPsychology & Behavior, 10(2), 234–242. [Google Scholar] [CrossRef] [PubMed]
  7. Carriere, J. S., Seli, P., & Smilek, D. (2013). Wandering in both mind and body: Individual differences in mind wandering and inattention predict fidgeting. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 67(1), 19. [Google Scholar] [CrossRef] [PubMed]
  8. Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception–behavior link and social interaction. Journal of Personality and Social Psychology, 76(6), 893. [Google Scholar] [CrossRef] [PubMed]
  9. Chui, K., Lee, C. Y., Yeh, K., & Chao, P. C. (2018). Semantic processing of self-adaptors, emblems, and iconic gestures: An ERP study. Journal of Neurolinguistics, 47, 105–122. [Google Scholar] [CrossRef]
  10. Cienki, A. (2024). Self-focused versus dialogic features of gesturing during simultaneous interpreting. Russian Journal of Linguistics, 28(2), 227–242. [Google Scholar] [CrossRef]
  11. Cienki, A. (2025). Functions of gestures during disfluent and fluent speech in simultaneous interpreting. Parallèles, 37(1), 29–46. [Google Scholar]
  12. Clough, S., & Duff, M. C. (2020). The role of gesture in communication and cognition: Implications for understanding and treating neurogenic communication disorders. Frontiers in Human Neuroscience, 14, 323. [Google Scholar] [CrossRef]
  13. Council of Europe. Council for Cultural Co-operation. Education Committee. Modern Languages Division. (2001). Common European framework of reference for languages: Learning, teaching, assessment. Cambridge University Press. [Google Scholar]
  14. Cuñado Yuste, Á. (2017). Relación entre Rasgos de Personalidad y Gestos: ¿expresamos lo que somos? Behavior & Law Journal (Online), 3(1), 35–41. [Google Scholar] [CrossRef]
  15. Edelmann, R. J., & Hampson, S. E. (1979). Changes in non-verbal behaviour during embarrassment. British Journal of Social and Clinical Psychology, 18(4), 385–390. [Google Scholar] [CrossRef]
  16. Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1(1), 49–98. [Google Scholar] [CrossRef]
  17. Ekman, P., & Friesen, W. V. (1972). Hand movements. Journal of Communication, 22(4), 353–374. [Google Scholar] [CrossRef]
  18. Frances, S. J. (1979). Sex differences in nonverbal behavior. Sex Roles, 5(4), 519–535. [Google Scholar] [CrossRef]
  19. Freedman, N. (1972). The analysis of movement behavior during the clinical interview. Studies in Dyadic Communication, 7, 153–175. [Google Scholar]
  20. Freedman, N. (1977). Hands, words, and mind. In N. Freedman, & S. Grand (Eds.), Communicative structures and psychic structures (pp. 109–132). Springer. [Google Scholar]
  21. Freedman, N., & Hoffman, S. P. (1967). Kinetic behavior in altered clinical states: Approach to objective analysis of motor behavior during clinical interviews. Perceptual and Motor Skills, 24(2), 527–539. [Google Scholar] [CrossRef] [PubMed]
  22. Genova, B. K. L. (1974). A view on the function of self-adaptors and their communication consequences. ERIC Clearinghouse.
  23. Germana, J. (1969). Effects of behavioral responding on skin conductance level. Psychological Reports, 24(2), 599–605. [Google Scholar] [CrossRef] [PubMed]
  24. Graziano, M., & Gullberg, M. (2024). Providing evidence for a well-worn stereotype: Italians and Swedes do gesture differently. Frontiers in Communication, 9, 1314120. [Google Scholar] [CrossRef]
  25. Guo, X. H., Goldin-Meadow, S., & Bainbridge, W. A. (2024). What makes co-speech gestures memorable? Available online: https://osf.io/7shx2/download (accessed on 3 September 2025).
  26. Harrigan, J. A. (1985). Self-touching as an indicator of underlying affect and language processes. Social Science & Medicine, 20(11), 1161–1168. [Google Scholar] [CrossRef]
  27. Harrigan, J. A., Kues, J. R., Steffen, J. J., & Rosenthal, R. (1987). Self-touching and impressions of others. Personality and Social Psychology Bulletin, 13(4), 497–512. [Google Scholar] [CrossRef]
  28. Heaven, L., & McBrayer, D. (2000). External motivators of self-touching behavior. Perceptual and Motor Skills, 90(1), 338–342. [Google Scholar] [CrossRef]
Figure 1. Adaptor signaling “I am thinking”.
Figure 2. Adaptor potentially signaling “I am thinking” (gesture re-enacted by actor).
Figure 3. An adaptor of a fidgeting nature (gesture re-enacted by actor).
Figure 4. Sitting arrangement: cross-legged on bed (online) or on the floor (F2F).
Figure 5. (a) Pushing glasses up. (b) Smoothing hair. (c) Hand to elbow (slight movement of the right hand on the left elbow up and down). (d) Pulling sleeves up (first the right and then the left). (e) Fidgeting with an external item such as a cushion.
Figure 6. Adaptors per 100 words and flutter duration over total speaking duration (listening adaptors and flutters not included) by task and context.
Table 1. Summary of the data (for individual data, please see https://osf.io/q9cx5/files/osfstorage, accessed on 3 September 2025).

|                  | Total Speaking Time (mins) | Total Gestures | Total No. Other Adaptors | As a % of All Gestures | Total Flutter Time (mins) | As a % of All Gesture Time |
| Online Dialogue  | 58.32 | 3158  | 399  | 6%  | 13.98 | 24% |
| F2F Dialogue     | 50.13 | 4136  | 412  | 49% | 6.23  | 12% |
| Online Monologue | 58.64 | 8183  | 816  | 29% | 5.6   | 10% |
| F2F Monologue    | 52.41 | 1537  | 230  | 15% | 8.12  | 15% |
| Total            | 219.5 | 15632 | 2615 | 10% | 33.93 | 15% |
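The last column of Table 1 expresses flutter duration as a share of speaking time (the measure plotted in Figure 6). As an illustrative check, the sketch below recomputes those percentages from the speaking and flutter times as transcribed from the table; the dictionary layout and row labels are ours, not part of the original analysis, which used the full ELAN annotations rather than these summary figures.

```python
# Recompute the flutter-share column of Table 1 from the transcribed
# speaking and flutter times (minutes) per task and context.
rows = {
    "Online dialogue":  {"speaking_mins": 58.32, "flutter_mins": 13.98},
    "F2F dialogue":     {"speaking_mins": 50.13, "flutter_mins": 6.23},
    "Online monologue": {"speaking_mins": 58.64, "flutter_mins": 5.60},
    "F2F monologue":    {"speaking_mins": 52.41, "flutter_mins": 8.12},
}

for task, r in rows.items():
    share = 100 * r["flutter_mins"] / r["speaking_mins"]
    print(f"{task}: flutters occupy {share:.0f}% of speaking time")

# The totals reproduce the table's bottom row.
total_speaking = sum(r["speaking_mins"] for r in rows.values())  # 219.5 mins
total_flutter = sum(r["flutter_mins"] for r in rows.values())    # 33.93 mins
print(f"Total: {100 * total_flutter / total_speaking:.0f}%")     # prints "Total: 15%"
```

Run as-is, this reproduces the printed percentages (24%, 12%, 10%, 15%, and 15% overall); the adaptors-per-100-words rate in Figure 6 would additionally need the word counts, which Table 1 does not report.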
Share and Cite

Lopez-Ozieblo, R. The Dual Functions of Adaptors. Languages 2025, 10, 231. https://doi.org/10.3390/languages10090231
