Article

Embodied Cognition of Manipulative Actions: Subliminal Grasping Semantics Enhance Using-Action Recognition

1 School of Psychology, Shanghai University of Sport, Shanghai 200438, China
2 School of Physical Education, Leshan Normal University, Leshan 614000, China
* Author to whom correspondence should be addressed.
Brain Sci. 2025, 15(11), 1206; https://doi.org/10.3390/brainsci15111206
Submission received: 6 October 2025 / Revised: 28 October 2025 / Accepted: 7 November 2025 / Published: 8 November 2025

Abstract

Background: Grasping actions, owing to their inherently manipulable nature, play a central role in research on embodied action language. However, their foundational contribution to the cognition of using actions remains debated. This study examined the relationship between grasping and using actions from the perspective of subthreshold semantic processing. Methods: Participants engaged with objects affording both action types while behavioral responses and event-related potentials (ERPs) were recorded. Semantic congruency between subliminally presented grasping verbs and the actions of target objects was systematically manipulated. Results: Subthreshold processing of grasping verbs facilitated the recognition of using actions, as reflected in faster response times and modulations of ERP components. Spatiotemporal analyses revealed a processing pathway from occipital to parietal and frontal regions, with the posterior parietal cortex serving as a critical hub for integrating object function semantics with action information. Conclusions: These findings provide novel evidence that grasping action semantics support the recognition of using actions even below conscious awareness, elucidating the neural dynamics of embodied cognition and refining the temporal characterization of manipulative action processing pathways proposed by the two-action system theory.

1. Introduction

Semantic processing refers to the psychological mechanisms by which individuals comprehend and manipulate conceptual information, such as the meanings of words and symbols. It involves the extraction and integration of multimodal inputs—including objects, sounds, faces, and events—and constitutes a core cognitive function closely linked to perception, memory, and thought [1]. Depending on the level of automatization, semantic processing can be classified as either automatic or controlled. With unconscious perception long recognized as a central topic in psychology, research has increasingly demonstrated that subliminal linguistic information can undergo semantic processing [2,3]. For example, semantic priming studies have shown that presenting the word “apple” can activate the associated concept “red” [4], and masked priming experiments have confirmed that prime–target semantic similarity modulates N400 amplitudes even in the absence of conscious awareness [5].
Among these studies, action language has attracted particular attention due to its integration of perceptual, motor, and semantic information [6,7]. Recent accounts of embodied cognition propose that the processing of action language is grounded in sensorimotor systems rather than abstract symbols [6,8], with neural pathways overlapping those involved in action execution and imagery [9]. In other words, the cognitive processing pathways of action language closely resemble those of actual motor processing and are supported by the principles of embodied cognition. Consistent with this view, studies have shown that when verbs directly related to object use are presented as linguistic stimuli, the corresponding motor cortices exhibit significant activation, further demonstrating the embodied nature of action-language processing [10,11]. Within this framework, manipulative actions, owing to their unique operability, have become a primary focus of research on action semantics [12,13]. Human proficiency in tool use has provided a foundation for social progress [14]. Actions directed toward manipulated objects are defined as manipulative actions and are divided into grasping actions [15] and using actions [16] based on their functional goals. Research has shown that presenting object nouns as stimuli enhances memory performance [17]. Moreover, when object nouns are used as primes, participants exhibit significant priming effects when judging target images of hands or feet, accompanied by modulations of early ERP components and the P300 associated with target classification [18]. Similarly, studies have reported that viewing manipulated objects elicits a pronounced parietal P300 component, indicating that such objects capture attentional resources more strongly than non-manipulated ones [19]. 
Earlier studies further indicate that manipulability influences not only the semantic processing of action information but also serves as a key dimension in the semantic representation of object recognition [20]. Collectively, these findings highlight manipulative actions as a crucial focus within embodied linguistics.
However, the cognitive relationship between the two types of manipulative actions remains a subject of debate. Binkofski and Buxbaum proposed the two-action systems theory, defining the bilateral dorso-dorsal pathway as the structural action system and the left ventro-dorsal pathway as the functional action system. These systems are thought to correspond, respectively, to the neural pathways underlying grasping and using actions [21,22], providing theoretical support for the view that their cognitive processing operates independently. More recent evidence, however, suggests that the recognition of using actions is grounded in the cognitive processing of grasping actions [23,24,25]. For instance, studies have shown that when an object’s grip posture is congruent with its functional use (e.g., holding a knife with the palm forward), reaction times for subsequent using actions (e.g., cutting) are significantly reduced. In contrast, incongruent grip postures (e.g., holding a knife backward) interfere with the initiation of using actions [26,27]. This effect resembles the “spatial alignment effect” observed between an object’s handle orientation and the responding hand [28,29].
Electrophysiological evidence further demonstrates that functional information of manipulated objects modulates hand postures within 200 ms of initiating a grasping action [30], indicating a rapid coupling between semantic processing and motor planning. In addition, research has shown a strong correlation between the processing of action-related linguistic information and ERP component amplitudes, such as the P300 and N400. For example, when participants were asked to judge the type of action associated with target objects, smaller posterior P300 and frontal N400 amplitudes were observed when the priming objects were action-related to the targets. In contrast, primes that were only shape-related but not action-related did not modulate either the P300 or N400 amplitudes [31]. Furthermore, the two-action systems theory emphasizes the critical role of parietal–occipital regions in the neural pathways underlying manipulative action cognition. Whether the processing of action verbs elicits similar ERP components, and whether these components also exhibit activation patterns highlighting the importance of occipital–parietal areas, remains to be clarified. It also remains unclear whether such facilitative effects consistently occur under subliminal semantic processing, and how the temporal dynamics of this relationship unfold. Therefore, the present study, grounded in the framework of embodied linguistics, examined the stability of the facilitative influence of grasping-action semantics on the recognition of using actions at the subliminal level. Moreover, it investigated the temporal course of this effect, providing ERP-based evidence for embodied semantic processing of manipulative actions under subthreshold conditions.
Using the simplicity and transitivity of single-character Chinese verbs, prime stimuli were constructed to represent distinct categories of actions [32]. By varying the semantic congruency between these verbs and the actions of target objects, the study tested whether subliminal grasping-related semantics facilitated subsequent judgments of using actions. Neural responses were assessed with electroencephalography (EEG), focusing on ERP components associated with semantic processing, cognitive control, and action recognition, including the N400 [32,33], P200 [34,35], P600 [36,37], and P300 [31].

2. Materials and Methods

2.1. Participants

The required sample size was calculated using G*Power (version 3.1.9.2) [38]. Assuming an effect size of f = 0.25, a power of 0.80, and α = 0.05 for a two-factor repeated-measures ANOVA, the minimum required sample was 24 participants. Based on this calculation, 33 undergraduate students were recruited (15 males, 18 females; age range = 19–23 years, mean ± SD = 20.13 ± 1.47 years). Three female participants were excluded due to excessive EEG artifacts, leaving 30 participants (15 males, 15 females) for the final analysis.
All participants were right-handed, had normal or corrected-to-normal vision, a healthy BMI, and no history of neurological, muscular, or psychiatric disorders. None had extensive sports training experience or recent use of psychotropic medication. Written informed consent was obtained after participants were briefed on the study, and they received monetary compensation for their participation. The study was approved by the local ethics committee.

2.2. Stimuli

Building on previous research, grasping actions were categorized into pinch and clench types, while using actions were classified as swing and press [39]. These classifications guided the selection of object stimuli. To ensure broad and stable associations between objects and their corresponding action types, 198 participants were recruited to judge and identify the action types associated with candidate objects. Based on these ratings, eight objects were confirmed as experimental stimuli, each paired with a specific combination of grasping and using actions (see Figure 1).
Figure 1. Manipulated object stimuli.
Images of the target objects were drawn from the Bank of Standardized Stimuli (BOSS) [40]. The eight grayscale objects were presented on a 1024 × 768 CRT display at a viewing distance of 45 cm, with a refresh rate of 60 Hz, and were controlled using Psychtoolbox (V3.0.15, beta-20190207) in MATLAB (R2020b) [41,42]. Each object was displayed at a standardized orientation, with handles tilted 45° to the left, subtending a visual angle of 3.8°. Participants responded via keyboard input, and subsequent event-related potential (ERP) analyses focused on the temporal dynamics of brain activation.
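The reported 3.8° visual angle at a 45 cm viewing distance fixes the stimuli's physical size on screen via the standard relation size = 2·d·tan(θ/2). The following sketch (our own helper, not code from the study) verifies the geometry:

```python
import math

def stimulus_size_cm(visual_angle_deg: float, distance_cm: float) -> float:
    """Physical size of a stimulus subtending a given visual angle
    at a given viewing distance: size = 2 * d * tan(theta / 2)."""
    return 2.0 * distance_cm * math.tan(math.radians(visual_angle_deg) / 2.0)

# Parameters from the Methods: 3.8 degrees at 45 cm.
size = stimulus_size_cm(3.8, 45.0)
print(f"{size:.2f} cm")  # about 3 cm on screen
```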

2.3. Task and Procedure

The experiment consisted of two phases: an action-type testing phase and the main experimental phase. The action-type testing phase assessed whether participants could select the appropriate combination of the two manipulative action types for each target object, thereby confirming that the chosen verbs could serve as valid primes in the main semantic priming task. The action-type testing phase lasted approximately 20 min, and the main experimental phase about 60 min.
A semantic priming paradigm was employed to investigate the processing of action verbs (Figure 2). Single-character Chinese verbs (‘捏’ for pinch and ‘握’ for clench) served as primes, presented for 33 ms and followed by a 120 ms masking screen. Participants were instructed to classify the using action of target objects (swing or press) as quickly and accurately as possible via key presses. Subsequently, a two-alternative forced-choice (2AFC) task assessed participants’ objective discrimination of the prime stimuli. Trials included randomized inter-stimulus intervals ranging from 1.5 to 2 s. The experiment included four conditions, combining cue–target semantic congruency and using action type, presented across four blocks of 128 trials, with congruent and incongruent trials evenly distributed. Target objects were balanced across the two using-action types.
Figure 2. Procedure design for the main experimental phase.
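The trial structure described above (2 × 2 design, 128 trials per block, congruent and incongruent trials evenly distributed, inter-stimulus intervals jittered between 1.5 and 2 s) can be sketched as a balanced, shuffled trial list. Function and field names here are illustrative; the actual experiment ran in Psychtoolbox:

```python
import random

def make_block(n_trials=128, seed=None):
    """One block of the 2 x 2 design (cue-target congruency x using-action
    type), with conditions evenly distributed and randomly ordered."""
    cells = [(c, a) for c in ("congruent", "incongruent")
                    for a in ("swing", "press")]
    assert n_trials % len(cells) == 0, "conditions must divide trial count evenly"
    rng = random.Random(seed)
    trials = cells * (n_trials // len(cells))
    rng.shuffle(trials)
    # Jittered 1.5-2 s inter-stimulus interval per trial.
    return [{"congruency": c, "action": a, "isi_s": rng.uniform(1.5, 2.0)}
            for c, a in trials]

block = make_block(seed=1)
```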

2.4. EEG Data Acquisition

Continuous electroencephalogram (EEG) signals were recorded using the Brain Vision Recorder 2.0 system (Brain Products, Gilching, Germany) with a 64-channel Easy-Cap arranged according to the international 10–20 system. The FCz electrode served as the reference, and AFz as the ground. Vertical electrooculogram (VEOG) recordings were obtained to allow offline correction of eye-movement artifacts. EEG signals were band-pass filtered between 0.01 and 100 Hz and digitized at 500 Hz using a BrainAmp amplifier. Electrode impedances were maintained below 5 kΩ throughout the recording.

2.5. EEG Data Analysis

Offline EEG data analysis was performed using the EEGLAB toolbox in MATLAB [43,44]. Independent component analysis (ICA) was applied to attenuate electrooculographic (EOG) artifacts [45]. EEG data were epoched from 200 ms before cue stimulus onset to 2000 ms after target stimulus onset. Trials containing significant artifacts or voltage fluctuations exceeding ±80 µV were excluded. A 30 Hz low-pass filter was applied; epochs were time-locked to target stimulus onset (t = 0) and baseline-corrected using the 200 ms preceding cue stimulus onset.
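The baseline-correction and amplitude-rejection steps (performed in EEGLAB in the study) amount to subtracting the mean of the pre-cue samples and discarding any epoch whose corrected voltage leaves ±80 µV. A minimal single-channel sketch with hypothetical names:

```python
def preprocess_epoch(samples, n_baseline, threshold_uv=80.0):
    """Baseline-correct one single-channel epoch (in microvolts) and flag it
    for rejection if any corrected sample exceeds +/- threshold_uv.
    Illustrative only; the study used EEGLAB with ICA-based EOG removal."""
    baseline = sum(samples[:n_baseline]) / n_baseline
    corrected = [v - baseline for v in samples]
    keep = all(abs(v) <= threshold_uv for v in corrected)
    return corrected, keep

# At 500 Hz, the 200 ms pre-cue baseline spans 100 samples.
corrected, keep = preprocess_epoch([2.0] * 100 + [10.0, -90.0], n_baseline=100)
```

Here the toy epoch is rejected because one baseline-corrected sample (−92 µV) exceeds the ±80 µV criterion.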
Based on previous research and the grand-average ERP waveforms and topographies from the current experiment, four ERP components and their regions of interest were defined [46]. For the P200 component [34,35], the time window was 170–220 ms, and the electrodes of interest included P1, Pz, P2, PO3, POz, and PO4; the mean amplitude across these electrodes was calculated. For the P300 component [31], the time window was 280–330 ms, using the same set of electrodes, and the mean amplitude was computed. For the N400 component [47,48], the time window was 320–420 ms, and the electrodes of interest were FC1, FCz, FC2, C1, Cz, and C2; the mean amplitude across these electrodes was calculated. Finally, for the P600 component [36,37], the time window was 480–580 ms, with electrodes P1, Pz, P2, PO3, POz, and PO4; the mean amplitude was computed across these electrodes.
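Extracting a component's mean amplitude reduces to averaging the voltage over the window samples and the ROI electrodes. A sketch under the stated parameters (500 Hz sampling; window times relative to target onset); the helper is ours, not EEGLAB's:

```python
def mean_amplitude(epoch, electrodes, win_start_ms, win_end_ms,
                   srate_hz=500, t0_ms=0):
    """Mean voltage across ROI electrodes within a component time window.
    `epoch` maps electrode name -> sample list; `t0_ms` is the time of the
    first sample relative to target onset. Illustrative helper."""
    i0 = round((win_start_ms - t0_ms) * srate_hz / 1000)
    i1 = round((win_end_ms - t0_ms) * srate_hz / 1000)
    vals = [v for ch in electrodes for v in epoch[ch][i0:i1]]
    return sum(vals) / len(vals)

# P200 window (170-220 ms) over the parietal/parieto-occipital ROI.
roi = ["P1", "Pz", "P2", "PO3", "POz", "PO4"]
epoch = {ch: [6.5] * 500 for ch in roi}  # toy constant-voltage epoch
p200 = mean_amplitude(epoch, roi, 170, 220)
```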
ERP amplitudes were analyzed using repeated-measures ANOVA in SPSS 20.0. Only trials in which participants correctly identified the object’s action type were included; incorrect trials were excluded. Following EEG preprocessing, mean amplitudes were computed for the relevant electrodes under each experimental condition using a 2 (cue–target congruency: congruent vs. incongruent) × 2 (using action type: swing vs. press) design. The number of trials contributing to the ERP analysis in each condition was as follows: 111 for congruent/swing, 109 for congruent/press, 110 for incongruent/swing, and 107 for incongruent/press.

2.6. Statistical Methods

To ensure that participants did not have conscious perception of the prime verbs, we employed the four-point Perceptual Awareness Scale (PAS). After each trial, participants reported their awareness of the cue stimulus on a scale from 1 (completely invisible) to 4 (very clear). Trials for which participants reported PAS = 1 were defined as unconscious trials and used for subsequent analyses. In addition, participants’ objective discrimination ability was assessed using one-sample t-tests, comparing accuracy, d′, and β values against chance levels (accuracy: 50%; d′: 0; β: 1) to confirm that performance did not exceed chance.
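The signal-detection indices compared against chance have closed forms: d′ = z(H) − z(FA) and β = exp((z(FA)² − z(H)²)/2), where z is the inverse standard-normal CDF. A standard-library sketch (function name ours):

```python
import math
from statistics import NormalDist

def dprime_beta(hit_rate, fa_rate):
    """Signal-detection d' and likelihood-ratio beta; rates are assumed
    to be already clipped away from exactly 0 and 1."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, math.exp((zf ** 2 - zh ** 2) / 2.0)

# Performance exactly at chance gives d' = 0 and beta = 1,
# the null values the one-sample t-tests are run against.
d, beta = dprime_beta(0.50, 0.50)
```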
Priming effects were analyzed behaviorally in terms of reaction times and accuracy, and electrophysiologically in terms of mean ERP amplitudes, using a 2 (cue–target semantic congruency: congruent, incongruent) × 2 (using action type: swing, press) repeated-measures ANOVA. Data analysis was conducted in SPSS 20.0. For multiple comparisons, Bonferroni correction was applied. Descriptive statistics are reported as means ± standard errors (SEM). Trials with accuracy below 75% or reaction times exceeding three standard deviations from the mean were excluded.
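The reaction-time trimming rule (drop trials more than three standard deviations from the mean) can be sketched as a single pass; the paper does not state whether the cut was iterated, so this assumes one pass:

```python
from statistics import mean, stdev

def trim_rts(rts, n_sd=3.0):
    """Keep reaction times within n_sd standard deviations of the sample
    mean (single pass; illustrative of the exclusion rule described above)."""
    m, s = mean(rts), stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= n_sd * s]

rts = [900.0] * 20 + [5000.0]   # one implausibly slow trial
kept = trim_rts(rts)
```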
Mean amplitudes of ERP components for each condition were calculated using scripts in MATLAB. The main statistical analysis focused on predefined ROIs and component-specific time windows. Accordingly, the scalp topographies presented in the manuscript are descriptive visualizations intended to illustrate the distribution of EEG activity under each condition; no statistical comparisons were conducted across conditions for the topographic maps.

3. Results

3.1. Behavior

Results from the forced-choice task indicated that participants’ accuracy in selecting the priming stimulus text (mean ± SEM = 50.29 ± 0.37%) was comparable to the error rate (mean ± SEM = 49.71 ± 0.40%), both at chance level, with no significant difference between them (paired t-test, t(32) = 0.251, p = 0.804). This objectively demonstrates that participants were unable to perceive the priming stimuli at the level of visual awareness.
Additionally, we analyzed the discriminability index (d′) and likelihood ratio (β) for participants’ selections of the priming stimuli. The mean ± SEM for d′ was 0.01 ± 0.02, showing no significant deviation from zero (one-sample t-test, t(32) = 0.249, p = 0.805), and the mean ± SEM for β was 1.00 ± 0.0002, not significantly different from 1 (one-sample t-test, t(32) = 0.916, p = 0.366). These results collectively indicate that participants were unable to discriminate or consciously detect the presence of the priming stimuli (Table 1).
Table 1. The results for objective discrimination.

       Mean     SEM      t(32)   p
ACC    50.29%   0.37     0.251   0.804
ER     49.71%   0.40     –       –
d′     0.01     0.02     0.249   0.805
β      1.00     0.0002   0.916   0.366
Accuracy and reaction times for the using-action judgments were analyzed using a two-way repeated-measures ANOVA with the factors semantic congruency (congruent vs. incongruent) and using action type (swing vs. press). Reaction times were significantly faster in congruent trials (congruent: mean ± SEM = 914.60 ± 20.22 ms; incongruent: mean ± SEM = 933.67 ± 22.92 ms; F(1, 32) = 8.480, p = 0.006, ηp² = 0.209). A significant main effect of action type was also observed (F(1, 32) = 6.616, p = 0.015, ηp² = 0.171), with faster responses for press actions than for swing actions (swing: mean ± SEM = 933.09 ± 22.69 ms; press: mean ± SEM = 915.18 ± 20.52 ms). In contrast, accuracy showed no significant effect of semantic congruency (F(1, 32) = 0.575, p = 0.454, ηp² = 0.018) or action type (F(1, 32) = 2.602, p = 0.117, ηp² = 0.075) (Figure 3).
Figure 3. Behavioral results. ns: p > 0.05; *: p < 0.05; **: p < 0.01.

3.2. Electrophysiology Components of the Subliminal Priming Task

We conducted a repeated-measures ANOVA on the average amplitude of the P200 component with the factors cue–target stimulus semantic congruency (congruent/incongruent) and using action type (swing/press). This revealed a significant main effect of cue–target stimulus semantic congruency, with smaller amplitudes in congruent trials (congruent: mean ± SEM = 6.421 ± 0.48 μV; incongruent: mean ± SEM = 6.954 ± 0.49 μV; F(1, 29) = 11.969, p = 0.002, ηp² = 0.292) (Figure 4). No significant main effect of action type was observed (F(1, 29) = 3.051, p = 0.091, ηp² = 0.095), and the interaction was not significant (F(1, 29) = 0.066, p = 0.799, ηp² = 0.002).
Figure 4. Waveforms and topographical maps for the P200 component.
A repeated-measures ANOVA on the N400 component revealed a significant main effect of cue–target stimulus semantic congruency: incongruent conditions elicited larger N400 amplitudes than congruent conditions (congruent: mean ± SEM = −2.354 ± 0.23 μV; incongruent: mean ± SEM = −2.653 ± 0.26 μV; F(1, 29) = 8.865, p = 0.006, ηp² = 0.234) (Figure 5). No other significant main effects or interactions were observed (using action type: F(1, 29) = 1.088, p = 0.306, ηp² = 0.036; interaction: F(1, 29) = 0.572, p = 0.456, ηp² = 0.019).
Figure 5. Waveforms and topographical maps for the N400 component.
For the P300 component, a repeated-measures ANOVA revealed a significant main effect of semantic congruency (F(1, 29) = 21.055, p < 0.001, ηp² = 0.421), with smaller amplitudes in congruent conditions (congruent: mean ± SEM = 5.047 ± 0.21 μV; incongruent: mean ± SEM = 5.922 ± 0.29 μV) (Figure 6). There was also a significant main effect of using action type (swing: mean ± SEM = 5.652 ± 0.25 μV; press: mean ± SEM = 5.317 ± 0.24 μV; F(1, 29) = 15.425, p < 0.001, ηp² = 0.347). No significant interaction was observed (F(1, 29) = 1.062, p = 0.311, ηp² = 0.035).
Figure 6. Waveforms and topographical maps for the P300 component.
Finally, for the P600 component, a repeated-measures ANOVA revealed a significant main effect of cue–target stimulus semantic congruency: incongruent conditions elicited larger amplitudes than congruent conditions (congruent: mean ± SEM = 3.045 ± 0.25 μV; incongruent: mean ± SEM = 4.016 ± 0.29 μV; F(1, 29) = 30.588, p < 0.001, ηp² = 0.513) (Figure 7). As with the P300, there was a significant main effect of action type (swing: mean ± SEM = 3.658 ± 0.26 μV; press: mean ± SEM = 3.404 ± 0.27 μV; F(1, 29) = 6.207, p = 0.019, ηp² = 0.176), but no significant interaction (F(1, 29) = 4.025, p = 0.057, ηp² = 0.121).
Figure 7. Waveforms and topographical maps for the P600 component.

4. Discussion

In this study, we introduced the concept of semantic processing of manipulative actions within the framework of embodied cognition in action language, specifically examining whether action verbs exhibit an embodied cognitive representation. In other words, we investigated whether the neural pathways for manipulative action recognition proposed by the two-action system theory also show overlapping activation patterns during the subthreshold semantic processing of these verbs. Object stimuli were categorized according to different types of manipulative actions, allowing us to identify a facilitatory effect of subthreshold semantic processing of grasping actions on the recognition of using actions. This finding provides empirical support at the subthreshold semantic processing level for the hypothesis that grasping actions constitute a cognitive foundation for using actions. Moreover, event-related potential (ERP) techniques were employed to examine the temporal dynamics of this semantic facilitation effect.
To ensure unconscious processing of the prime words, we conducted an objective discriminability test. Participants’ accuracy in the forced-choice task was 50.29%, and the discriminability index (d′) did not differ from zero, indicating no objective discriminative ability. These results demonstrate that participants’ selection of the prime words remained at chance level, confirming that the primes were presented below the threshold of visual awareness and providing a solid methodological basis for investigating subthreshold semantic processing of actions.
Behavioral data analysis revealed that response times in the using-action judgment task were faster when the prime and target stimuli shared semantically congruent grasping actions compared to incongruent conditions, indicating a positive priming effect of semantic congruency on the recognition of using actions. Additionally, differences in the cognitive processing of the two types of using actions—“press” and “swing”—contributed to variations in response speed, with responses for the press action being significantly faster than those for the swing action.
A similar action-type effect was observed in the amplitudes of the P300 and P600 components. The P300 component, associated with the allocation of cognitive resources and task difficulty, was significantly modulated by the linguistic information of manipulative actions. By contrast, the posterior P600 component has been linked to semantic conflict or conflict resolution [49]. These results indicate that the cognitive complexity of the two using-action types differed, leading to variations in the amplitudes elicited by each action type [50].
Furthermore, these findings are consistent with the embodied processing framework of action language, which posits that the processing of using-action representations is tightly coupled with the body and motor systems and exhibits contextual dependence [51]. For example, whether an action is performed toward or away from the body creates context-dependent differences in action processing [7]. Embodied semantics further suggests that the semantic activation of action-related words is mapped onto bodily schemata, resulting in differential representations for distinct action types within the sensorimotor system [52]. In the present experiment, swing actions—relative to press actions—represented movements directed away from the body with less contextual guidance, rendering their recognition more complex. From the perspective of embodied cognition, which emphasizes shared neural resources between lexical and action processing, this increased complexity likely required greater cognitive resources. These findings further support previous research highlighting the critical role of the parieto-occipital regions in the recognition of manipulative actions [53,54].
In addition, a main effect of semantic congruency between prime and target stimuli was observed across the P200, N400, P600, and P300 components. The N400 amplitude was significantly smaller in semantically congruent conditions compared to incongruent conditions, demonstrating that even unconscious grasping verbs can undergo successful semantic processing [55,56]. A similar pattern emerged for the P600 component, indicating that comprehension of semantic conflict under incongruent conditions requires greater resource allocation [49]. Likewise, the P300 amplitude was modulated by semantic congruency, suggesting that participants processed grasping-action information, specifically grasping verbs [31,57], which in turn facilitated the recognition of using actions [58]. Collectively, these findings indicate that semantic processing of grasping manipulative actions exerts a facilitatory effect on the cognition of using manipulative actions, even at a subthreshold level. This confirms the stability of the facilitation effect and provides further empirical support for the hypothesis that grasping actions constitute a cognitive foundation for using actions [25,26,59].
Moreover, the ERP results revealed the cognitive pathways underlying semantic processing of manipulative actions. Differences in semantic congruency first emerged in occipital regions associated with visual feature processing, then progressed to the posterior parietal cortex for decision-related updates, followed by frontal regions for deeper semantic integration, and finally reached parieto-occipital regions implicated in higher-order semantic conflict recognition. This temporal sequence aligns closely with the processing pathways proposed by the two-action system theory, emphasizing the critical role of the parieto-occipital regions and the integrative function of frontal areas [21,22]. Accordingly, our study provides a temporal-level refinement and extension of the two-action system’s proposed cognitive pathways for manipulative actions within the framework of action language processing.
Although the observed facilitation effects could also be explained by symbolic semantic priming mechanisms, the action-specific nature of the congruency modulation—particularly between grasping and using actions—suggests an underlying sensorimotor resonance consistent with embodied cognition [17,60]. It is important to note that this study only used action-related verbs as stimuli. Future research could incorporate direct sensorimotor tasks as baseline conditions to more comprehensively explore the embodied neural mechanisms underlying manipulative action semantics. The single-character Chinese verbs used, characterized by transitivity and structural simplicity, allowed precise assessment of manipulative action semantic processing [61]. The selected verbs had highly consistent structural composition, ensuring that observed effects were driven by action-specific semantics rather than structural variability. Nevertheless, the current stimuli have limitations, and future studies could include non-action primes to further dissociate action-specific from general semantic effects. It is also important to recognize that ERP components such as the P200, N400, and P600 do not index single cognitive processes in isolation. Rather, they likely reflect overlapping neural activities across distributed cortical networks. Accordingly, the observed occipital-to-frontal topographic progression should be interpreted as a temporal sequence of scalp-level activations reflecting stages of visual, semantic, and cognitive integration, rather than direct source localization. Future studies employing source reconstruction or multimodal imaging (e.g., EEG–fMRI) could help clarify the neural generators underlying these temporal dynamics.
In summary, our findings demonstrate that semantic processing of grasping actions effectively facilitates the recognition of using actions. Significant differences were also observed between different types of using actions: specifically, press actions were recognized more quickly than swing actions, with corresponding neural activity in parietal and occipital regions associated with contextual updating and fine-grained discriminative processing [53,54]. The subthreshold semantic priming effect of grasping actions on using actions was reflected behaviorally in faster action recognition and neurally in facilitative changes within dorsal brain regions. Building on the neural pathways for manipulative action recognition proposed by the two-action system theory, this study examined the temporal dynamics of subthreshold semantic processing of manipulative actions, thereby refining and extending the two-action system framework [21,22]. These findings provide indirect evidence for the embodied nature of subthreshold semantic processing of manipulative actions and carry practical significance. By highlighting the foundational role of grasping information in cognition and the importance of parietal–occipital integration, this study offers a bioinspired blueprint for developing more intuitive human–machine interfaces and robotic grasping strategies. For example, designing graspable regions of intelligent tools (corresponding to grasping actions) to naturally guide correct usage (corresponding to using actions) could enhance interaction fluency. Moreover, for patients with apraxia resulting from brain injury (e.g., stroke), difficulties often arise from an inability to translate object knowledge into appropriate actions. The ERP paradigm established here may serve as a sensitive diagnostic tool to distinguish whether deficits occur at the level of action semantics or during action selection and integration [62]. 
Based on this, targeted rehabilitation interventions—such as subliminal priming techniques—could subtly rebuild and strengthen impaired action–semantic pathways.

5. Conclusions

Guided by the framework of embodied linguistics, the present study demonstrated that subthreshold semantic processing of grasping actions effectively facilitates the recognition of using actions. Behavioral and ERP results jointly indicate that grasping verbs constitute a robust cognitive foundation for identifying using actions. Spatiotemporal analyses of ERP data revealed a processing pathway from occipital to parietal and then frontal regions, with the posterior parietal cortex serving as a central hub for integrating object function semantics with action information. These findings not only extend the temporal characterization of manipulative action processing pathways proposed by the two-action system theory but also provide empirical support for the embodied nature of action language processing at a subthreshold level. Overall, the study underscores the pivotal role of grasping action semantics in shaping action recognition and supports the embodied cognition framework for action language from the perspective of subthreshold manipulative action processing.

Author Contributions

Conceptualization, Y.Y. and A.L.; Data curation, Y.Y. and S.G.; Formal analysis, Y.Y.; Funding acquisition, A.L.; Methodology, Y.Y. and Q.H.; Project administration, A.L.; Visualization, Q.H.; Writing—original draft, Y.Y.; Writing—review and editing, A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (General Program), grant number 3197070624.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Shanghai University of Sport, China, for studies involving humans (approval code 102772019R7017; date of approval: 4 March 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lambon Ralph, M.A.; Pobric, G.; Jefferies, E. Conceptual knowledge is underpinned by the temporal pole bilaterally: Convergent evidence from rTMS. Cereb. Cortex 2009, 19, 832–838. [Google Scholar] [CrossRef]
  2. Dehaene, S.; Changeux, J.-P.; Naccache, L.; Sackur, J.; Sergent, C. Conscious, preconscious, and subliminal processing: A testable taxonomy. Trends Cogn. Sci. 2006, 10, 204–211. [Google Scholar] [CrossRef] [PubMed]
  3. Dehaene, S.; Naccache, L.; Le Clec’H, G.; Koechlin, E.; Mueller, M.; Dehaene-Lambertz, G.; van de Moortele, P.-F.; Le Bihan, D. Imaging unconscious semantic priming. Nature 1998, 395, 597–600. [Google Scholar] [CrossRef] [PubMed]
  4. Collins, A.M.; Loftus, E.F. A spreading-activation theory of semantic processing. Psychol. Rev. 1975, 82, 407. [Google Scholar] [CrossRef]
  5. Ortells, J.J.; Kiefer, M.; Castillo, A.; Megías, M.; Morillas, A. The semantic origin of unconscious priming: Behavioral and event-related potential evidence during category congruency priming from strongly and weakly related masked words. Cognition 2016, 146, 143–157. [Google Scholar] [CrossRef]
  6. Bechtold, L.; Cosper, S.H.; Malyshevskaya, A.; Montefinese, M.; Morucci, P.; Niccolai, V.; Repetto, C.; Zappa, A.; Shtyrov, Y. Brain Signatures of Embodied Semantics and Language: A Consensus Paper. J. Cogn. 2023, 6, 61. [Google Scholar] [CrossRef]
  7. Dam, V. Context Effects in Embodied Lexical-Semantic Processing. Front. Psychol. 2010, 1, 2102. [Google Scholar] [CrossRef]
  8. Barsalou, L.W. Perceptual symbol systems. Behav. Brain Sci. 1999, 22, 577–609. [Google Scholar] [CrossRef]
  9. Courson, M.; Tremblay, P. Neural correlates of manual action language: Comparative review, ALE meta-analysis and ROI meta-analysis. Neurosci. Biobehav. Rev. 2020, 116, 221–238. [Google Scholar] [CrossRef]
  10. Pulvermüller, F. Brain mechanisms linking language and action. Nat. Rev. Neurosci. 2005, 6, 576–582. [Google Scholar] [CrossRef]
  11. Ball, L.V.; Mak, M.H.; Ryskin, R.; Curtis, A.J.; Rodd, J.M.; Gaskell, M.G. The contribution of learning and memory processes to verb-specific syntactic processing. J. Mem. Lang. 2025, 141, 104595. [Google Scholar] [CrossRef]
  12. Monaco, E.; Mouthon, M.; Britz, J.; Sato, S.; Stefanos-Yakoub, I.; Annoni, J.; Jost, L. Embodiment of action-related language in the native and a late foreign language—An fMRI-study. Brain Lang. 2023, 244, 105312. [Google Scholar] [CrossRef]
  13. Gu, L.; Jiang, J.; Han, H.; Gan, J.Q.; Wang, H. Recognition of unilateral lower limb movement based on EEG signals with ERP-PCA analysis. Neurosci. Lett. 2023, 800, 137133. [Google Scholar] [CrossRef] [PubMed]
  14. Gibson, K.R.; Ingold, T. Tools, Language and Cognition in Human Evolution; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  15. Bergstrom, F.; Wurm, M.; Valério, D.; Lingnau, A.; Almeida, J. Decoding stimuli (tool-hand) and viewpoint invariant grasp-type information. Cortex 2021, 139, 152–165. [Google Scholar] [CrossRef]
  16. Fragaszy, D.M.; Mangalam, M. Chapter Five—Tooling. In Advances in the Study of Behavior; Naguib, M., Barrett, L., Healy, S.D., Podos, J., Simmons, L.W., Zuk, M., Eds.; Academic Press: Cambridge, MA, USA, 2018; pp. 177–241. [Google Scholar]
  17. Klepp, A.; Weissler, H.; Niccolai, V.; Terhalle, A.; Geisler, H.; Schnitzler, A.; Biermann-Ruben, K. Neuromagnetic hand and foot motor sources recruited during action verb processing. Brain Lang. 2014, 128, 41–52. [Google Scholar] [CrossRef] [PubMed]
  18. Caggiano, P.; Grossi, G.; De Mattia, L.C.; Vanvelzen, J.; Cocchini, G. Objects with motor valence affect the visual processing of human body parts: Evidence from behavioural and ERP studies. Cortex 2022, 153, 194–206. [Google Scholar] [CrossRef] [PubMed]
  19. Proverbio, A.M.; Adorni, R.; D’Aniello, G.E. 250 ms to code for action affordance during observation of manipulable objects. Neuropsychologia 2011, 49, 2711–2717. [Google Scholar] [CrossRef]
  20. Campanella, F.; Shallice, T. Manipulability and object recognition: Is manipulability a semantic feature? Exp. Brain Res. 2011, 208, 369–383. [Google Scholar] [CrossRef]
  21. Buxbaum, L.J.; Kalenine, S.E. Action knowledge, visuomotor activation, and embodiment in the two action systems. Ann. N. Y. Acad. Sci. 2010, 1191, 201–218. [Google Scholar] [CrossRef]
  22. Binkofski, F.; Buxbaum, L.J. Two action systems in the human brain. Brain Lang. 2013, 127, 222–229. [Google Scholar] [CrossRef]
  23. Knights, E.; Mansfield, C.; Tonin, D.; Saada, J.; Smith, F.W.; Rossit, S. Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions. J. Neurosci. 2021, 41, 5263–5273. [Google Scholar] [CrossRef]
  24. Osiurak, F.; Badets, A. Tool use and affordance: Manipulation-based versus reasoning-based approaches. Psychol. Rev. 2016, 123, 534–568. [Google Scholar] [CrossRef]
  25. Errante, A.; Ziccarelli, S.; Mingolla, G.P.; Fogassi, L. Decoding grip type and action goal during the observation of reaching-grasping actions: A multivariate fMRI study. Neuroimage 2021, 243, 118511. [Google Scholar] [CrossRef] [PubMed]
  26. Brandi, M.L.; Wohlschläger, A.; Sorg, C.; Hermsdörfer, J. The Neural Correlates of Planning and Executing Actual Tool Use. J. Neurosci. 2014, 34, 13183–13194. [Google Scholar] [CrossRef] [PubMed]
  27. Bub, D.N.; Masson, M.E.J. Grasping Beer Mugs: On the Dynamics of Alignment Effects Induced by Handled Objects. J. Exp. Psychol. Hum. Percept. Perform. 2010, 36, 341–358. [Google Scholar] [CrossRef] [PubMed]
  28. Pappas, Z.; Mack, A. Potentiation of action by undetected affordant objects. Vis. Cogn. 2008, 16, 892–915. [Google Scholar] [CrossRef]
  29. Tucker, M.; Ellis, R. On the Relations Between Seen Objects and Components of Potential Actions. J. Exp. Psychol. 1998, 24, 830–846. [Google Scholar] [CrossRef]
  30. Garrido-Vasquez, P.; Wengemuth, E.; Schubo, A. Priming of grasp affordance in an ambiguous object: Evidence from ERPs, source localization, and motion tracking. Heliyon 2021, 7, e06870. [Google Scholar] [CrossRef]
  31. Lee, C.L.; Huang, H.-W.; Federmeier, K.D.; Buxbaum, L.J. Sensory and semantic activations evoked by action attributes of manipulable objects: Evidence from ERPs. NeuroImage 2018, 167, 331–341. [Google Scholar] [CrossRef]
  32. Deng, Y.; Wu, Q.; Wang, J.; Feng, L.; Xiao, Q. Event-related potentials reveal early activation of syntax information in Chinese verb processing. Neurosci. Lett. 2016, 631, 19–23. [Google Scholar] [CrossRef]
  33. Leynes, P.A.; Verma, Y.; Santos, A. Separating the FN400 and N400 event-related potential components in masked word priming. Brain Cogn. 2024, 182, 106226. [Google Scholar] [CrossRef] [PubMed]
  34. Evans, K.M.; Federmeier, K.D. The memory that’s right and the memory that’s left: Event-related potentials reveal hemispheric asymmetries in the encoding and retention of verbal information. Neuropsychologia 2007, 45, 1777–1790. [Google Scholar] [CrossRef] [PubMed]
  35. Thepsatitporn, S.; Pichitpornchai, C. Visual event-related potential studies supporting the validity of VARK learning styles’ visual and read/write learners. Adv. Physiol. Educ. 2016, 40, 206–212. [Google Scholar] [CrossRef] [PubMed]
  36. Emmorey, K.; Akers, E.M.; Martinez, P.M.; Midgley, K.J.; Holcomb, P.J. Assessing sensitivity to semantic and syntactic information in deaf readers: An ERP study. Neuropsychologia 2025, 215, 109171. [Google Scholar] [CrossRef]
  37. Kim, A.E.; McKnight, S.M.; Miyake, A. How variable are the classic ERP effects during sentence processing? A systematic resampling analysis of the N400 and P600 effects. Cortex 2024, 177, 130–149. [Google Scholar] [CrossRef]
  38. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191. [Google Scholar] [CrossRef]
  39. Buxbaum, L.J.; Kyle, K.M.; Tang, K.; Detre, J.A. Neural substrates of knowledge of hand postures for object grasping and functional object use: Evidence from fMRI. Brain Res. 2006, 1117, 175–185. [Google Scholar] [CrossRef]
  40. Brodeur, M.B.; Guérard, K.; Bouras, M. Bank of Standardized Stimuli (BOSS) Phase II 930 New photos. PLoS ONE 2014, 9, e106953. [Google Scholar] [CrossRef]
  41. Brainard, D.H.; Vision, S. The psychophysics toolbox. Spat. Vis. 1997, 10, 433–436. [Google Scholar] [CrossRef]
  42. Pelli, D.G. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spat. Vis. 1997, 10, 437–442. [Google Scholar] [CrossRef]
  43. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef]
  44. Iversen, J.R.; Makeig, S. MEG/EEG Data Analysis Using EEGLAB; Springer: Berlin/Heidelberg, Germany, 2014; Volume 1, pp. 199–212. [Google Scholar]
  45. Gratton, G.; Coles, M.G.; Donchin, E. A new method for off-line removal of ocular artifact. Electroencephalogr. Clin. Neurophysiol. 1983, 55, 468–484. [Google Scholar] [CrossRef] [PubMed]
  46. Dux, P.E.; Marois, R. The attentional blink: A review of data and theory. Atten. Percept. Psychophys. 2009, 71, 1683–1700. [Google Scholar] [CrossRef] [PubMed]
  47. Brown, C.; Hagoort, P. The Processing Nature of the N400: Evidence from Masked Priming. J. Cogn. Neurosci. 1993, 5, 34. [Google Scholar] [CrossRef] [PubMed]
  48. Dudschig, C. Language and non-linguistic cognition: Shared mechanisms and principles reflected in the N400. Biol. Psychol. 2022, 169, 108282. [Google Scholar] [CrossRef]
  49. Agustín, R.T.; González, Z.B.; Camacho, M.A.R.; Almonte, S.; Galindo, W.F.L.; Aguirre, F.A.R. Detection of semantic inconsistencies of motor actions: From language to praxis. Cogn. Syst. Res. 2024, 88, 101292. [Google Scholar] [CrossRef]
  50. Droge, A.; Fleischer, J.; Schlesewsky, M.; Bornkessel-Schlesewsky, I. Neural mechanisms of sentence comprehension based on predictive processes and decision certainty: Electrophysiological evidence from non-canonical linearizations in a flexible word order language. Brain Res. 2016, 1633, 149–166. [Google Scholar] [CrossRef]
  51. Ibanez, A.; Kühne, K.; Miklashevsky, A.; Monaco, E.; Muraki, E.; Ranzini, M.; Speed, L.J.; Tuena, C. Ecological Meanings: A Consensus Paper on Individual Differences and Contextual Influences in Embodied Language. J. Cogn. 2023, 6, 59. [Google Scholar] [CrossRef]
  52. Rueschemeyer, S.-A.; Pfeiffer, C.; Bekkering, H. Body schematics: On the role of the body schema in embodied lexical-semantic representations. Neuropsychologia 2010, 48, 774–781. [Google Scholar] [CrossRef]
  53. Rueschemeyer, S.-A.; van Rooij, D.; Lindemann, O.; Willems, R.M.; Bekkering, H. The Function of Words: Distinct Neural Correlates for Words Denoting Differently Manipulable Objects. J. Cogn. Neurosci. 2010, 22, 1844–1851. [Google Scholar] [CrossRef]
  54. Salazar-López, E.; Schwaiger, B.J.; Hermsdörfer, J. Lesion correlates of impairments in actual tool use following unilateral brain damage. Neuropsychologia 2016, 84, 167–180. [Google Scholar] [CrossRef]
  55. Kiefer, M.; Brendel, D. Attentional Modulation of Unconscious ‘Automatic’ Processes: Evidence from Event-related Potentials in a Masked Priming Paradigm. J. Cogn. Neurosci. 2006, 18, 184–198. [Google Scholar] [CrossRef] [PubMed]
  56. Yang, Y.H.; Zhou, J.; Li, K.-A.; Hung, T.; Pegna, A.J.; Yeh, S.-L. Opposite ERP effects for conscious and unconscious semantic processing under continuous flash suppression. Conscious. Cogn. 2017, 54, 114–128. [Google Scholar] [CrossRef] [PubMed]
  57. Velji-Ibrahim, J.; Crawford, J.D.; Cattaneo, L.; Monaco, S. Action planning modulates the representation of object features in human fronto-parietal and occipital cortex. Eur. J. Neurosci. 2022, 56, 4803–4818. [Google Scholar] [CrossRef] [PubMed]
  58. Ghio, M.; Cassone, B.; Tettamanti, M. Unaware processing of words activates experience-derived information in conceptual-semantic brain networks. Imaging Neurosci. 2025, 3, imag_a_00484. [Google Scholar] [CrossRef]
  59. Fagg, A.H.; Arbib, M.A. Modeling parietal-premotor interactions in primate control of grasping. Neural Netw. 1998, 11, 1277–1303. [Google Scholar] [CrossRef]
  60. Kozunov, V.V.; West, T.O.; Nikolaeva, A.Y.; Stroganova, T.A.; Friston, K.J. Object recognition is enabled by an experience-dependent appraisal of visual features in the brain’s value system. Neuroimage 2020, 221, 117143. [Google Scholar] [CrossRef]
  61. McKoon, G.; Macfarland, T. Event templates in the lexical representations of verbs. Cogn. Psychol. 2002, 45, 1–44. [Google Scholar] [CrossRef]
  62. Barde, L.; Buxbaum, L.J.; Moll, A.D. Abnormal reliance on object structure in apraxics’ learning of novel object-related actions. J. Int. Neuropsychol. Soc. 2007, 13, 997–1008. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yu, Y.; Huang, Q.; Gao, S.; Li, A. Embodied Cognition of Manipulative Actions: Subliminal Grasping Semantics Enhance Using-Action Recognition. Brain Sci. 2025, 15, 1206. https://doi.org/10.3390/brainsci15111206

