Self–other discrimination is pivotal in social interaction (e.g., [1]). Face recognition plays a major role in distinguishing between self and other (e.g., [4]), whereas other body parts are assumed to play a less important role. However, neuroimaging studies have revealed that a specific brain area, the extrastriate body area (EBA), is selectively activated when perceiving body parts (e.g., [5]; see [8] for a review), in addition to the face-specific fusiform face area [9]. The discovery of the body-specific EBA has sparked increasing research interest in how body parts are perceived and recognized (e.g., [10]).
The hand is undoubtedly a crucial body part and serves as an important interface between the external world (including other persons) and the individual self. Researchers are interested in many aspects of the hand [13]. For example, the role of the hand has been investigated in non-verbal communication (e.g., [15]), motor behavior (e.g., [16]), and haptic perception (e.g., [17]). Computer vision studies have investigated how a 3D hand structure can be estimated and reconstructed from a 2D image (e.g., [18]). In one neuroimaging study, Bracci, Ietswaart, Peelen, and Cavina-Pratesi [20] found a hand-preferring region in the EBA, suggesting that the representation of the hand in the extrastriate visual cortex is distinct from that of other body parts. Other researchers have found that the right EBA responds more to an allocentric view of body parts than to an egocentric view, and that this preference of the right EBA for the allocentric view could be linked with social cognition (i.e., self–other discrimination) [21]. In addition to these studies, how people visually distinguish their own hands from the hands of others has recently been investigated, as discussed below.
Frassinetti and colleagues [11] reported a “self-advantage” effect in a visual matching-to-sample task, finding that self-related body stimuli are processed faster and more accurately than other-related body stimuli. Although the task used by Frassinetti et al. involved visual matching, it did not require the identification of hand ownership. Ferri, Frassinetti, Costantini, and Gallese [24] further investigated this effect using two tasks: one involved laterality judgments of the participant’s own hand and another person’s hand, in which participants did not need to explicitly recognize the identity of the hand; the other involved explicit self-hand recognition. Ferri et al. [24] reported that the self-advantage effect emerged only in the laterality judgment, not in the explicit self-recognition task. Regarding the effects of visual perspective on explicit self-recognition, Conson, Aromino, and Trojano [25] reported that both right- and left-handed individuals were faster at recognizing their own dominant hands from an egocentric perspective and others’ non-dominant hands from an allocentric perspective.
Other persons can be classified along a continuum from stranger to spouse in terms of “mental closeness” to oneself (i.e., familiarity, intimacy). For example, a romantic partner is regarded as a part of the self [26], and people indeed experience “self–partner confusion” concerning personality traits when they attempt to recognize them [26]. Face recognition has been found to be modulated by personal familiarity (e.g., [28]). The extent of interaction via the hands (e.g., touching another person’s body) changes according to the mental closeness of the individuals involved. Therefore, visual hand recognition could also be modulated by personal familiarity, although personal identification from bodies is assumed to be more difficult than from faces. Answering this question could contribute to a further understanding of the mechanisms underlying self–other discrimination.
The aim of the current study was to investigate whether and how close relationships affect performance on an explicit hand discrimination task (cf. [30]). Specifically, we examined whether the explicit recognition of a partner’s hand differs from the explicit recognition of the hand of another person with whom the participant has no personal relationship. A hand laterality judgement task would have provided deeper insight into the mechanisms of hand recognition. However, the current study included a “deceptive” procedure when taking photographs of hands, as described below, and a laterality judgement task would have had to be performed before obtaining participants’ consent to this deceptive procedure. The hand laterality judgement task was therefore omitted, primarily for ethical reasons.
The current study explored how close relationships affect performance on an explicit hand recognition task that involves comparing a partner’s hand with an unknown person’s hand. In addition to this comparison, participants in another group completed an explicit self-hand recognition task to test whether the result of Ferri et al. [24] could be replicated.
In Task 1, longer RTs were found for images of the participant’s own hand than for images of another person’s hand. Although no significant difference in accuracy was found in Task 1, the current results are in accordance with those of Ferri et al. [24]. Specifically, a “self-advantage” effect was not revealed when participants were required to explicitly recognize hand identity. This result contrasts with the results of the hand laterality judgment task using images of one’s own and another’s hand (a task that can be regarded as an implicit recognition task of hand ownership), in which the self-advantage effect emerged. Concerning the results of the self–other discrimination and hand laterality judgment tasks, previous studies [24] have suggested that motor simulation based on implicit sensorimotor knowledge about body parts (in this case, the hands), including a combined visuo-sensorimotor strategy [34], is necessary for the self-advantage effect. Indeed, Conson et al. [36] demonstrated that the self-advantage effect can emerge even in an explicit hand recognition task when the task requires a high “sensorimotor load” (see also [37]). Therefore, the cost of the sensorimotor load imposed by a task, rather than a dissociation between implicit and explicit representations of one’s own body, is an essential component in the identification of one’s own hand image.
As described in the introduction section, activation in the right EBA is larger for the allocentric view of body parts than for the egocentric view [21], and this view difference (egocentric or allocentric) is an important cue for self–other discrimination (see also [38]). Conson et al. [25] performed a task similar to Task 1 in the present study, and their results showed an interaction between hand ownership, hand image laterality, and visual perspective, such that RTs were significantly faster for recognizing images of another’s left hand than of another’s right hand when the image was presented upside down, thereby confirming the association between hand ownership and visual perspective. Regarding visual perspective, the present study (Task 1) revealed (1) slower responses to the allocentric view of the hand image when responding with the right hand, and (2) a greater self-bias for left-hand responses (in comparison with right-hand responses) when the hand image was presented from the egocentric view. Although the reason why slower responses to the allocentric view emerged only when responding with the right hand (not with the left hand) remains to be clarified, the second result can be discussed in terms of the association between self-recognition (favored by the egocentric view) and the right hemisphere, as described below.
Several studies have reported that hand responses are influenced by the controlling contralateral hemisphere when participants are required to respond to stimuli that are strongly lateralized in hemispheric processing [39]. Based on this evidence, Keenan et al. [42] performed a face-recognition task and found that self-face stimuli were identified more rapidly than non-self-face stimuli when participants responded with their left hand, indicating that self-recognition may be correlated with activity in the right hemisphere. Although the current RT results were not in line with Keenan et al.’s [42] findings, ln(β) was close to zero (0.09) when participants responded to the upright (egocentric) image with the left hand, compared with the larger ln(β) values in the other conditions, where responses were biased toward the other (approximately 0.47). Of course, “close to zero” means a neutral response bias and therefore cannot be described as a self-biased response. However, it can be argued that responses to the upright image made with the left hand were relatively biased toward the self in comparison with the other conditions, although the reason for the general response bias toward the other in the current experiment requires further clarification. This finding suggests that self-recognition processing mediated by the right hemisphere was relatively strongly activated in the upright image (egocentric view) condition when responding with the left hand. However, a recent study by De Bellis et al. [43], using the same experimental paradigm as Ferri et al. [24] together with on-line high-frequency repetitive transcranial magnetic stimulation (rTMS), revealed that stimulation of the right EBA enhanced the implicit visual processing of others’ hands, whereas stimulation of the left EBA led to a self-advantage. Therefore, how the left EBA interacts with other areas of the right hemisphere should be investigated more extensively in future studies. It should be noted that the primary procedural difference between Conson et al. [25] and the present study is that Conson et al. [25] used foot responses, whereas the present study employed hand responses. Whether and how the response effector (i.e., hand or foot) affects a hand recognition task must also be clarified.
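The ln(β) values discussed above are the standard response-bias measure from signal detection theory. As a minimal sketch, assuming ln(β) was derived in the conventional way from hit and false-alarm rates (the exact response coding used in the present tasks is not specified here), the computation is:

```python
from statistics import NormalDist

def ln_beta(hit_rate: float, fa_rate: float) -> float:
    """Signal detection theory response-bias measure ln(beta).

    ln(beta) = (z(FA)^2 - z(H)^2) / 2, where z is the inverse of the
    standard normal CDF. Values near zero indicate neutral responding;
    larger positive values indicate a conservative bias (in the tasks
    above, a bias toward "other"/"unknown" responses).
    """
    z = NormalDist().inv_cdf
    z_h, z_fa = z(hit_rate), z(fa_rate)
    return (z_fa ** 2 - z_h ** 2) / 2
```

For instance, symmetric hit and false-alarm rates (e.g., 0.70 and 0.30) yield ln(β) = 0, i.e., no bias toward either response alternative, which is the neutral baseline against which the values of 0.09 and approximately 0.47 reported above can be interpreted.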
The main focus of the current experiment was the effect of a close relationship (familiarity) on explicit hand recognition. Specifically, based on the “self-expansion (to the romantic partner)” account [26], we focused on whether a “partner-disadvantage” effect (in comparison with unknown persons), analogous to the self-disadvantage effect, emerges in explicit hand recognition. The results revealed longer RTs for images of a partner’s hand than for images of an unknown person’s hand, regardless of whether responses were made with the right or left hand. In the case of face recognition, Kircher et al. [44] argued that one’s own face is not processed differently at the behavioral level from another over-learned, emotionally salient face (i.e., the partner’s face). The current results revealed longer RTs in the self and partner conditions than in the other and unknown-person conditions. As for response bias, in the partner/unknown opposite-sex person discrimination task (Task 2), ln(β) was significantly smaller for left-hand responses (−0.031) than for right-hand responses (0.553), indicating that participants’ responses were relatively biased toward the partner when responding with the left hand. The bias toward the partner with the left hand (Task 2) and the bias toward the self for the upright image with the left hand (Task 1) together indicate that the explicit recognition of one’s own and one’s partner’s hand is similarly processed by the right hemisphere, as reflected in left-hand responses. These results suggest that the explicit recognition processes for a romantic partner’s hand might be (at least partially) similar to those for one’s own hand, implying that self-expansion to a romantic partner can occur in explicit visual hand recognition.
A limitation of this study is the relatively small sample size; a future study whose design is informed by a power analysis using the effect sizes obtained here should therefore be conducted. One remaining question is how the implicit hand recognition of partners is processed, so a hand laterality judgment task using images of partners’ and unknown persons’ hands should also be performed. In addition, the relationship length of each participant in Task 2 was relatively short because the data were collected from university students. Thus, future studies that include relationship type (e.g., lover or spouse) and relationship length as independent variables are needed.
Overall, the current results indicate that a close relationship can render the process of explicitly recognizing a partner’s hand similar to the process of explicitly recognizing one’s own hand.