Spatializing Emotions Besides Magnitudes: Is There a Left-to-Right Valence or Intensity Mapping?

Abstract: The Spatial–Numerical Association of Response Codes (SNARC), namely the automatic association between smaller numbers and left space and between larger numbers and right space, is often attributed to a Mental Number Line (MNL), in which magnitudes would be placed left-to-right. Previous studies have suggested that the MNL could be extended to emotional processing. In this study, participants were asked to carry out a parity judgment task (categorizing one-to-five digits as even or odd) and an emotional judgment task, in which emotional smilies were presented with four emotional expressions (very sad, sad, happy, very happy). Half of the sample was asked to categorize the emotional valence (positive or negative valence), and the other half was asked to categorize the emotional intensity (lower or higher intensity). The results of the parity judgment task confirmed the expected SNARC effect. In the emotional judgment task, the performance of both subgroups was better for happy than for sad expressions. Importantly, a better performance was found, only in the valence task, for lower intensity stimuli categorized with the left hand and for higher intensity stimuli categorized with the right hand, but only for happy smilies. The present results show that neither emotional valence nor emotional intensity alone is spatialized left-to-right, suggesting that magnitudes and emotions are processed independently from one another, and that the mental representation of emotions could be more complex than the bi-dimensional left-to-right spatialization found for numbers.


Introduction
The Spatial-Numerical Association of Response Codes (SNARC) is a well-documented phenomenon, first described by Dehaene and colleagues [1], consisting of the automatic association between left space and smaller numbers, and between right space and larger numbers. It has been shown that, even when the task is independent of a quantity judgment (for instance, the categorization of numbers as even or odd in a parity judgment task), participants are faster at correctly categorizing smaller numbers with the left responding hand, and larger numbers with the right responding hand. Moreover, the magnitude of a number is not defined a priori, but is relative to the specific range presented in a task: when participants were required to carry out the parity judgment task using two different ranges of numbers, namely from zero to five and from four to nine, the numbers four and five were categorized more quickly with the right hand in the first range (larger numbers), but the same numbers were categorized more quickly with the left hand in the second range (smaller numbers; [1]).
Starting from this pioneering evidence, different findings have been described in support of this automatic association (e.g., [2,3]) in the most disparate tasks, such as time categorization [4], physical size [5] and weights [6], musical pitch [7], and non-symbolic stimuli categorization [8], among others. Concerning emotional stimuli, Holmes and Lourenco [17] asked participants to categorize emotional faces as happy or not happy (happy task), or as angry or not angry (angry task). This latter experiment confirmed the existence of a mental magnitude line: the left-to-right orientation for emotional intensity was flexible, as previously shown by Dehaene et al. for numbers [1], being significant in terms of "happiness" in the happy task and in terms of "angriness" in the angry task (from lower intensity to higher intensity emotional stimuli).
Pitt and Casasanto [18] proposed that, concerning emotional stimuli, valence mapping occurs, as opposed to the intensity mapping proposed by Holmes and Lourenco [17]. According to Pitt and Casasanto, in fact, positive emotions would be associated with the dominant hemispace (right space for right-handers) and negative emotions would be associated with the non-dominant hemispace (left space for right-handers). In Experiment 1a, described by the authors, thirty-two right-handers were required to categorize four emotional words (low and high intensity, positive and negative valence words: bad, horrible, good, perfect) by using the left and right hand. In particular, half of the sample was required to categorize stimuli as positive or negative in valence (valence judgment task), and the other half of participants was required to categorize the same words as stimuli expressing lower or higher emotional intensities (intensity judgment task). The results confirmed the occurrence of valence mapping, with faster left/right-hand responses for negative/positive words, respectively, but no intensity mapping was found. Interestingly, in a second experiment, Pitt and Casasanto tried to reconcile their results with those described by Holmes and Lourenco [17]. To this aim, the authors measured the area of the mouth of the facial stimuli used by Holmes and Lourenco, and they showed that the intensity mapping described in that study was explained by this physical feature of the stimuli used. In other words, Pitt and Casasanto showed that the area of the mouth significantly predicted the results described by Holmes and Lourenco (more intense emotional expressions corresponding to stimuli with a larger area of the mouth), and they explained this evidence in the context of the SNARC effect already present for physical size, with smaller areas preferentially placed on the left, and larger areas preferentially placed on the right [5].
Nevertheless, Holmes, Alcat and Lourenco's [19] contrasting findings provided important evidence: they replicated the original task with emotional faces [17], for which Pitt and Casasanto [18] had shown that the size of the mouth explained the original findings, but they occluded the mouths of the emotional faces by covering them with a white rectangle. By using this elegant manipulation, the authors provided further confirmation of their initial results, revealing the existence of intensity mapping even when the mouth of the stimuli was not visible to the observers. The authors themselves, however, highlighted that participants could have inferred the size of the mouth from other facial cues (i.e., eye expression, facial shape). In an attempt to further clarify this possibility, they also replicated the task used by Pitt and Casasanto with emotional words, but with a crucial difference: prior to the categorization task, they gave their participants a reference word. The authors started from the hypothesis that, in the task carried out by Pitt and Casasanto, no intensity mapping was found because participants were required to categorize each word as less or more intense, but without a reference to which each stimulus had to be compared. Thus, they first presented a target word, and then they asked participants to categorize the experimental words as less or more intense with respect to that specific target word. In this case, too, the authors found faster responses for less/more intense emotional words categorized with the left/right hand, respectively.
It can be concluded that the possible left-to-right spatialization of emotions remains unclear. In order to disentangle the existence of intensity mapping [17] and that of valence mapping [18] for emotional expressions, in the present study, a protocol similar to that used by Pitt and Casasanto was employed; however, in order to align with the original task created by Holmes and Lourenco, emotional words were replaced with emotional expressions. Starting from the criticisms raised with regard to the area of the mouth in facial stimuli [18], as well as to the other facial cues which could lead participants to infer the mouth area even when the mouth is covered (namely the eyes and facial shape, [19]), positive and negative expressions with lower and higher intensity were presented by means of smilies. The use of emoticons and smilies is widespread in online chats and communications, so these forms of emotional expression have been well documented and studied in the last two decades (e.g., [20][21][22][23]). In smilies, facial emotions are expressed in a simplified and iconic way, using a curved line to represent the mouth. In the present study, the length of the curved line representing the mouth was kept constant in all the stimuli, resulting in smilies expressing four different emotional expressions (very negative, negative, positive, very positive). The same number of participants as in Experiment 1 described by Pitt and Casasanto were tested, and, as in their study, the whole sample was divided into two halves: half of the sample was administered a valence judgment task (positive/negative emotions) and the other half was administered an intensity judgment task (lower/higher emotional intensity). Moreover, as carried out by Holmes and Lourenco, the whole sample was also required to carry out a parity judgment task with Arabic numbers.
In contrast to previous studies, in which the performance in intensity and valence tasks was analyzed only as the difference between left-hand and right-hand response times (RTs), in the present study, both RTs and accuracy were considered. Furthermore, in order to investigate the possible spatialization of emotional stimuli independently of the expected SNARC effect, separate analyses were carried out on the numerical task and on the emotional task. It has to be noted, in fact, that Holmes et al. [17,19] and Pitt and Casasanto [18] investigated emotional spatialization by regressing left-right RT differences on the performance of the numerical magnitude task, in order to obtain the unstandardized slope coefficient of best-fitting linear regressions. In the present study, numerical and emotional tasks were analyzed separately, because it cannot be ruled out that the two stimulus categories (numbers and emotions) are mentally represented independently from one another. Thus, the well-established SNARC effect was hypothesized in the numerical task, which was used as a control condition to test the validity of the general procedure. However, the main aim of this study was to assess the possible presence of emotional valence mapping and emotional intensity mapping, independent of the left-to-right MNL for magnitudes, starting from the hypothesis that both of these types of automatic mapping could occur, possibly depending on the specific instructions used. In particular, it was hypothesized that valence mapping (in the valence task) and intensity mapping (in the intensity task) would be associated with a better performance for lower intensity and negative emotions categorized with the left hand, and with a better performance for higher intensity and positive emotions categorized with the right hand.

Participants
Thirty-two healthy participants (16 females) took part in the study. The participants' ages were between 20 and 31 years (mean ± standard error: 24.44 ± 0.42 years) and all participants were right-handed, with a mean handedness score of 76.01 (± 2.25), as measured by means of the Edinburgh Handedness Inventory [24], in which a score of -100 corresponds to a complete left preference, and a score of +100 corresponds to a complete right preference. The sample was randomly divided into two subsamples: sixteen participants were asked to carry out an intensity judgment task (group "Intensity": eight females, age: 24.5 ± 0.78 years, handedness: 77.55 ± 3.7), while the remaining sixteen participants were asked to carry out a valence judgment task (group "Valence": eight females, age: 24.38 ± 0.41 years, handedness: 74.47 ± 2.86). The whole sample also completed a parity judgment task (for more details, see the Procedure section). Participants had normal or corrected-to-normal vision, were unaware of the purpose of the study and were tested in isolation.

Stimuli
The Experiment included a numerical test and an emotional test. All stimuli consisted of images created using Microsoft PowerPoint 2007 (Microsoft Corp., Redmond, WA, USA): in the numerical test, stimuli were the Arabic numbers 1, 2, 4 and 5, presented in black, with a height of 12 cm, in Arial font; in the emotional test, stimuli were "smilies" (see Figure 1). In particular, four different schematic face-like stimuli were used as smilies: a circle was created with a diameter of 12 cm, and two black dots were placed inside the circle, 4.5 cm from the upper point of the diameter and spaced 5.5 cm apart from each other, to represent the eyes. The four smilies differed from one another in the curved line used to represent the mouth: two smilies were created with a negative emotional expression (sadness) by inserting a downward curved line; in the other two smilies, the same curve was rotated by 180° to represent a positive emotional expression (happiness). In all the smilies, the curve representing the mouth measured 9.42 cm, but the two stimuli in each emotional category (sadness and happiness) differed from one another in the curvature of the "mouth", which represented the intensity of the emotional expression. Specifically, two different arcs were calculated with the formula "diameter × 3.14 × angle/360 = arc length", one corresponding to 90° of a circle with a 12 cm diameter (a "wider" curve, representing a less intense expression: 12 cm × 3.14 × 90°/360 = 9.42 cm), and the other corresponding to 180° of a circle with a 6 cm diameter (a "less wide" curve, representing a more intense expression: 6 cm × 3.14 × 180°/360 = 9.42 cm). Thus, four smilies were obtained, showing (i) very happy, (ii) happy, (iii) sad and (iv) very sad emotional expressions, with the curve representing the mouth having the same length in all cases. Lines were drawn in black and the "face" was colored in yellow (Figure 1).
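As a check on the stimulus construction, the arc formula used to equate the mouth lengths can be reproduced in a few lines of code (an illustrative sketch, not part of the original protocol; the function name is ours):

```python
import math

def arc_length(diameter_cm: float, angle_deg: float) -> float:
    # Arc length of a circular arc: diameter * pi * angle / 360
    return diameter_cm * math.pi * angle_deg / 360

# "Wider" curve (less intense expression): 90 degrees of a circle with a 12 cm diameter
wider = arc_length(12, 90)
# "Less wide" curve (more intense expression): 180 degrees of a circle with a 6 cm diameter
less_wide = arc_length(6, 180)

print(round(wider, 2), round(less_wide, 2))  # both arcs come out at about 9.42 cm
```

Both mouths thus have the same length, so only the curvature (and hence the perceived intensity) distinguishes the two stimuli within each emotional category.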

Procedure
Each participant carried out both the numerical test and the emotional test, and each test was divided into two sessions, which differed in terms of the association between responding hand (left/right) and response. The order of tests and sessions was randomized across participants, and short breaks were inserted between two consecutive parts, in order to allow participants to rest and to focus on the new instructions. Each session consisted of four stimuli (numbers one, two, four and five for the numerical test; smilies with very sad, sad, happy and very happy expressions for the emotional test) repeated 24 times, for a total of 96 trials in each session. In each trial, a black fixation cross was presented in the center of a white screen for 500 ms, and it was followed by the stimulus (number or smiley), presented in the center of the screen, until the participant gave a response or for a maximum of 2000 ms (if the response was slower than 2000 ms, the stimulus disappeared and the response was not recorded). After the stimulus disappeared, the screen became black for 500 ms and then the next trial started (Figure 2). The order of the stimuli was randomized within and across participants and sessions. Prior to the beginning of each session, written instructions were presented, and participants were asked to provide their responses as quickly and as accurately as they could. Sixteen stimuli were presented prior to each session, to allow the participants to familiarize themselves with the task and with the responding keys, and these responses were excluded from the analyses.
In the numerical test, all participants were required to categorize each number as even or odd (parity judgment task), by pressing two different keys using the left index finger (key "g") and the right index finger (key "ù"). The association between the even/odd response and the left/right key was inverted in the two sessions of the test. In the emotional test, the "Intensity" subgroup was asked to judge the intensity of the emotional expression depicted in each smiley as lower/higher emotional intensity (independent of the positive or negative valence: intensity judgment task), whereas the "Valence" subgroup was asked to judge the valence of the emotional expression depicted in each smiley as positive/negative (independent of the lower or higher intensity of the expression: valence judgment task). For both the valence and the intensity judgment task, the association between either the positive/negative or the lower/higher response and the left/right key was inverted in the two sessions of the test. Thus, all participants carried out both the numerical test (parity judgment task) and the emotional test (either valence or intensity judgment task) with the two possible associations between hand and response.
The whole protocol was controlled by means of the E-Prime software (Psychology Software Tools Inc., Pittsburgh, PA, USA), and lasted about 15 min. The experimental procedures were conducted in accordance with the guidelines of the Declaration of Helsinki.

Data Analysis
Statistical analyses were carried out using Statistica 8.0 software (StatSoft, Inc., Tulsa, OK, USA), with a significance threshold of p < 0.05. Data were analyzed by means of repeated measures analyses of variance (ANOVAs), and post-hoc comparisons were carried out using the Duncan test. In order to consider both accuracy and response times in an overall analysis, Inverse Efficiency Scores (IES) were used as the dependent variable. IES were obtained by dividing the mean response time of the correct responses by the proportion of correct responses in each condition [25][26][27], so that lower IES correspond to faster and more accurate responses (i.e., a better performance). Reaction times were excluded when they were lower than 100 ms or higher than 1000 ms.
An initial ANOVA was carried out on the performance of the whole sample in the parity judgment task: responding hand (left, right), parity (even, odd) and magnitude (smaller, larger) were considered as within-subject factors. Then, two different ANOVAs were carried out on the emotional test, considering the responding hand (left, right), valence (sad, happy) and intensity (low, high) as within-subject factors: one ANOVA involved the responses of the "Intensity" subsample (intensity judgment task), the other ANOVA involved the responses of the "Valence" subsample (valence judgment task). In the first step, the gender of participants was considered as a between-subjects factor for each of the three ANOVAs (parity, intensity and valence judgment tasks), but it was not significant and did not interact with the other factors, and thus it was excluded from the analyses.

Results
The first ANOVA, carried out on the parity judgments, revealed a significant main effect of parity, with a better performance for even than for odd numbers, and a significant interaction between responding hand and magnitude, showing a better performance with the left hand for smaller than for larger numbers (p = 0.016), and with the right hand for larger than for smaller numbers (p = 0.004). Moreover, when smaller numbers were presented, the performance was better with the left hand than with the right hand (p = 0.016), whereas when larger numbers were presented, the performance was better with the right hand than with the left hand (p = 0.004). In the emotional test, the performance was better for happy than for sad expressions in both subgroups; however, only in the valence judgment task was the interaction between responding hand, valence and intensity significant (Figure 4).
Post-hoc comparisons showed that the performance in the valence judgment task was better for smilies representing a higher intensity happy expression than for those representing a sad expression, with both the left hand (p = 0.036) and the right hand (p < 0.001). Importantly, the happy expression was better categorized with the right hand when the intensity was higher than when it was lower (p < 0.001), whereas when the emotional intensity was lower, the happy expression was better categorized with the left hand than with the right hand (p = 0.012).

Discussion
The present study was aimed at investigating whether intensity mapping and valence mapping occur during facial emotion categorization, in line with an MNL expected in a Western sample with a left-to-right reading/writing system. The experimental idea starts from two different findings: on the one hand, Holmes and Lourenco [17] found that participants were faster at categorizing less/more intense emotional expressions with the left/right hand, respectively, independent of the emotional valence (namely, an intensity mapping). The authors concluded that possibly all stimulus dimensions could be mentally placed on a left-to-right mental space, similar to the left-to-right placement already found for numbers (SNARC effect). On the other hand, Pitt and Casasanto [18] disputed this conclusion, showing that the intensity mapping described by Holmes and Lourenco was due to the size of the mouth of the stimuli used, and that this intensity mapping disappeared when emotional words were used as stimuli. Even though a subsequent study also confirmed intensity mapping when facial stimuli were presented with the mouth occluded [19], it remains unclear whether, in the latter case, participants could infer the size of the mouth from other facial cues. In the present study, this possible confounding effect was bypassed by presenting emotional smilies instead of human faces. In this way, the mouth of the stimuli was represented by means of a simple curved line, avoiding the problem of the mouth area. Moreover, although simplistic, the smiley also allowed us to avoid other facial cues (such as facial shape and eye expressions), which could possibly be exploited by the observers to infer the expression of the mouth.
In summary, emotional smilies allow us to control for the possible confounding effects by ensuring that a positive or negative emotion with a lower or higher intensity is represented only by means of a curved line representing the mouth, and this line can be of the same length in all of the emotional expressions, avoiding other physical confounding effects (i.e., left/right preference for shorter/longer lines). Moreover, a classical parity judgment task was also administered to all participants, in order to validate the specific protocol used. Emotional and numerical tasks, however, were analyzed separately in an attempt to investigate each of the two tasks independent of each other, in order to find a solution to the contrasting results previously found using emotional stimuli.
The present results confirmed the SNARC effect for one-digit numbers, even though the range used was very small (only four stimuli were presented), showing a better performance with the left or right hand when smaller or larger numbers, respectively, were categorized as even or odd (see also [28]). Moreover, the performance was better for even than for odd numbers, confirming the previous evidence of an "odd effect" [29]. Thus, all of the expected findings were confirmed in the numerical task, showing that the protocol used (number of trials, response keys) was adequate to allow for the left-to-right spatialization of the stimuli presented.
In both the valence and the intensity judgment tasks, positive emotions were better categorized than negative emotions, revealing a higher proneness to recognize positive over negative expressions. This result could be explained by the fact that the mouth is the most important facial region in recognizing happiness, whereas sadness and anger are mainly recognized from the eyes [30]. In this study, all smilies had the same non-emotional representation of the eyes (two black dots), with emotional valence and intensity being conveyed only by the line representing the mouth. This could have led participants to mainly focus on the mouth and thus to better process the positive expression.
Importantly, only in the valence judgment task was the interaction between responding hand, valence and intensity significant, confirming not only a positive expression advantage (with respect to the negative expression) for higher intensity emotions, but also revealing an interaction that seems to confirm both valence mapping and intensity mapping: a better performance was found, in terms of both accuracy and response times, when the positive expression was categorized with the right hand (valence mapping: positive emotion better categorized with the right hand), but only when the emotional intensity was higher with respect to the condition in which the emotional intensity was lower (intensity mapping: higher intensity emotion better categorized with the right hand). In the same vein, when the emotional intensity was lower, the positive emotion was better categorized by using the left rather than the right hand. This pattern of results suggests that neither valence alone nor intensity alone is mentally placed on a left-to-right mental line, but that they can interact with one another. In other words, intensity mapping can occur (lower/higher intensity better categorized with the left/right hand, respectively), but only for the positive expression, and only when an emotional categorization task is required. As specified above, the fact that this interaction is significant only for the positive expression could be attributed to the higher salience of the mouth in the recognition of happiness [30], and to the fact that, in the stimuli used here, only the mouth changed among the stimuli, conveying the specific emotion. According to the present results, it can be concluded that an interaction exists between space and emotions that involves both emotional valence and emotional intensity. This pattern of results can be viewed as being in line with the existence of intensity mapping, as proposed by Holmes et al. [17,19], but it also highlights the importance of orienting the attention of the observers to the emotional valence in order to find such evidence of mapping, in accordance with Pitt and Casasanto [18].
However, another possible explanation can be proposed, starting from the fact that, in contrast to the results described in the abovementioned studies, the present results were obtained by analyzing the numerical task and the emotional task separately. It can be suggested, in fact, that the results of the present study cast doubt on the existence of a bi-dimensional left-to-right mental disposition of facial emotional stimuli. As has been previously demonstrated, the MNL, which is often considered the basis for the SNARC effect, is functionally attributed to the activity of the parietal cortex, a multisensory associative area responsible for the processing of both space and magnitudes [9]. In support of this view, it is well known that the parietal lobe is a crucial region for visuospatial processing, and that parietal damage can cause Gerstmann syndrome, involving dyscalculia, agraphia, finger agnosia, and left-right confusion [31]. Concerning emotions, there is no cerebral area specifically involved in all emotional processing. In fact, different cortical and subcortical structures are involved in the detection of emotions [32][33][34][35], including both temporo-parietal sites and prefrontal areas, besides subcortical limbic structures. Moreover, the cerebral areas involved in emotional detection could differ according to the stimulus categories [36], and thus the results found by presenting emotional words are hardly comparable with those obtained by presenting emotional faces [37][38][39].
Looking at the results found in previous studies, it can be hypothesized that the intensity mapping described by Holmes and colleagues [17,19] can be attributed to the area of the mouth of the facial stimuli used, as proposed by the authors themselves, and that the valence mapping described by Pitt and Casasanto using emotional words [18] might involve different cerebral circuits than those involved in the processing of emotional faces (it must be noted that, by varying the protocol, Holmes et al. [19] did not find support for valence mapping by means of emotional words). Thus, it can be speculated that the cerebral substrate for magnitudes and emotions (in particular, emotional faces) is different, being localized in the parietal cortex for magnitudes and being more distributed for emotions. At a cerebral level, a left-to-right orientation in the (imagery or real) space means a right-to-left hemispheric involvement, given that the perceptual and motor pathways are organized in a contralateral way (i.e., left hemisphere/right hemispace and vice versa). In this model, the right hemisphere would be specialized in processing negative emotions (valence mapping) and/or emotions with a lower intensity (intensity mapping), and the left hemisphere would be specialized in processing positive/higher intensity emotions. This hemispheric asymmetry has been widely investigated in the literature on emotional processing and it is known as the "valence hypothesis" [40]. Such a theory of hemispheric asymmetries for positive and negative emotions, however, is considered to be opposed to another well-documented hypothesis, in which a right-hemispheric superiority is postulated for all emotions, including both negative and positive valence (known as the "right hemisphere hypothesis"; see [41]). 
These two main theories on the cerebral substrate of emotional processing were reconciled in a model in which, after a first right-hemispheric posterior activation for all emotional stimuli, a frontal left/right activity would occur specifically for positive/negative emotions, respectively [42,43]. In this frame, it could be hypothesized that the emotional pathways are activated only when a valence judgment is required. Then, when the emotion to be judged is negative in valence, it would activate only the right hemisphere (according to both the valence hypothesis and the right hemisphere hypothesis), whereas when the emotion to be judged is positive in valence, it would also activate the left hemisphere (valence hypothesis). It can be speculated that this is the reason why a left/right asymmetry occurs only for the positive expression: the activation of both cerebral hemispheres for the happy expression (a posterior right-hemispheric activity followed by an anterior left-hemispheric activity) would lead to a behavioral asymmetry only when the attention of the observers is focused on emotional valence (otherwise the emotional pathways would not be activated), and only when a positive expression is presented (otherwise, the negative valence involves only the right hemisphere). This speculation is also in agreement with the finding that the expected SNARC effect for numbers was found here, revealing that the MNL at the basis of this left-to-right asymmetry is independent of the possible asymmetries for emotional stimuli. In light of this, it could be concluded that asymmetries exist for both magnitudes and emotions, but that they are not dependent upon one another, and that they have to be considered as being based on different cerebral circuits.
The well-established left-to-right MNL for numbers has been found here, confirming that the interaction between responding hand and magnitude is so strong that it can also be found by using a restricted range of numbers (from one to five). However, this bi-dimensional mental arrangement for simple stimuli could be inadequate for more complex stimuli, such as emotional faces, in which different characteristics relate to their processing. These characteristics interact with one another, leading to a complex pattern of interaction, which involves hemispheric asymmetries, emotional intensity and valence, but possibly includes other variables not considered here as well (e.g., stimulus categories). For instance, despite the fact that the SNARC effect for magnitudes has also been described by presenting stimuli in the auditory modality [44], different patterns of asymmetries were found for visual and auditory emotional stimuli [45,46], and this represents another piece of evidence in support of the fact that emotional processing is a complex cognitive task, which cannot be reduced to a simple left-to-right bi-dimensional organization taking place in a specific cortical area.
A final remark has to be made about the fact that the present results, as well as most of the previous findings described in this study, refer to Western participants, in whom the left-to-right reading-writing direction overlaps with the expected (and confirmed) left-to-right placement of numbers (i.e., the SNARC effect as due to the MNL). As mentioned in the Introduction, however, the directionality of the MNL is much less clear in cultures with an opposite reading and writing direction [47], and future studies should take into consideration possible cross-cultural differences in this domain, in order to shed more light on the role of language (i.e., culture) in the mental representation of magnitudes, space, time, emotions, and so on.

Conclusions
The present results showed that neither clear valence mapping nor clear intensity mapping occur when participants-who show the SNARC effect in a parity judgment task-are asked to categorize stimuli that resemble faces. In fact, the interactions between responding hand and valence, and between responding hand and intensity, were not significant. Nevertheless, the interaction between responding hand, valence and intensity suggests that when participants' attention is focused on the emotional valence (a qualitative or metathetic dimension), positive emotions with lower and higher intensity are better categorized with the left and right hand, respectively. Emotions are complex dimensions and it is possible that their mental representation is more complex than a left-to-right bi-dimensional magnitude line, possibly involving different features, including valence and intensity, among others, and involving both cortical and subcortical structures.
Funding: This research received no external funding.