Article

Linking Categorical and Dimensional Approaches to Assess Food-Related Emotions

by Alexander Toet 1,*, Erik Van der Burg 1,2, Tim J. Van den Broek 3, Daisuke Kaneko 1,4, Anne-Marie Brouwer 1 and Jan B. F. Van Erp 1,5

1 TNO Human Factors, Netherlands Organization for Applied Scientific Research, Kampweg 55, 3769 Soesterberg, The Netherlands
2 Brain and Cognition Department, University of Amsterdam, 1012 Amsterdam, The Netherlands
3 TNO, Netherlands Organization for Applied Scientific Research, Research Group Microbiology & Systems Biology, 3700 Zeist, The Netherlands
4 Kikkoman Europe R&D Laboratory B.V., Nieuwe Kanaal 7G, 6709 Wageningen, The Netherlands
5 Research Group Human Media Interaction, University of Twente, 7522 Enschede, The Netherlands
* Author to whom correspondence should be addressed.
Foods 2022, 11(7), 972; https://doi.org/10.3390/foods11070972
Submission received: 5 March 2022 / Revised: 22 March 2022 / Accepted: 23 March 2022 / Published: 27 March 2022
(This article belongs to the Section Sensory and Consumer Sciences)

Abstract
Reflecting the two main prevailing and opposing views on the nature of emotions, emotional responses to food and beverages are typically measured using either (a) a categorical (lexicon-based) approach, where users select or rate the terms that best express their food-related feelings, or (b) a dimensional approach, where they rate perceived food items along the dimensions of valence and arousal. Relating these two approaches is problematic, since a response in terms of valence and arousal is not easily expressed in terms of emotions (like happy or disgusted). In this study, we linked the dimensional approach to a categorical approach by establishing a mapping between a set of 25 emotion terms (EsSense25) and the valence–arousal space (via the EmojiGrid graphical response tool), using a set of 20 food images. In two ‘matching’ tasks, the participants first imagined how the food shown in a given image would make them feel and then reported either the emotion terms or the combination of valence and arousal that best described their feelings. In two ‘labeling’ tasks, the participants first imagined experiencing a given emotion term and then either selected the foods (images) that appeared capable of eliciting that feeling or reported the combination of valence and arousal that best reflected that feeling. By combining (1) the mapping between the emotion terms and the food images with (2) the mapping of the food images to the valence–arousal space, we established (3) an indirect (via the images) mapping of the emotion terms to the valence–arousal space. The results show that the mapping between terms and images was reliable and that the linkages have straightforward and meaningful interpretations. The valence and arousal values that were assigned to the emotion terms through indirect mapping to the valence–arousal space were typically less extreme than those that were assigned through direct mapping.

Graphical Abstract

1. Introduction

1.1. Categorical and Dimensional Food-Elicited Emotion Assessment

One of the major challenges in food marketing and consumer studies is to understand the motivations that drive consumer choices. It appears that emotions play an essential role in the food choices of consumers [1,2]. As a result, knowledge of hedonic (liking) ratings alone is not sufficient to accurately predict food choice behavior [3,4].
Despite its long history, emotion research still has not reached a consensus on the fundamental nature of emotions [5,6,7]. The two main prevailing and opposing views are (1) the discrete or categorical approach, which assumes that there are a limited number of discrete, universal emotions (possibly subserved by different brain mechanisms) [8,9] and (2) the constructionist approach, which is based on the assumption that the brain constructs emotions along a few continuous dimensions (e.g., valence and arousal) [10]. These different conceptualizations of emotions have resulted in different methodologies that may be used to investigate emotion perception.
A widely used categorical approach to assessing emotional responses to food and beverages relies on lexicon-based (verbal) tools that enable users to select and/or rate the words that best express their food-related feelings [11]. A popular and standardized lexicon of food-related affective terms that is used for this purpose is the EsSense Profile (39 terms: [12]). Users report their food-related emotional response with the EsSense Profile by rating each of its 39 terms on a 5-point intensity scale [1,2,13]. In addition to emotions, the EsSense Profile also includes diffuse affective states such as moods, which are characterized by prolonged subjective feelings (e.g., “loving” or “affectionate”; [14]). When used in a check-all-that-apply (or CATA) paradigm, users are instructed to select those EsSense Profile terms that apply to the focal sample [15]. Alternatively, in a rate-all-that-apply (or RATA) paradigm, users are instructed to also rate the extent to which each selected term applies to the focal sample [16].
Although widely used, it has been argued that lexicon-based tools force people to express their feelings through a limited set of prescribed words, resulting in rationalized answers that do not necessarily reflect the unconscious influences that play a major role in emotional food perception [17]. Moreover, people typically find it difficult to express or verbalize their emotions (especially for mixed or complex ones) and the labels (emotion terms) that are provided to describe them are often inherently ambiguous [17,18] or even perceived as strange or irrelevant in a food-related context [19]. These considerations have stimulated the development of graphical (non-verbal) tools that allow users to report their feelings in a more intuitive manner by indicating or rating the figures that best represent their current affective state (for a review see [20]). It has, for instance, been shown that people reliably and intuitively use emoji (facial icons) to report their food-related emotional experiences [21,22,23,24,25,26].
In addition to the lexicon-based (categorical) approach, there is also a dimensional approach to the assessment of (food-related) emotions. In this approach, valence (the pleasantness; the degree of positive or negative affective response to a stimulus) and arousal (the intensity of the affective response to a stimulus) are adopted as the principal dimensions of emotions in general (the circumplex model of human core affect [8,27]) and of food-evoked emotions in particular [13]. Valence and arousal both play a distinct and critical role in eating-related behavior [28]. A parsimonious graphical (language-independent) self-report tool that was specifically developed for this dimensional approach to the affective appraisal of food is the EmojiGrid: a square grid (resembling the Affect Grid [29]) that is labeled with emoji expressing different degrees of valence and arousal ([20]; see Figure 1; see also https://en.wikipedia.org/wiki/EmojiGrid (accessed on 4 March 2022)). The EmojiGrid enables its users to intuitively report their affective state with a single response by placing a checkmark at the appropriate location on the grid [11,20]. It has been observed that verbal labeling diminishes one’s response to affective stimuli [30]. Intuitive (graphical) self-report tools that limit analytical thinking may therefore be preferred for measuring (food-related) emotions, since they may tap more directly into the irrational and non-cognitive thought processes that are involved in food choice than verbal methods can [17]. Hence, tools like the EmojiGrid may yield responses that more closely reflect the truly experienced emotions than self-reports that are obtained with verbal tools [2].
The dimensional and categorical approaches are complementary. While the parsimonious dimensional approach using the EmojiGrid affords an intuitive and efficient method to assess food-related emotions in terms of their resultant valence and arousal, it does not provide a description of the response in terms of discrete or basic emotions [31], unlike a lexicon-based categorical approach. Although most people will understand what it means that some kind of food is rated as either positive or negative (valence) and that it evokes a given amount of arousal, the corresponding point in the two-dimensional valence–arousal space has no specific meaning and cannot easily be communicated [32]. For instance, it would seem odd to tell someone that you feel 2.7 positive and 1.4 aroused. Combining the two approaches could allow one to quickly zoom in on the most salient features of a response (valence and arousal), followed by a further, more detailed inspection of the distinct underlying factors (emotions) that contribute to that response, and to express these in words [33].

1.2. Related Work

Using a CATA paradigm with cashew nuts, peanuts, chocolate, fruit and processed tomatoes as the focal product categories, participants in a study by Jaeger, Spinelli, Ares and Monteleone [34] reported their sensory product perceptions (in terms of sensory attributes like appearance, flavor, taste, texture and odor) and associations with emotion terms (from the EsSense Profile). Relationships between the resulting food-elicited emotional associations and the sensory terms were established by mapping both to the circumplex model of human core affect through correspondence analysis. While many of these relationships were easy to interpret, others were less obvious. Jaeger et al. [34] suggested further validating their mapping of emotion terms to the valence–arousal space through a direct mapping procedure.
Scherer et al. [32] linked a dimensional and a categorical approach to emotion assessment through the Geneva Emotion Wheel (GEW) graphical response tool. In the GEW, 20 emotion terms are equidistantly spaced around the circumference of a circular two-dimensional space representing the dimensions of valence and control/power. Different emotion terms (that only appear when the user moves a cursor over their position) are placed inside the circle, such that their intensity increases with their distance from the center. Thus, the GEW combines three response dimensions (i.e., valence, intensity and control/power) in a two-dimensional representation. However, the control/power dimension is rather abstract and appears difficult for users to rate. Although the GEW was developed as a general instrument for the measurement of emotional responses to affective stimuli, its emotion terms do not apply to food-elicited emotions. Also, arousal is not explicitly measured; although arousal and intensity are related, they are distinct concepts that are not linearly related [35].
Lorette [6] linked the discrete categorical approach to the continuous dimensional approach through a two-step instrument called the Two-Dimensional Affect and Feeling Space (2DAFS). The 2DAFS is a clickable and labeled affect grid that is followed by the presentation of a valence–arousal space that is labeled with 36 basic emotion terms. The emotion terms are centered at valence and arousal coordinates that had been determined in previous (unrelated) studies in which these terms had been rated for their valence and arousal [36,37,38]. After reporting their appraisal (in terms of valence and arousal) of the emotional stimuli by clicking on the affect grid, the users can further categorize their response by selecting one or more words from the spatially ordered set of emotion terms. Since the emotion terms are positioned according to their valence and arousal ratings, the terms that probably apply most (and are therefore most likely to be selected by the user) are arranged closest to the location where the user clicked on the grid, enabling an efficient and fast response. Although the 2DAFS was developed as a general instrument to measure emotional responses to affective stimuli, its emotion terms are not suitable for characterizing food-elicited emotions. A further limitation of the instrument is that participants can only choose one emotion term per response, thus preventing the reporting of mixed emotions.

1.3. Linking the Different Approaches

The goal of the present study was to combine the parsimony of a single-response dimensional (EmojiGrid) approach with the specificity of lexicon-based tools by linking food-related emotion terms to the two-dimensional valence–arousal space. Through such relations, EmojiGrid responses can be expressed in terms of (weighted combinations of) different discrete emotions. The availability of a language-independent dimensional single-response tool like the EmojiGrid that also affords a categorical verbal output will be beneficial for cross-cultural studies and for studies involving children or low-literate people.

2. Methods and Procedures

2.1. Overview of the Approach

The participants in this study performed four different tasks (see Figure 2). In order to investigate the mapping between food images and the valence–arousal space, the participants rated their emotional responses to different food images using the EmojiGrid (Image2Grid task: matching each food image to the most appropriate location on the EmojiGrid [11,20]). To establish the mapping between the emotion terms and the food images, the participants (a) matched each food image to the most appropriate labels from a set of simultaneously presented emotion terms (Image2Label task: matching food images to emotion terms) and (b) assigned each of these emotion terms to a selection from a set of simultaneously presented food images (Label2Image task: labeling food images with emotion terms). To establish a direct mapping between the emotion terms and the valence–arousal space, the participants attributed each emotion term to the most appropriate location on the EmojiGrid (Label2Grid task: labeling the EmojiGrid with emotions terms).
Note that the mappings between the emotion terms and the food images that result from the matching and labeling tasks need not be identical. Also, the EmojiGrid positions (the valence and arousal values) that are assigned to the terms and images through the matching and labeling tasks need not be the same. In the ‘matching’ conditions, the participants first imagined how the food that was shown in the image would make them feel and then selected either the emotion terms or the EmojiGrid position that best described the feelings they associated with that food. In the ‘labeling’ conditions, the participants first imagined how a given emotion term felt and then they either selected the foods (images) that appeared capable of eliciting that feeling or reported the combination of valence and arousal (using the EmojiGrid) that best reflected that feeling. As a result, the mapping between the emotion terms and images need not be bidirectionally identical. The process of matching yielded the assignment of each food image to those emotion terms that are most characteristic for that image, while the process of labeling yielded the assignment of each label to all of the food images to which it may apply to some degree. Hence, the labels that are assigned to an image need not correspond to the emotions that are intuitively and most intensely experienced (that first come to mind) when seeing that image. Testing both the matching (Image2Label) and labeling (Label2Image) conditions in this study served to assess the reliability (association strength) of the relation between the emotion terms and the food images.
In addition to the direct mapping between the emotion terms and the valence–arousal space (EmojiGrid), as established through the Label2Grid task, we also established an indirect mapping by combining the results of the Image2Grid task with those of the Image2Label task. This was done by assigning the mean valence and arousal ratings (mean EmojiGrid coordinates) that were reported for a given image in the Image2Grid task to the emotion terms that were assigned to that image in the Image2Label task. In the rest of this paper, we will refer to this indirect mapping of the emotion terms to the EmojiGrid as the Label2Image2Grid mapping.
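As an illustration, the following Python sketch shows how a Label2Image2Grid mapping of this kind can be computed. It is a minimal example with made-up image coordinates and term-to-image assignments (not the study’s data or analysis code): each term receives the mean EmojiGrid coordinates of the images to which it was assigned.

```python
import numpy as np

# Hypothetical inputs (illustrative values, not the study's data):
# mean EmojiGrid coordinates per food image, from the Image2Grid task
image_coords = {
    "strawberries": (8.1, 6.0),   # (valence, arousal)
    "molded_pear":  (1.4, 6.8),
    "boiled_eggs":  (5.2, 3.1),
}

# images that participants selected for each term in the Label2Image task
term_to_images = {
    "happy":     ["strawberries"],
    "disgusted": ["molded_pear"],
    "mild":      ["boiled_eggs", "strawberries"],
}

def label2image2grid(term_to_images, image_coords):
    """Assign each emotion term the mean valence/arousal of its images."""
    mapping = {}
    for term, images in term_to_images.items():
        coords = np.array([image_coords[img] for img in images])
        mapping[term] = coords.mean(axis=0)  # (mean valence, mean arousal)
    return mapping

print(label2image2grid(term_to_images, image_coords))
```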

2.2. Participants

A total of 480 English-speaking participants from the UK (240 female, mean age = 26.1 years, SD = 5.1, range = 18–40 years) were recruited via the Prolific database (https://prolific.ac, accessed on 4 March 2022). The exclusion criterion was (color) vision deficiency.
The experimental protocol was reviewed and approved by the TNO Internal Review Board (approval code: 2019-033, approval date: 10 May 2019). The study was conducted in accordance with the Helsinki Declaration of 1975, as revised in 2013 [39]. Participation in this study was voluntary. The participants received financial compensation for their participation.

2.3. Stimuli

2.3.1. Food Images

Twenty food images (Figure 3) were selected from the Cross-Cultural Food Image Database (CROCUFID [40]). CROCUFID includes high-resolution images of sweet, savory, natural, and processed food from Western and Asian cuisines, photographed according to a standard protocol, so that all of the food items were observed against the same background (a white plate) and from a fixed viewing angle (45°). The 20 images that were used in this study were selected such that their associated mean valence ratings covered the entire valence scale [11,20]. They represent natural food (e.g., fruit and salad), processed food (e.g., cakes and a burger) and rotten or molded food (e.g., rotten pears and molded salad). In the check-all-that-apply (CATA) labeling procedure that was used in this study (see Section 2.6.3), all 20 food images were simultaneously presented in a 5 × 4 rectangular grid layout. For half of the participants the grid was ordered left–right, and for the remaining participants it was ordered up–down (i.e., reversed). This was done in order to neutralize any selection biases that could arise when the participants scanned the image matrix in reading order (left to right and top to bottom), paying more attention to the images at the top of the grid than to those at the bottom. The 20 food images, together with detailed information, are provided in Supplementary File S1.

2.3.2. Emotion Terms

Twenty-five emotion terms were used in this study, all from the EsSense25 lexicon [41] (a reduced version of the EsSense Profile [12]). The EsSense25 is a validated list of food-specific emotion terms that is used to measure self-reported food-evoked emotional associations [15]. In the check-all-that-apply (CATA) labeling procedure that was used in this study (see Section 2.6.3 and Section 2.6.4), the 25 emotion terms of the EsSense25 were presented in alphabetical order to the first half of the participants and in reversed alphabetical order to the second half (see Figure 4). This was done to neutralize any selection bias that could arise when participants scanned the list in reading order, paying more attention to the terms that were at the top of the list than those that were at the bottom [42].

2.4. Measures

2.4.1. Demographics

The participants in this study reported their age and gender.

2.4.2. Valence and Arousal

In accordance with the circumplex model of affect [8], the affective responses that are elicited by food-related stimuli vary mainly over the two principal affective dimensions of valence (i.e., pleasure or displeasure) and arousal (i.e., activation or deactivation). In this study, valence and arousal were measured with the EmojiGrid graphical self-report tool [20]. The EmojiGrid is a square grid that is labeled with emoji that express various degrees of valence and arousal (Figure 1). Users rate their affective appraisal of a given stimulus by pointing and clicking at the location on the grid that best represents their impression in terms of valence and arousal. The EmojiGrid was inspired by Russell’s Affect Grid [29] and was originally developed and validated for the affective appraisal of food stimuli [11,20], since conventional affective self-report tools (e.g., Self-Assessment Manikin [43]) are frequently misunderstood in that context [11,20]. It has since also successfully been used and validated for the affective appraisal of a wide range of different emotional stimuli, such as images [44], sound and video clips [45], touch events [46], odors [47,48,49] and VR experiences [50]. Since it is intuitive and language-independent, the EmojiGrid is also suitable for cross-cultural research [11,51] and research involving children or low-literate participants.

2.5. Data Analysis

The statistical data analyses were conducted using IBM SPSS Statistics 26 for Windows (IBM Corp., Armonk, NY, USA), R software version 4.1.1 (The R Foundation for Statistical Computing) and the Python programming language version 3.9 (The Python Software Foundation). Descriptive statistics were used to calculate (1) the percentage of the emotion terms that were selected for each food image and (2) the percentage of the food images that were selected for each emotion term. The intraclass correlation coefficient (ICC) estimates and their 95% confidence intervals were based on a mean-rating (k = 3), absolute-agreement, 2-way mixed-effects model [52,53]. ICC values less than 0.5 are indicative of poor reliability, values between 0.5 and 0.75 indicate moderate reliability, values between 0.75 and 0.9 indicate good reliability and values greater than 0.9 indicate excellent reliability [52]. For all of the other analyses, a probability level of p < 0.05 was considered to indicate statistical significance.
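For readers who want to reproduce this type of reliability analysis, the sketch below computes a mean-rating, absolute-agreement ICC with the Python package pingouin. This is one possible implementation under assumed column names and illustrative data; the exact tooling the authors used for the ICC is not reported beyond SPSS, R and Python.

```python
import pandas as pd
import pingouin as pg  # one possible tool; not necessarily the authors' choice

# Illustrative long-format data: the same stimuli rated by k = 3 hypothetical raters
df = pd.DataFrame({
    "stimulus": ["apple", "pear", "burger", "cake"] * 3,
    "rater":    ["r1"] * 4 + ["r2"] * 4 + ["r3"] * 4,
    "valence":  [7.9, 2.1, 6.5, 7.2,
                 8.0, 2.3, 6.4, 7.5,
                 7.7, 2.0, 6.8, 7.1],
})

icc = pg.intraclass_corr(data=df, targets="stimulus",
                         raters="rater", ratings="valence")
# ICC2k: two-way model, absolute agreement, mean of k raters (cf. Koo & Li [52])
print(icc.loc[icc["Type"] == "ICC2k", ["ICC", "CI95%"]])
```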

2.6. Procedure

Participants took part in an anonymous online survey that was created with the Gorilla Experiment Builder [54]. The survey commenced by presenting general information about the experiment and thanking the participants for their contribution. The participants were informed that during the experiment they would be asked to report their first impressions of 20 food images (e.g., by imagining how consuming the food that was shown in the images would make them feel) and 25 food-related words. It was emphasized that there were no correct or incorrect answers and that it was important to respond seriously. Subsequently, the participants signed a digital informed consent form, affirming that they were at least 18 years old and voluntarily participating in the study. The survey then continued with an assessment of the demographics (age and gender) of the participants. The main body of the survey consisted of four tasks that were performed in a fixed order (see Figure 5). Two of these tasks were labeling tasks (the blue arrows in Figure 2) in which the participants assigned the EsSense25 terms to the food images and to the EmojiGrid. The other two tasks were matching tasks (the red arrows in Figure 2) in which the participants matched the food images to the EsSense25 terms and to the EmojiGrid. These four experimental tasks are described in further detail in the next four subsections. The participants received visual feedback about their progress through the experiment via a blue progress bar in the lower part of the screen. They could take a short break between the tasks. To assess the seriousness of the participation, we included a validated seriousness check at the end of the experiment (asking the participants if they had answered seriously, per [55]). The average duration of the experiment was about 15 min.

2.6.1. Task I: Image2Grid

In the first task, Image2Grid, each trial showed a randomly selected food image (from the total set of 20 stimuli) next to the EmojiGrid (Figure 6). The participants were asked to report how each image made them feel by using the EmojiGrid. Clicking on the EmojiGrid initiated the next trial. The participants first read a brief explanation about the use of the EmojiGrid response tool. Then they performed two practice trials (using two food images that were not included in the stimulus set) in order to familiarize themselves with the use of this tool. Immediately after these practice trials, the actual rating experiment started. For each of the 20 trials, a different food image was presented and the participants reported their affective appraisal of the food that was shown by clicking on the EmojiGrid. The task was self-paced.

2.6.2. Task II: Image2Label

In the second task, Image2Label, each trial showed a randomly selected food image next to all of the EsSense25 terms (Figure 7). For each food image, the participants were asked to click on all of the terms that best described how the image made them feel (a CATA procedure). When selected, the EsSense25 terms became highlighted. After selecting all of the terms that they considered to apply to the image that was shown, the participants could start the next trial by clicking on a “next” button. The participants performed two practice trials (using two food images that were not included in the stimulus set) in order to familiarize themselves with the EsSense25 terms and the procedure. Immediately after these practice trials, they performed 20 experimental trials. On each of these trials, a different food image was presented and the participants clicked on the emotion terms that in their opinion best described how that image made them feel. The task was self-paced.

2.6.3. Task III: Label2Image

In the third task, Label2Image, each trial showed a randomly selected EsSense25 term next to all 20 of the food images (Figure 8). On each of the 25 trials, the participants were asked to click on all of the food images to which the current emotion term applied (i.e., a CATA procedure). The food images that were selected became highlighted. After selecting all of the relevant images, the participants could start the next trial by clicking on a “next” button. The task was self-paced. Note that the Label2Image task yields a mapping between images and emotion terms that is the inverse of the mapping that results from the Image2Label task. As mentioned in the Introduction, the rationale for including this task is to investigate the reliability (the association strength) of the mapping between the images and emotion terms.

2.6.4. Task IV: Label2Grid

In the fourth task, Label2Grid, each trial showed a randomly selected EsSense25 term next to the EmojiGrid (Figure 9). On each of the 25 trials, the participants were asked where they would click on the EmojiGrid in order to respond that a given food would make them feel like the term shown. Clicking on the EmojiGrid started the next trial. The task was self-paced.

3. Results

In response to the seriousness check, all of the participants reported that they had answered all of the questions seriously. No participants were excluded from the analysis.

3.1. Task I: Image2Grid

In order to evaluate the face validity of the valence and arousal ratings that were collected for the food images, we ordered the food images based on their mean valence ratings (from low to high valence). As expected, Table 1 shows that the highest mean valence ratings were obtained for the images of fresh fruit (the apple, orange and strawberries) and pastries, while neutral ratings were obtained for images of boiled eggs and salads and the lowest mean valence ratings correspond to images of molded food (the molded salads, banana and pear).
To quantify the agreement between the mean valence and arousal ratings that were obtained in the present study and those that were reported previously by Kaneko et al. [11], we computed the intraclass correlation coefficients (ICC) with their 95% confidence intervals for the mean valence and arousal ratings that were obtained in both studies. The ICC value for valence is 0.995 [0.988–0.998] while the ICC for arousal is 0.980 [0.949–0.992], indicating that the mean valence and arousal values that were measured in both studies are in excellent agreement.

3.2. Task II: Image2Label

Figure 10 (filled diamonds) shows the percentage of the participants that linked each image to each of the different EsSense25 terms in the Image2Label task. The emotion terms that are predominantly related to pleasure (good, pleasant and happy) and displeasure (worried and disgusted) were the most frequently used. The emotion terms expressing different degrees of (de-)activation (arousal) were also used throughout this study (e.g., calm, bored, interested and enthusiastic), indicating their relevance for characterizing food-related experiences. As expected, the food images that were overall rated low on valence were most frequently labeled with negative terms (e.g., aggressive, worried and disgusted), while the images that were overall rated high on valence were most frequently labeled with positive terms (e.g., happy, pleasant and good). The items that were rated near-neutral on valence (the boiled eggs, salads and cucumber) were frequently labeled with neutral terms (e.g., bored or mild).

3.3. Task III: Label2Image

Figure 10 (open diamonds) shows the percentage of the participants that labeled each food image with each of the EsSense25 terms in the Label2Image task. The results are highly similar to those of the Image2Label task: the images that were overall rated low on valence were most frequently labeled with negative terms (e.g., aggressive, worried and disgusted), while the images that were overall rated high on valence were most frequently labeled with positive terms (e.g., happy, pleasant and good). Overall, the emotion terms were used more frequently to label the images that were presented in the Label2Image task than those which were presented in the Image2Label task (i.e., the open symbols typically have a larger area than the filled symbols).
In order to investigate the reliability (association strength) of the associations between the emotion terms and food images, we computed the Pearson correlation between the image-to-term assignment frequencies that were obtained from the Image2Label task and the term-to-image assignment frequencies that were obtained from the Label2Image task. The Pearson correlation coefficient was 0.84 (with a 95% CI of [0.81, 0.86]), indicating that the mapping was highly reliable.
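A minimal sketch of this reliability check, with hypothetical frequencies standing in for the 20 × 25 image-term assignment percentages from the two tasks:

```python
import numpy as np
from scipy import stats

# Hypothetical assignment frequencies (percent of participants) for the same
# (image, term) pairs in both tasks; the real study has 20 x 25 = 500 pairs
image2label = np.array([72.0, 5.0, 14.0, 61.0, 3.0])
label2image = np.array([80.0, 9.0, 22.0, 70.0, 6.0])

r, p = stats.pearsonr(image2label, label2image)
print(f"r = {r:.2f}, p = {p:.3f}")
```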

3.4. Task IV: Label2Grid

Figure 11A illustrates the distribution of the responses that were made by the participants when they were matching the emotion terms directly to the EmojiGrid in the Label2Grid task. Here, the data are shown only for the terms disgusted, happy, guilty and understanding. The distributions for the remaining emotion terms are provided in the Supplementary File S2.
By combining the results from the Label2Image and Image2Grid tasks, it was also possible to establish an indirect mapping of the emotion terms to the valence–arousal space. This was done by computing, for each emotion term, the average valence and arousal values over all of the images (as determined in the Image2Grid task) to which this term had been assigned (in the Label2Image task). Figure 11B shows that, for most of the emotion terms, indirect mapping to the valence–arousal space via the food images yielded a spatial distribution of the responses that is similar to the distribution that resulted from the direct mapping. The white crosses in Figure 11 represent the group mean arousal and valence ratings. Each row in Figure 11 represents the results for one of the four different words (disgusted, happy, guilty and understanding).
Two-tailed Wilcoxon signed-rank tests were conducted in order to examine whether the mean valence and mean arousal ratings for each emotion term were significantly different between the indirect (Label2Image2Grid) and direct (Label2Grid) mappings. Table 2 presents an overview of these analyses, with the significant effects shown in bold font.
The Wilcoxon signed-rank tests yielded a significant difference between the indirect (Label2Image2Grid) and the direct (Label2Grid) mappings for most of the terms (Table 2). In fact, for all of the terms we found a significant difference in either the mean valence or the mean arousal ratings. Hence, the valence and arousal ratings that the participants assigned to the emotion terms were not consistent with their valence and arousal ratings for the food images to which they assigned these terms. In general, the emotion terms were rated more extremely on valence and arousal than the food images. This is also clear from Figure 11A (the direct mapping), where the response distributions are closer to the edges of the valence–arousal space than the ratings that were indirectly obtained via the food images (see the Supplementary File S2).
Figure 12 shows the average locations of the emotion terms in the valence–arousal space, determined both through direct (the red squares) and indirect (the blue circles) mapping. This figure shows that the directly mapped terms are overall located further towards the periphery of the valence–arousal space, while the indirectly mapped terms are located more centrally. Thus, it appears that the affective strength (experienced intensity) of all of the terms is more extreme when it is measured by direct mapping than when it is measured by indirect mapping. Interestingly, the average location of the term guilty resulting from the indirect (Label2Image2Grid) mapping is the opposite of its position resulting from the direct (Label2Grid) mapping.
Figure 12 also shows the superposition of the circumplex model of human core affect by Yik et al. [27] over the two-dimensional valence–arousal space. This model defines 12 domains (numbered from 1 to 12, following [34]) that represent (a) the poles of the two core dimensions (“pleasure–displeasure” and “activation–deactivation”) and (b) eight emotional domains that are defined as a combination of both of the core dimensions. For convenience, these domains are taken to be of equal angular extent (30°), although this is not required for a circumplex model [27]. Using a questionnaire-based approach, Jaeger et al. [34] established the linkages between food-elicited emotional associations (the EsSense Profile) and sensory characteristics by mapping both to the 12 domains of the circumplex model through correspondence analysis. To compare our present results with those of Jaeger et al. [34], we computed, for each emotion term, its domain-membership function as one plus the radial angle of the term’s location on the EmojiGrid divided by the width of a domain (30°), where the radial angle increases clockwise, starting at 0 for the activation dimension. This definition results in fractional membership values; fractions smaller (larger) than 0.5 indicate that the emotion term has aspects in common with the previous (next) domain. Note that the membership function that was used by Jaeger et al. [34] used only multiples of 0.5. Table 3 lists the mapping between the EsSense25 emotion terms and the 12 domains of the core affect model that was presented by Yik et al. [27], as obtained in the study by Jaeger et al. [34] and in this study through both direct mapping (Label2Grid) and indirect mapping (Label2Image2Grid). The Pearson correlation between the direct and indirect mapping of the emotion terms to the domains of the circumplex model is 0.96, indicating that the nature of the emotional appraisals remains constant between the indirect and direct mappings while only their intensity varies (with ratings for the direct mapping being more extreme). The Pearson correlation between the indirect mapping that was obtained in this study and the mapping that was reported by Jaeger et al. [34] is 0.81, indicating a strong agreement between both results. The largest differences between both studies are found for the terms active and guilty. While active was mapped to the ‘pleasant activation’ domain by Jaeger et al. [34], it was mapped to the ‘pleasure’ domain in the current study. Guilty was mapped to the ‘activated displeasure’ domain in the study by Jaeger et al. [34] and to the ‘activated pleasure’ domain in this study, i.e., even at the opposite pole of the valence dimension.
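The domain-membership computation described above can be written compactly as follows. This is an illustrative sketch, assuming EmojiGrid coordinates that have been centered on the neutral midpoint of the grid; it is not the authors’ analysis code.

```python
import numpy as np

def domain_membership(valence, arousal):
    """Fractional membership in the 12 circumplex domains.

    Coordinates are assumed centered on the grid midpoint. The radial
    angle increases clockwise from the positive arousal (activation)
    axis, and each domain spans 30 degrees, per the definition above.
    """
    # atan2(x, y) measures the angle from the +arousal axis toward +valence,
    # i.e., clockwise in the usual valence-arousal plot
    angle = np.degrees(np.arctan2(valence, arousal)) % 360.0
    return 1.0 + angle / 30.0

# e.g., a term on the positive valence axis (90° clockwise from activation):
print(domain_membership(valence=3.0, arousal=0.0))  # -> 4.0
```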
As observed by Jaeger et al. [34] and acknowledged by Meiselman [56], Figure 12 also shows that the EsSense25 is quite unbalanced and lacks emotion terms with a negative valence (e.g., a domain like ‘unpleasant activation’ is not represented).

4. Discussion

In this study we established a link between a categorical (lexicon-based) tool (the EsSense25) and a dimensional (valence and arousal-based) tool (the EmojiGrid) in order to assess food-related emotions. To establish a mapping between the 25 emotion terms of the EsSense25 and the set of 20 food images, the participants labeled each food image with a subset of the emotion terms (Label2Image task) and mapped both the food images and the emotion terms to the valence–arousal space (the Image2Grid and Label2Grid tasks, both using the EmojiGrid). By combining (1) the mapping between the emotion terms and the food images with (2) the mapping of the food images to the valence–arousal space, we also established (3) an indirect (via the images, Label2Image2Grid) mapping of the emotion terms to the valence–arousal space.
The valence and arousal ratings for the food images show good face validity: the highest mean valence ratings were obtained for the images of fresh fruit and pastries, while neutral ratings were obtained for the images of neutral foods and the lowest mean valence ratings correspond to the images of molded food.
The linkages between the terms and images have straightforward and meaningful interpretations: the food images that were overall rated low, near-neutral or high on valence were most frequently labeled with negative, neutral or positive emotion terms, respectively.
Although the relationship between the images and emotion terms was quite reliable (in the sense that it was a two-way mapping), the emotion terms were used more frequently to label the images in the Label2Image task than in the Image2Label task. This may be because the participants were more inclined to apply a given term to images in the Label2Image task, whereas they were less inclined to select the same term in the Image2Label task when they felt that it was less appropriate to characterize the image under consideration.
Note that the differences between the outcomes of the labeling and matching tasks may partly result from differences in terms of cognitive flow. Matching tasks that limit analytical thinking may tap more directly into the unconscious processes that are involved in food choice than the more cognitively demanding labeling tasks. The labeling tasks (Label2Image and Label2Grid) that were used in this study may be cognitively demanding since they required the participants to first imagine how a given (abstract) emotion term feels and then to either (a) select the foods (images) that seem capable of eliciting that feeling (Label2Image) or (b) report the combination of valence and arousal that they associate with that term (Label2Grid). The matching tasks (Image2Label and Image2Grid) may be cognitively less demanding than the labeling tasks since they required the participants to first imagine how consuming the food that is shown makes them feel (an intuitive response) and then to either (a) select the most appropriate labels to describe that feeling or (b) indicate their affective response on the EmojiGrid (an intuitive graphical self-report tool). However, since cognitive flow is not a central topic of this study, we did not further investigate this issue here.
The valence and arousal values that were assigned to the emotion terms through indirect mapping to the valence–arousal space were typically less extreme than those that were assigned through direct mapping. Thus, the participants who imagined how a given emotion term felt in response to a given food (Label2Grid task) rated their feelings as more extreme (intense) in terms of valence and arousal than the participants who imagined how the food that was shown in an image would make them feel (Image2Grid task). This may reflect the subjective nature of food perception: while all of the participants had a uniform notion of the emotion terms (e.g., ‘happy’), they differed in their appreciation of the food items that were represented in the images (e.g., strawberries can make someone ‘happy’, but most likely not everybody), resulting in a regression to the mean.
The indirect mapping that was obtained in this study shows a good overall agreement with the mapping that was reported by Jaeger et al. [34]. An interesting difference between both of the studies is the term guilty, which was mapped to the ‘activated displeasure’ domain in the study by Jaeger et al. [34] and to the ‘activated pleasure’ domain in this study. Figure 10 shows that the term guilty was most frequently associated with pastries, cookies, and the burger, items that are also most frequently associated with high-positive valence terms like happy, pleasant, and good. This result agrees with the observation that there is typically a cognitive association between guilt and hedonic pleasure [57]. These contrasting feelings, sometimes characterized as “guilty pleasures”, often coexist when we give in to a certain behavior (e.g., eating an appealing yet unhealthy food) that is known to have positive short-term but negative long-term consequences [58].
Single response tools that are based on a dimensional model of human core affect may be less appropriate when seeking detailed profiles of product-elicited emotional associations [34]. It has therefore been suggested to combine such tools with a CATA [59] or RATA [60] procedure. The results of this study suggest an implementation of the EmojiGrid that is similar to the Two-Dimensional Affect and Feeling Space (2DAFS [6]), where a core affect rating phase, in which items are rated on valence and arousal by clicking on a grid, is followed by an emotion-categorization phase in which only those emotion terms that are linked most strongly to the indicated position in the valence–arousal space are explicitly presented and selected (in a CATA paradigm) or rated (in a RATA paradigm) in order to increase sample discrimination. Note that an implementation of this kind could also be used to relate (translate) the responses from users from different cultures or language groups.
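A two-phase implementation of this kind could, for instance, present the k emotion terms whose (direct-mapping) coordinates lie closest to the clicked grid position. The sketch below illustrates that lookup with hypothetical term coordinates; the actual coordinates would come from a mapping such as the one established in this study.

```python
import numpy as np

# Hypothetical term coordinates (valence, arousal) on a shared grid scale,
# e.g., taken from the direct (Label2Grid) mapping
term_coords = {
    "happy":     (7.8, 6.5),
    "calm":      (6.6, 2.4),
    "bored":     (3.9, 2.0),
    "disgusted": (1.6, 6.9),
}

def nearest_terms(click, term_coords, k=3):
    """Return the k emotion terms closest to the clicked grid position."""
    dists = {t: np.hypot(click[0] - v, click[1] - a)
             for t, (v, a) in term_coords.items()}
    return sorted(dists, key=dists.get)[:k]

# After a user clicks in the pleasant, moderately calm region of the grid:
print(nearest_terms((7.0, 4.0), term_coords, k=2))  # -> ['calm', 'happy']
```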

4.1. Limitations

The current study also has some limitations.
The CATA procedures that were used in the Label2Image and Image2Label tasks did not yield any information about the strength of the relations that were measured. Replacing the CATA with RATA (rate-all-that-apply) procedures could provide more insight into the degree to which the terms and images are related.
The EsSense Profile is purposefully dominated by emotion words with positive meanings in order to reflect the generally positive responses to commercial foods and beverages [12]. As a result, it is only sparsely populated with terms with more negative meanings. To achieve a denser coverage of the valence–arousal space with emotion words (especially on the low valence and low arousal sides), future studies could use lexicons that provide a more balanced list of positive and negative emotions [16,19].
Since the indirect mapping between the emotion lexicon (EsSense25) terms and the valence–arousal space was derived via the food images, it is not possible to distinguish between affective states with similar positions in that space (i.e., similar valence and arousal).
In this study, the mapping between the emotion terms and the valence–arousal space was only derived for UK participants and for a limited set of food images and emotion terms. Future studies should investigate a larger diversity of food images and more appropriate emotion terms. Other cultures may yield different mappings between images and words. Hence, the results may not extrapolate to other groups, different images or different terms.

4.2. Future Research

Future studies may also investigate the mapping between emotion terms and the valence–arousal space through experiments in which food or beverages are actually tasted or consumed instead of merely visually perceived. Affective appraisal of food images is a ‘cold’ cognitive evaluation process that is based on criteria reflecting personal experiences and relevance [61]. Previous research has shown that viewing food pictures activates brain areas that code how the food that is perceived tastes (the insula/operculum) and how rewarding it would be to eat it (the orbitofrontal cortex; [62,63]). Hence, people can reliably produce an affect rating without actually tasting the food that is shown. However, while the results from a tasting experiment will most likely agree with our present results for the valence dimension (which are typically quite stable and consensual across experimental paradigms [61]), they may differ for the arousal dimension as a result of the sensory characteristics of the sample and a higher degree of personal relevance when it is tasted or consumed [64].

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/foods11070972/s1: Supplementary File S1: 20 food images and stimuli; Supplementary File S2: Word Mapping Distributions.

Author Contributions

Conceptualization, A.T., A.-M.B. and J.B.F.V.E.; Data curation, A.T., E.V.d.B. and T.J.V.d.B.; Formal analysis, A.T., E.V.d.B., T.J.V.d.B., A.-M.B. and J.B.F.V.E.; Funding acquisition, D.K.; Investigation, A.T., D.K. and J.B.F.V.E.; Methodology, A.T., E.V.d.B., T.J.V.d.B., D.K. and A.-M.B.; Resources, D.K.; Software, A.T.; Supervision, A.T.; Visualization, A.T., E.V.d.B. and T.J.V.d.B.; Writing—original draft, A.T. and E.V.d.B.; Writing—review & editing, A.T., E.V.d.B., T.J.V.d.B., D.K., A.-M.B. and J.B.F.V.E. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by Kikkoman Europe R&D Laboratory B.V.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Netherlands Organization for Applied Scientific Research (TNO) Institutional Review Board (approval code: 2019-033, approval date: 10-05-2019).

Informed Consent Statement

Informed consent was obtained from all of the subjects who were involved in this study.

Data Availability Statement

Data is contained within the article and supplementary materials.

Conflicts of Interest

This study was funded by Kikkoman Europe R&D Laboratory B.V. Daisuke Kaneko is employed by Kikkoman Europe R&D Laboratory B.V. and by research organization TNO. By funding this study, Kikkoman company supports research into novel methods of measuring food experience. Daisuke Kaneko reports no potential conflicts with the study. All other authors declare no conflict of interest.

References

1. Gutjar, S.; de Graaf, C.; Kooijman, V.; de Wijk, R.A.; Nys, A.; ter Horst, G.J.; Jager, G. The role of emotions in food choice and liking. Food Res. Int. 2015, 76, 216–223.
2. Dalenberg, J.R.; Gutjar, S.; ter Horst, G.J.; de Graaf, K.; Renken, R.J.; Jager, G. Evoked emotions predict food choice. PLoS ONE 2014, 9, e115388.
3. Wichchukit, S.; O’Mahony, M. ‘Liking’, ‘Buying’, ‘Choosing’ and ‘Take Away’ preference tests for varying degrees of hedonic disparity. Food Qual. Prefer. 2011, 22, 60–65.
4. Wichchukit, S.; O’Mahony, M. Paired preference tests: ‘Liking’, ‘Buying’ and ‘Take Away’ preferences. Food Qual. Prefer. 2010, 21, 925–929.
5. Coppin, G.; Sander, D. Theoretical approaches to emotion and its measurement. In Emotion Measurement; Meiselman, H.L., Ed.; Woodhead Publishing: Cambridge, UK, 2016; pp. 3–30.
6. Lorette, P. Investigating emotion perception via the Two-Dimensional Affect and Feeling Space: An example of a cross-cultural study among Chinese and non-Chinese participants. Front. Psychol. 2021, 12, 1–14.
7. Fox, E. Perspectives from affective science on understanding the nature of emotion. Brain Neurosci. Adv. 2018, 2, 1–8.
8. Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161–1178.
9. Ekman, P. An argument for basic emotions. Cogn. Emot. 1992, 6, 169–200.
10. Barrett, L.F. The theory of constructed emotion: An active inference account of interoception and categorization. Soc. Cogn. Affect. Neurosci. 2017, 12, 1–23.
11. Kaneko, D.; Toet, A.; Ushiama, S.; Brouwer, A.M.; Kallen, V.; van Erp, J.B.F. EmojiGrid: A 2D pictorial scale for cross-cultural emotion assessment of negatively and positively valenced food. Food Res. Int. 2018, 115, 541–551.
12. King, S.C.; Meiselman, H.L. Development of a method to measure consumer emotions associated with foods. Food Qual. Prefer. 2010, 21, 168–177.
13. Gutjar, S.; Dalenberg, J.R.; de Graaf, C.; de Wijk, R.; Palascha, A.; Renken, R.J.; Jager, G. What reported food-evoked emotions may add: A model to predict consumer food choice. Food Qual. Prefer. 2015, 45, 140–148.
14. King, S.C.; Meiselman, H.L.; Carr, B.T. Measuring emotions associated with foods in consumer testing. Food Qual. Prefer. 2010, 21, 1114–1116.
15. Jaeger, S.R.; Swaney-Stueve, M.; Chheang, S.L.; Hunter, D.C.; Pineau, B.; Ares, G. An assessment of the CATA-variant of the EsSense Profile®. Food Qual. Prefer. 2018, 68, 360–370.
16. Ng, M.; Chaya, C.; Hort, J. Beyond liking: Comparing the measurement of emotional response using EsSense Profile and consumer defined check-all-that-apply methodologies. Food Qual. Prefer. 2013, 28, 193–205.
17. Köster, E.P.; Mojet, J. From mood to food and from food to mood: A psychological perspective on the measurement of food-related emotions in consumer research. Food Res. Int. 2015, 76, 180–191.
18. Scherer, K.R. What are emotions? And how can they be measured? Soc. Sci. Inf. 2005, 44, 695–729.
19. Jaeger, S.R.; Cardello, A.V.; Schutz, H.G. Emotion questionnaires: A consumer-centric perspective. Food Qual. Prefer. 2013, 30, 229–241.
20. Toet, A.; Kaneko, D.; Ushiama, S.; Hoving, S.; de Kruijf, I.; Brouwer, A.-M.; Kallen, V.; van Erp, J.B.F. EmojiGrid: A 2D pictorial scale for the assessment of food elicited emotions. Front. Psychol. 2018, 9, 2396.
21. Vidal, L.; Ares, G.; Jaeger, S.R. Use of emoticon and emoji in tweets for food-related emotional expression. Food Qual. Prefer. 2016, 49, 119–128.
22. Ares, G.; Jaeger, S.R. A comparison of five methodological variants of emoji questionnaires for measuring product elicited emotional associations: An application with seafood among Chinese consumers. Food Res. Int. 2017, 99, 216–228.
23. Gallo, K.E.; Swaney-Stueve, M.; Chambers, D.H. A focus group approach to understanding food-related emotions with children using words and emojis. J. Sens. Stud. 2017, 32, e12264.
24. Schouteten, J.J.; Verwaeren, J.; Lagast, S.; Gellynck, X.; De Steur, H. Emoji as a tool for measuring children’s emotions when tasting food. Food Qual. Prefer. 2018, 68, 322–331.
25. Schouteten, J.J.; Verwaeren, J.; Gellynck, X.; Almli, V.L. Comparing a standardized to a product-specific emoji list for evaluating food products by children. Food Qual. Prefer. 2019, 72, 86–97.
26. Pinto, V.R.A.; Teixeira, C.G.; Lima, T.S.; De Almeida Prata, E.R.B.; Vidigal, M.C.T.R.; Martins, E.; Perrone, Í.T.; Carvalho, A.F.d. Health beliefs towards kefir correlate with emotion and attitude: A study using an emoji scale in Brazil. Food Res. Int. 2020, 129, 108833.
27. Yik, M.; Russell, J.A.; Steiger, J.H. A 12-point circumplex structure of core affect. Emotion 2011, 11, 705–731.
28. Woodward, H.E.; Treat, T.A.; Cameron, C.D.; Yegorova, V. Valence and arousal-based affective evaluations of foods. Eat. Behav. 2017, 24, 26–33.
29. Russell, J.A.; Weiss, A.; Mendelson, G.A. Affect grid: A single-item scale of pleasure and arousal. J. Personal. Soc. Psychol. 1989, 57, 493–502.
30. Lieberman, M.D.; Eisenberger, N.I.; Crockett, M.J.; Tom, S.M.; Pfeifer, J.H.; Way, B.M. Putting feelings into words. Affect labeling disrupts amygdala activity in response to affective stimuli. Psychol. Sci. 2007, 18, 421–428.
31. Russell, J.A.; Feldman Barrett, L. Core affect, prototypical emotional episodes, and other things called emotion: Dissecting the elephant. J. Personal. Soc. Psychol. 1999, 76, 805–819.
32. Scherer, K.R.; Shuman, V.; Fontaine, J.R.; Soriano, C. The GRID meets the Wheel: Assessing emotional feeling via self-report. In Components of Emotional Meaning: A Sourcebook; Fontaine, J.R.J., Scherer, K.R., Soriano, C., Eds.; Oxford University Press: Oxford, UK, 2013; pp. 281–298.
33. Ekkekakis, P.; Petruzzello, S.J. Analysis of the affect measurement conundrum in exercise psychology: IV. A conceptual case for the affect circumplex. Psychol. Sport Exerc. 2002, 3, 35–63.
34. Jaeger, S.R.; Spinelli, S.; Ares, G.; Monteleone, E. Linking product-elicited emotional associations and sensory perceptions through a circumplex model based on valence and arousal: Five consumer studies. Food Res. Int. 2018, 109, 626–640.
35. Kuppens, P.; Tuerlinckx, F.; Russell, J.A.; Barrett, L.F. The relation between valence and arousal in subjective experience. Psychol. Bull. 2013, 139, 917–940.
36. Whissell, C. Using the revised Dictionary of Affect in Language to quantify the emotional undertones of samples of natural language. Psychol. Rep. 2009, 105, 509–521.
37. Warriner, A.B.; Kuperman, V.; Brysbaert, M. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behav. Res. Methods 2013, 45, 1191–1207.
38. Whissell, C.M. Chapter 5—The Dictionary of Affect in Language. In The Measurement of Emotions; Plutchik, R., Kellerman, H., Eds.; Academic Press: New York, NY, USA, 1989; pp. 113–131.
39. World Medical Association. World Medical Association declaration of Helsinki: Ethical principles for medical research involving human subjects. J. Am. Med. Assoc. 2013, 310, 2191–2194.
40. Toet, A.; Kaneko, D.; de Kruijf, I.; Ushiama, S.; van Schaik, M.G.; Brouwer, A.-M.; Kallen, V.; van Erp, J.B.F. CROCUFID: A cross-cultural food image database for research on food elicited affective responses. Front. Psychol. 2019, 10, 58.
41. Nestrud, M.A.; Meiselman, H.L.; King, S.C.; Lesher, L.L.; Cardello, A.V. Development of EsSense25, a shorter version of the EsSense Profile. Food Qual. Prefer. 2016, 48, 107–117.
42. Ares, G.; Antúnez, L.; Giménez, A.; Jaeger, S.R. List length has little impact on consumers’ visual attention to CATA questions. Food Qual. Prefer. 2015, 42, 100–109.
43. Bradley, M.M.; Lang, P.J. Measuring emotion: The Self-Assessment Manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59.
44. Toet, A.; Van Erp, J.B.F. The EmojiGrid as a tool to assess experienced and perceived emotions. Psych 2019, 1, 469–481.
45. Toet, A.; van Erp, J.B.F. Affective rating of audio and video clips using the EmojiGrid [version 2; peer review: 2 approved]. F1000Research 2021, 9, 1–21.
46. Toet, A.; van Erp, J.B.F. The EmojiGrid as a rating tool for the affective appraisal of touch. PLoS ONE 2020, 15, e0237873.
47. Liu, Y.; Toet, A.; Krone, T.; van Stokkum, R.; Eijsman, S.; van Erp, J.B.F. A network model of affective odor perception. PLoS ONE 2020, 15, e0236468.
48. Toet, A.; Eijsman, S.; Liu, Y.; Donker, S.; Kaneko, D.; Brouwer, A.-M.; van Erp, J.B.F. The relation between valence and arousal in subjective odor experience. Chemosens. Percept. 2019, 13, 141–151.
49. Van der Burg, E.; Toet, A.; Brouwer, A.-M.; van Erp, J.B.F. Sequential effects in odor perception. Chemosens. Percept. 2021. Online first.
50. Toet, A.; Heijn, F.; Brouwer, A.-M.; Mioch, T.; van Erp, J.B.F. The EmojiGrid as an immersive self-report tool for the affective assessment of 360 VR videos. In Proceedings of the EuroVR 2019: Virtual Reality and Augmented Reality, Tallinn, Estonia, 23–25 October 2019; pp. 330–335.
51. Kaneko, D.; Stuldreher, I.; Reuten, A.J.C.; Toet, A.; van Erp, J.B.F.; Brouwer, A.-M. Comparing explicit and implicit measures for assessing cross-cultural food experience. Front. Neuroergonomics 2021, 2, 1–16.
52. Koo, T.K.; Li, M.Y. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 2016, 15, 155–163.
53. Shrout, P.E.; Fleiss, J.L. Intraclass correlations: Uses in assessing rater reliability. Psychol. Bull. 1979, 86, 420–428.
54. Anwyl-Irvine, A.; Massonnié, J.; Flitton, A.; Kirkham, N.; Evershed, J. Gorilla in our Midst: An online behavioral experiment builder. bioRxiv 2019, 438242.
55. Aust, F.; Diedenhofen, B.; Ullrich, S.; Musch, J. Seriousness checks are useful to improve data validity in online research. Behav. Res. Methods 2013, 45, 527–535.
56. Meiselman, H.L. A review of the current state of emotion research in product development. Food Res. Int. 2015, 76 Pt 2, 192–199.
57. Goldsmith, K.; Cho, E.K.; Dhar, R. When guilt begets pleasure: The positive effect of a negative emotion. J. Mark. Res. 2012, 49, 872–881.
58. Giner-Sorolla, R. Guilty pleasures and grim necessities: Affective attitudes in dilemmas of self-control. J. Personal. Soc. Psychol. 2001, 80, 206–221.
59. Jaeger, S.R.; Roigard, C.M.; Chheang, S.L. The valence × arousal circumplex-inspired emotion questionnaire (CEQ): Effect of response format and question layout. Food Qual. Prefer. 2021, 90, 104172.
60. Jaeger, S.R.; Roigard, C.M.; Jin, D.; Xia, Y.; Zhong, F.; Hedderley, D.I. A single-response emotion word questionnaire for measuring product-related emotional associations inspired by a circumplex model of core affect: Method characterisation with an applied focus. Food Qual. Prefer. 2020, 83, 103805.
61. Scherer, K.; Dan, E.; Flykt, A. What determines a feeling’s position in affective space? A case for appraisal. Cogn. Emot. 2006, 20, 92–113.
62. Simmons, W.K.; Martin, A.; Barsalou, L.W. Pictures of appetizing foods activate gustatory cortices for taste and reward. Cereb. Cortex 2005, 15, 1602–1608.
63. Avery, J.A.; Liu, A.G.; Ingeholm, J.E.; Gotts, S.J.; Martin, A. Viewing images of foods evokes taste quality-specific activity in gustatory insular cortex. Proc. Natl. Acad. Sci. USA 2021, 118, e2010932118.
  64. Verastegui-Tena, L.; Schulte-Holierhoek, A.; van Trijp, H.; Piqueras-Fiszman, B. Beyond expectations: The responses of the autonomic nervous system to visual food cues. Physiol. Behav. 2017, 179, 478–486. [Google Scholar] [CrossRef]
Figure 1. The EmojiGrid (from [20]; see also https://en.wikipedia.org/wiki/EmojiGrid, accessed on 4 March 2022). The x-axis represents the valence rating and the y-axis the arousal rating, both on a scale from 0 to 100.
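For readers implementing a comparable response tool, the caption's coordinate convention translates directly into code. The following is a minimal sketch (an assumption, not the authors' implementation) of converting a click position on a square grid into the 0–100 valence and arousal scales, assuming the usual top-left pixel origin of screen coordinates:

```python
# Hypothetical sketch: map a click on a square EmojiGrid-style widget to
# (valence, arousal) on a 0-100 scale, as described in the Figure 1 caption.
# Pixel origin is assumed top-left, so the y-axis must be flipped.

def grid_to_ratings(x_px: float, y_px: float, grid_size_px: float = 500.0):
    """Convert pixel coordinates inside the grid to (valence, arousal)."""
    valence = 100.0 * x_px / grid_size_px            # left (0) to right (100)
    arousal = 100.0 * (1.0 - y_px / grid_size_px)    # bottom (0) to top (100)
    return valence, arousal

# Example: a click right of center and near the top of a 500-pixel grid
print(grid_to_ratings(300, 100, 500))  # -> (60.0, 80.0)
```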
Figure 2. The four different tasks (represented by the arrows) that were investigated in this study. Red arrows: matching tasks. Blue arrows: labeling tasks. Image2Grid task: matching food images to the EmojiGrid. Image2Label task: matching food images to emotion terms. Label2Image task: labeling food images with emotion terms. Label2Grid task: labeling the EmojiGrid with emotion terms.
Figure 3. The set of 20 food images used as stimuli in this study (selected from the CROCUFID database [40]).
Figure 4. Screenshots of the EsSense25 word list in (a) alphabetical and (b) reversed order (from [41]).
Figure 5. Schematic representation of the experimental procedure.
Figure 6. Screen layout of the Image2Grid task: mapping food images to the EmojiGrid. A blue progress bar indicated the progression of the task.
Figure 7. Screen layout of the Image2Label task: mapping food images to EsSense25 terms. Selected terms were highlighted in yellow. In this example, the participant selected “active”, “good”, and “happy”. A blue progress bar indicated the progression of the task.
Figure 8. Screen layout of the Label2Image task: labeling food images with emotion terms (in this example the term “disgusted”). Images with a yellow background were selected by the participant. A blue progress bar indicated the progression of the task.
Figure 9. Screen layout of the Label2Grid task: mapping EsSense25 terms (in this example, the term “guilty”) to the EmojiGrid.
Figure 10. Percentage (represented by symbol size, ranging between 0% and 100%) of participants (n = 480) who linked each image to a subset of the EsSense25 terms (Image2Label task, filled diamonds) and each EsSense25 term to a subset of the food images (Label2Image task, open diamonds). Emotion terms and food images are arranged in order of increasing average valence along the horizontal and vertical axes.
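Figure 10 summarizes, for every image–term pair, the fraction of participants who linked the two. As a purely illustrative sketch (the variable names and the response format below are assumptions, not the authors' code), such a percentage matrix could be tabulated as follows:

```python
# Hypothetical sketch: tabulate the image-term linkage percentages plotted
# in Figure 10 from raw Image2Label responses. Each response is assumed to
# be one (image_id, set-of-selected-terms) pair per participant per image,
# and every image is assumed to appear in at least one trial.
import numpy as np

def linkage_percentages(responses, images, terms):
    """responses: iterable of (image_id, selected_terms) tuples."""
    counts = np.zeros((len(images), len(terms)))
    trials = np.zeros(len(images))
    img_idx = {im: i for i, im in enumerate(images)}
    term_idx = {t: j for j, t in enumerate(terms)}
    for image_id, selected in responses:
        trials[img_idx[image_id]] += 1
        for t in selected:
            counts[img_idx[image_id], term_idx[t]] += 1
    return 100.0 * counts / trials[:, None]  # percent per image-term pair
```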
Figure 11. Distribution of the mapping responses over the valence–arousal space, for four different words (disgusted, happy, guilty, and understanding). (A) Distribution of the direct mapping responses, when participants were asked to map a word directly to the EmojiGrid (Label2Grid). (B) Distribution of the indirect (Label2Image2Grid) term-to-EmojiGrid mapping responses, when participants first linked a term to an image (Label2Image task) and then rated the image in terms of valence and arousal (Image2Grid task). Crosses signify group mean arousal and valence ratings.
Figure 12. Mapping the EsSense25 terms to the EmojiGrid. Red symbols represent the average positions of the emotion terms that were directly mapped to the EmojiGrid in the Label2Grid task. Blue symbols represent the average positions of the emotion terms that were mapped to the EmojiGrid via the food images (Label2Image task followed by the Image2Grid task). Green arrows connect corresponding terms obtained through direct and indirect mapping. The gray labels correspond to the 12 domains (delineated by the gray radial axes) of the core affect model presented by Yik et al. [27]. The dashed circle serves as a visual reference for the eccentricity of the data points.
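The indirect (Label2Image2Grid) mapping shown in Figures 11B and 12 chains the two tasks together: a term inherits the valence–arousal ratings of the images it was linked to. A minimal sketch of this idea, assuming a simple average over a participant's selected images (the actual aggregation used in the study may differ):

```python
# Hypothetical sketch of the indirect Label2Image2Grid mapping: a term's
# (valence, arousal) position is taken as the mean EmojiGrid rating of the
# images the participant linked to that term in the Label2Image task.
import numpy as np

def indirect_mapping(selected_image_ids, image_ratings):
    """selected_image_ids: images one participant linked to one term.
    image_ratings: dict image_id -> (valence, arousal) from Image2Grid."""
    va = np.array([image_ratings[i] for i in selected_image_ids])
    return va.mean(axis=0)  # (mean valence, mean arousal) for this term

# Example with the Table 1 ratings of two moldy-food images:
ratings = {123: (6.71, 78.51), 190: (7.09, 76.80)}
print(indirect_mapping([123, 190], ratings))  # -> [ 6.9    77.655]
```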
Table 1. The mean valence (V) and arousal (A) ratings resulting from the Image2Grid task and the corresponding values provided in the CROCUFID database (Vc, Ac; [40]) for each of the 20 selected food images. ID is the original image identifier in the CROCUFID database; the index c refers to values from the CROCUFID database.
| ID  | Food Image   | V     | A     | Vc    | Ac    |
|-----|--------------|-------|-------|-------|-------|
| 123 | Salad1_mold  | 6.71  | 78.51 | 5.11  | 86.36 |
| 190 | Banana_mold  | 7.09  | 76.80 | 6.77  | 75.80 |
| 167 | Salad2_mold  | 8.24  | 76.65 | 6.62  | 83.15 |
| 152 | Pear_mold    | 9.05  | 72.41 | 8.43  | 74.89 |
| 175 | Carpaccio    | 38.09 | 46.51 | 34.92 | 53.79 |
| 13  | Salad2_fresh | 48.61 | 46.25 | 53.13 | 44.10 |
| 82  | Olives_feta  | 48.65 | 56.92 | 47.61 | 58.79 |
| 136 | Salami       | 51.65 | 51.07 | 42.23 | 52.03 |
| 250 | Burger       | 57.96 | 47.95 | 58.08 | 49.44 |
| 93  | Boiled_eggs  | 58.06 | 47.49 | 58.31 | 46.31 |
| 47  | Cookies      | 58.94 | 50.17 | 63.05 | 50.39 |
| 9   | Salad1_fresh | 62.05 | 51.11 | 60.97 | 48.90 |
| 36  | Cucumber     | 62.82 | 46.36 | 62.67 | 50.57 |
| 70  | Melon        | 64.20 | 54.04 | 63.74 | 54.00 |
| 44  | Pineapple    | 66.69 | 53.80 | 70.39 | 60.70 |
| 162 | Bellpeppers  | 67.88 | 50.58 | 70.62 | 50.56 |
| 43  | Orange       | 70.57 | 51.73 | 70.77 | 55.10 |
| 4   | Apple        | 74.18 | 49.43 | 66.92 | 54.16 |
| 145 | Pastries     | 77.62 | 65.69 | 79.07 | 65.89 |
| 147 | Strawberries | 79.62 | 67.50 | 80.85 | 64.95 |
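As an illustration of how closely the V and Vc columns of Table 1 agree, a Pearson correlation over the 20 images can be computed directly from the tabulated values. Note that this is only an illustrative check, not necessarily the agreement measure used in the paper (which cites intraclass correlation guidelines [52,53] for its reliability analyses):

```python
# Illustrative sketch: correlate the Image2Grid valence ratings (V) with the
# CROCUFID reference values (Vc), both taken verbatim from Table 1.
from scipy.stats import pearsonr

V  = [6.71, 7.09, 8.24, 9.05, 38.09, 48.61, 48.65, 51.65, 57.96, 58.06,
      58.94, 62.05, 62.82, 64.20, 66.69, 67.88, 70.57, 74.18, 77.62, 79.62]
Vc = [5.11, 6.77, 6.62, 8.43, 34.92, 53.13, 47.61, 42.23, 58.08, 58.31,
      63.05, 60.97, 62.67, 63.74, 70.39, 70.62, 70.77, 66.92, 79.07, 80.85]

r, p = pearsonr(V, Vc)
print(f"r = {r:.3f}, p = {p:.3g}")  # very high agreement between the studies
```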
Table 2. Mean valence and arousal ratings for each emotion term, determined from the direct (Label2Grid, L2G) and indirect (Label2Image2Grid, L2I2G) mapping of the terms to the valence–arousal space, together with the results of the Wilcoxon signed-rank test. Bold indicates a significant difference between the indirect and direct mappings. Here, n represents the number of participants who used the emotion term at least once.
| Emotion Term  | n   | Valence (L2G) | Valence (L2I2G) | p          | Arousal (L2G) | Arousal (L2I2G) | p          |
|---------------|-----|---------------|-----------------|------------|---------------|-----------------|------------|
| Understanding | 116 | 71.48         | 53.60           | **<0.001** | 30.03         | 43.47           | **<0.001** |
| Wild          | 132 | 65.62         | 52.77           | **<0.001** | 80.58         | 57.58           | **<0.001** |
| Secure        | 181 | 78.89         | 74.97           | 0.147      | 30.62         | 54.24           | **<0.001** |
| Aggressive    | 186 | 15.68         | 9.68            | **<0.001** | 84.88         | 72.90           | **<0.001** |
| Tame          | 190 | 58.83         | 55.03           | 0.0510     | 27.76         | 39.67           | **<0.001** |
| Adventurous   | 206 | 75.73         | 69.28           | **<0.001** | 73.64         | 60.32           | **<0.001** |
| Active        | 213 | 80.76         | 75.86           | **<0.005** | 65.31         | 61.89           | 0.211      |
| Warm          | 230 | 83.74         | 78.29           | **<0.001** | 38.60         | 62.83           | **<0.001** |
| Free          | 238 | 80.21         | 76.70           | **<0.001** | 57.86         | 59.44           | 0.656      |
| Guilty        | 242 | 24.24         | 60.23           | **<0.001** | 27.69         | 60.17           | **<0.001** |
| Loving        | 247 | 88.57         | 82.15           | **<0.001** | 59.48         | 71.06           | **<0.001** |
| Enthusiastic  | 258 | 84.53         | 78.06           | **<0.001** | 80.38         | 71.58           | **<0.001** |
| Nostalgic     | 261 | 76.56         | 74.75           | 0.199      | 34.51         | 58.11           | **<0.001** |
| Good          | 277 | 82.72         | 73.70           | **<0.001** | 38.21         | 57.39           | **<0.001** |
| Calm          | 306 | 71.96         | 69.30           | 0.108      | 24.59         | 48.02           | **<0.001** |
| Mild          | 330 | 58.79         | 53.75           | **<0.005** | 31.36         | 41.82           | **<0.001** |
| Satisfied     | 334 | 85.37         | 77.96           | **<0.001** | 40.49         | 60.13           | **<0.001** |
| Worried       | 340 | 13.56         | 14.46           | 0.967      | 31.47         | 64.09           | **<0.001** |
| Joyful        | 363 | 88.26         | 81.83           | **<0.001** | 78.26         | 69.31           | **<0.001** |
| Bored         | 364 | 36.81         | 36.83           | 0.635      | 16.51         | 37.13           | **<0.001** |
| Interested    | 377 | 72.80         | 69.77           | 0.379      | 55.57         | 58.03           | 0.213      |
| Pleasant      | 408 | 83.14         | 78.66           | **<0.001** | 40.04         | 59.14           | **<0.001** |
| Good natured  | 428 | 84.98         | 77.24           | **<0.001** | 51.40         | 59.01           | **<0.001** |
| Happy         | 432 | 90.27         | 81.21           | **<0.001** | 69.58         | 65.09           | **<0.001** |
| Disgusted     | 473 | 6.11          | 9.31            | **<0.001** | 86.17         | 77.16           | **<0.001** |
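The p-values in Table 2 come from Wilcoxon signed-rank tests on paired direct and indirect ratings. A self-contained sketch of such a comparison is shown below with simulated placeholder data; in the actual analysis, each pair would be one participant's two ratings of the same term:

```python
# Hypothetical sketch of the per-term comparison reported in Table 2: a
# Wilcoxon signed-rank test on paired direct (Label2Grid) and indirect
# (Label2Image2Grid) ratings. The arrays below are simulated placeholders.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
direct   = rng.uniform(60, 90, size=116)           # e.g., a term with n = 116
indirect = direct - rng.uniform(5, 25, size=116)   # systematically less extreme

stat, p = wilcoxon(direct, indirect)
print(f"W = {stat:.1f}, p = {p:.3g}")
```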
Table 3. Mapping between the EsSense25 emotion terms and the 12 domains of the core affect model presented by Yik et al. [27], as reported by Jaeger et al. [34] and as obtained in this study through both direct mapping (Label2Grid task) and indirect mapping (Label2Image2Grid). Fractional numbers correspond to relative positions within domains (see text). Numbers in boldface indicate mappings obtained in this study that consistently differed by more than two domains from those reported by Jaeger et al. [34].
| Emotion Term  | Core Affect Domain: Jaeger et al. [34] | Indirect Mapping | Direct Mapping |
|---------------|----------------------------------------|------------------|----------------|
| Adventurous   | 1                                      | 3.1              | 2.6            |
| Active        | 1                                      | 3.2              | 3.1            |
| Wild          | 1.5                                    | 1.7              | 1.9            |
| Enthusiastic  | 1.5                                    | 2.7              | 2.6            |
| Free          | 1.5                                    | 3.4              | 3.5            |
| Loving        | 2                                      | 2.9              | 3.5            |
| Joyful        | 2                                      | 3.0              | 2.8            |
| Happy         | 3                                      | 3.1              | 3.1            |
| Interested    | 3                                      | 3.3              | 3.5            |
| Good natured  | 3                                      | 3.4              | 3.9            |
| Pleasant      | 3                                      | 3.4              | 4.6            |
| Good          | 3                                      | 3.4              | 4.7            |
| Satisfied     | 4                                      | 3.3              | 4.5            |
| Secure        | 4                                      | 3.7              | 5.1            |
| Warm          | 4.5                                    | 3.2              | 4.6            |
| Nostalgic     | 4.5                                    | 3.4              | 5.0            |
| Understanding | 4.5                                    | 6.0              | 5.4            |
| Mild          | 4.5                                    | 6.2              | 6.2            |
| Calm          | 5                                      | 4.2              | 5.6            |
| Tame          | 6                                      | 6.1              | 6.3            |
| Bored         | 7                                      | 8.5              | 9.3            |
| Guilty        | 10                                     | **2.5**          | **2.6**        |
| Disgusted     | 10                                     | 10.9             | 11.3           |
| Aggressive    | 10.5                                   | 11.0             | 11.5           |
| Worried       | 11                                     | 10.7             | 9.1            |
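The fractional domain numbers in Table 3 locate each term's mean (valence, arousal) position within the twelve 30° segments of the circumplex. One plausible way to compute such a number is sketched below; the segment numbering and rotation convention are assumptions on our part, so the output need not reproduce the paper's exact values, which follow the parameterization of Yik et al. [27]:

```python
# Hypothetical sketch: convert a (valence, arousal) position to a fractional
# 12-domain number by dividing the circle around the grid center into twelve
# 30-degree segments. Assumed clock-style convention: 12 at the top (high
# arousal), 3 at the right (high valence), numbers increasing clockwise.
import math

def domain_number(valence, arousal, center=50.0):
    """Fractional domain number of a point in the 0-100 valence-arousal grid."""
    theta = math.degrees(math.atan2(arousal - center, valence - center))
    d = (90.0 - theta) / 30.0 % 12.0   # rotate so 12 o'clock is the top
    return 12.0 if d == 0 else d

# Example: the direct mapping of "happy" (V = 90.27, A = 69.58 from Table 2)
print(round(domain_number(90.27, 69.58), 1))  # -> 2.1 under this convention
```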