Article

Affective EEG Decoding Generalizes Across Colormap and Exposure Time

1 Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy
2 Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068 Rovereto, Italy
3 Department of Medicine and Surgery, University of Parma, Via Volturno 39, 43125 Parma, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1779; https://doi.org/10.3390/app16041779
Submission received: 16 December 2025 / Revised: 26 January 2026 / Accepted: 6 February 2026 / Published: 11 February 2026
(This article belongs to the Special Issue Multimodal Emotion Recognition and Affective Computing)

Abstract

Viewing emotional pictures modulates electrocortical activity during the first second, with functional properties that reflect the type of processing being carried out. Recently, the investigation of electrocortical activity has been aided by machine learning techniques, such as multivariate pattern analysis (MVPA). Building on previous studies that used MVPA to classify emotional and neutral stimuli, here we investigate electroencephalographic (EEG) changes while a sample of n = 15 participants viewed emotional and neutral scenes that could be presented in color or in grayscale, and for either a short (24 ms) or a long (6 s) exposure time. A linear classifier was used to classify EEG patterns as resulting from the viewing of emotional (pleasant, unpleasant) vs. neutral scenes, and to assess the extent to which scalp activation patterns are specific to the perceptual conditions under which a scene is viewed (i.e., color or grayscale, short or long exposure time) or generalize across viewing conditions. We observed that emotional content could be significantly decoded through MVPA, with an earlier classification onset for pleasant-neutral than for unpleasant-neutral classification. Moreover, this classification generalized across perceptual conditions, indicating that the symbolic meaning of natural scenes drives the emotional modulation of scalp activity. These results further indicate that, within the first second after the onset of natural scenes, emotional states can be decoded from the EEG signal, and that such learning can be applied to flexibly classify emotional states under perceptually different conditions.

1. Introduction

The investigation of emotional responses has recently been complemented by the adoption of methods typical of artificial intelligence, such as machine learning. Aiming both at a theoretical investigation into the study of emotional responses and at the detection of emotional responses in applied contexts, several studies have sought to determine the extent to which emotional responses can be decoded [1,2]. Emotional responses are phasic and intense changes that happen while viewing, remembering, or anticipating significant stimuli, such as natural scenes, videos, music, or real-life events [3]. When these contents are viewed or anticipated, changes are observed in the subjective state (how a person feels), at the expressive/behavioral level (a person's behavior), and at the level of physiological responses in the central nervous system (e.g., electrocortical or metabolic brain activity) and in the autonomic nervous system (e.g., modulation of electrodermal responses through its sympathetic branch). Importantly, while emotions subserve an adaptive function by preparing for action, different components of the emotional response serve different functions; for instance, expressive behavior serves a communicative function toward in-group members, while activation of the sympathetic nervous system prepares the organism for overt action when highly relevant situations are encountered [4,5,6]. At the central level, brain processes support the processing, evaluation, and response to emotional stimuli, and different brain activities are sensitive to different properties of external stimuli.
Recently, in the domains of affective computing and brain–computer interfaces (BCIs), several studies have focused on developing learning architectures that can classify metabolic or electrocortical signals either within discrete emotional categories, or along continuous emotional dimensions [7]. Most of these studies relied on electrocortical changes recorded on the scalp through electroencephalography (EEG) while participants viewed videos over a sustained period (e.g., datasets SEED, DEAP, DREAMER; video length between 1 and 4 min; [8,9,10,11]). For instance, approaches that integrate temporal, spatial, and frequency information via combinations of convolutional neural networks (CNNs) and long short-term memory networks have substantially improved EEG decoding while viewing emotional versus neutral video clips (e.g., 4D-CRNN; [12]). This spatiotemporal integration has been further enhanced by incorporating attention mechanisms that assign differential weights to distinct EEG features ([13]), or by reducing the tendency of models to exploit idiosyncratic features in temporally adjacent EEG segments (temporal-difference minimizing neural network, TDMNN; [14]). To address inter-individual variability, additional methods have been proposed, including domain-adversarial neural networks that suppress subject-specific features in favor of more generalizable emotion-related representations (DANN-MAT; [15]), as well as contrastive learning approaches that maximize similarity between signals recorded at nearby electrodes while minimizing similarity across distant electrodes (CLRA; [16]), thereby improving cross-subject generalization. Moreover, other studies employed machine learning analysis of the EEG signal in real-life contexts to identify states that interfere with driving (such as stress or low attention) [17,18]. Taken together, these studies indicate remarkable success (up to 98% accuracy) in decoding emotional states from EEG over sustained periods. However, it is not clear to what extent emotional responses are decodable from the phasic EEG changes elicited during the first second of viewing static natural scenes.
The decoding of emotional responses from phasic changes occurring within the first second of viewing a scene is important, as it allows for more prompt detection of and response to changes in the organism's state. Several studies have investigated electrocortical changes while viewing emotional scenes, and observed that the EEG signal is modulated by the visual stimulus both in a time-locked manner (i.e., with changes whose latency is temporally tied to the onset of the visual stimulus; evoked changes) and in a less time-locked manner, meaning that changes are present but differ in latency from trial to trial (induced changes; [19]). When averaging over several trials that repeat the same experimental condition, time-locked evoked EEG changes result in event-related potentials (ERPs), while induced changes are usually observed using time-frequency analyses. Notably, analyses of both ERPs and EEG oscillations indicate that, within the first second of picture viewing, electrocortical activity is modulated by the arousing value of natural scenes. ERP research has consistently indicated that viewing emotional scenes elicits modulation of electrocortical activity both at an earlier time interval at occipito-temporal sites, with less positive ERPs for emotional compared with neutral stimuli (early posterior negativity, EPN) [20,21,22], and at a later time interval as a relative positivity for emotional over neutral trials over centro-parietal areas (late positive potential, LPP) [23,24,25,26,27]. The emotional modulation of both components is driven by the symbolic value, rather than by the perceptual features, of natural scenes. Moreover, different factors affect the emotional modulation of these two components, with the EPN being more sensitive to low-level factors such as stimulus size and complexity, e.g., [28,29], and the LPP to higher-level ones such as stimulus repetition, e.g., [30,31,32]. Taken together, these results indicate that time-locked EEG activity is functionally sensitive to different external manipulations over time. One aim of the present study is to investigate whether perceptual manipulation affects the extent to which emotional responses can be decoded from the EEG signal. In order to flexibly detect and respond to changes in the organism's emotional state, it is important that the decoding of these states is not contingent on specific perceptual conditions but generalizes across viewing conditions.
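As a toy numerical illustration of this evoked/induced distinction (all signals below are synthetic; the 4 Hz and 10 Hz components, jitter range, and noise level are arbitrary demonstration choices, not values from the study), averaging raw epochs preserves the time-locked component, whereas averaging per-trial spectra also retains the latency-jittered one:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 1, 1 / 256)                     # 1 s of signal at 256 Hz
trials = []
for _ in range(100):
    jitter = rng.uniform(0, 0.25)                # onset latency varies per trial
    burst = np.where(t > jitter, np.sin(2 * np.pi * 10 * (t - jitter)), 0.0)
    evoked = np.sin(2 * np.pi * 4 * t)           # fixed-latency (time-locked) wave
    trials.append(evoked + burst + rng.normal(0, 0.5, t.size))
trials = np.asarray(trials)

erp = trials.mean(axis=0)                        # averaging raw epochs -> ERP
erp_amp = np.abs(np.fft.rfft(erp))               # spectrum of the average
avg_amp = np.abs(np.fft.rfft(trials, axis=1)).mean(axis=0)  # mean of spectra

# The time-locked 4 Hz wave survives averaging; the jittered 10 Hz burst
# largely cancels in the ERP but remains visible in the per-trial spectra.
print(f"4 Hz:  ERP {erp_amp[4]:.0f} vs single-trial {avg_amp[4]:.0f}")
print(f"10 Hz: ERP {erp_amp[10]:.0f} vs single-trial {avg_amp[10]:.0f}")
```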
Machine learning approaches have been recently used to detect recognizable patterns in single-trial EEG signals (multivariate pattern analysis, MVPA) [1,2,33,34,35,36]. In a training phase, the EEG signal from single trials is given as input to an artificial classifier, which is expected to output the experimental condition to which the trial belonged (supervised training). Then, the output classification label is compared with the actual trial condition (ground truth); depending on the success of the classification, the classifier’s parameters are modified until an acceptable validation accuracy is achieved [37]. After such training, new neural signals are presented to the classifier, and classification accuracy is tested. If such classification is successful, it is concluded that neural activity differed between conditions, at least to the extent that allows for sufficient discrimination [38]. Recently, electrocortical activity in up to 16 classes has been decoded above chance (diagonal decoding) [37], testifying to the usefulness of this method in investigating neural activity [33,34].
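The following minimal sketch illustrates this supervised train/test logic (synthetic data stand in for single-trial EEG, and scikit-learn's LDA replaces the toolboxes used in the cited studies):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, ch = 240, 256                                 # trials, electrodes
X = rng.normal(size=(n, ch))                     # single-trial EEG patterns
y = np.repeat([0, 1], n // 2)                    # ground-truth condition labels
X[y == 1] += 0.1                                 # small condition-related signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0,
                                          stratify=y)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)   # supervised training
pred = clf.predict(X_te)                             # classify unseen trials

# Comparing the output labels with the ground truth: accuracy above chance
# (0.5) implies that neural activity differed between the two conditions.
print(f"test accuracy = {(pred == y_te).mean():.2f}")
```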
MVPA can also be used to test for functional similarities between experimental conditions (cross-decoding). Specifically, to-be-learned trials may belong to one experimental context (e.g., an attend condition), while test trials may belong to a different context (e.g., a free-viewing condition). If neural patterns, and hence classification accuracy, are closely related to the experimental context in which learning happened, then no transfer to test trials will be observed, and testing accuracy in a different context will be at chance. On the other hand, if the experimental context does not affect neural patterns, then similarly successful classification will be observed in both the original and the different context. In this case, it is concluded that the EEG signals in the two experimental contexts are similar enough to allow for successful transfer [39,40]. In the context of affective perception, it is expected that factors that dampen electrocortical correlates of affective modulation (e.g., perceptual conditions for early affective ERP modulation) will reduce cross-decoding; in contrast, factors that do not influence affective processing will allow for efficient cross-decoding.
Recent studies applied MVPA to the analysis of emotional responses to natural scenes. In a previous study [2] that investigated neural responses using functional magnetic resonance imaging (fMRI) and EEG, it was observed that the affective content of trials could be decoded beginning approximately 200 ms after picture onset, and that decoding of pleasant vs. neutral stimuli temporally preceded the decoding of unpleasant vs. neutral stimuli (diagonal affective decoding). The emotional-neutral classification pattern was stable in the two seconds following stimulus onset, suggesting that sustained neural processes analyze emotional information. Similarly, another study decoded emotional states above chance based on steady-state visual evoked potentials (SSVEPs) [41]. However, little is known about the extent to which decoding results generalize across conditions (cross-decoding).
The aim of the present study is to investigate the extent to which the EEG signal supports flexible decoding of emotional states across perceptual conditions, enabling prompt detection of and response to changes in the organism's emotional state. To this end, the present study re-analyzes data from a previous EEG experiment [42], which involved the presentation of emotional (pleasant or unpleasant) and neutral scenes, while manipulating picture colormap (color vs. grayscale) and exposure time (short vs. long). The univariate results of the previous study indicated that the affective modulation of the LPP was not affected by either colormap or exposure time. Moreover, affective modulation of the EPN was more pronounced for pleasant than for unpleasant contents and was likewise unaffected by colormap or exposure time [42]. Based on these data, and on the descriptively defined timing of early and late ERP components, the present analysis aimed to investigate:
(a)
Whether affective processing can be decoded from EEG activity, specifically focusing on the time intervals typically used for EPN and LPP.
(b)
Whether the training that allows for successful decoding transfers between the conditions of colormap (color to grayscale, and vice versa) and exposure time (short to long, and vice versa). Based on previous studies that investigated the effects of perceptual manipulations on electrocortical responses to affective pictures, it is expected that the earlier interval (150–300 ms, temporally corresponding to the EPN) will be more sensitive to perceptual manipulations.
To this end, an established decoding method (MVPA via linear discriminant analysis) was used to investigate the functional properties of EEG changes. More specifically, we do not propose a novel machine learning algorithm in the field of affective computing, but rather investigate the functional conditions that enable learning and make it generalizable across conditions. The generalizability of decoding is a fundamental condition for extending the classification of EEG patterns beyond the conditions (here, color and exposure time) in which learning occurred, and is therefore necessary for flexibly detecting and responding to emotional changes in applied contexts.

2. Materials and Methods

2.1. Participants

Sixteen participants (eight women and eight men, age M = 27.27, SD = 3.86) with normal or corrected-to-normal visual acuity were recruited for this study. The choice of this sample size was supported by a formal power analysis performed with G*Power 3.1 [43], aiming at determining the required sample size for an effect size of ηp² = 0.14 [44], a power of 1 − β = 0.80, and an α level of 0.05. The required sample size was fifteen participants, with an actual power of 0.83. At the time of the experiment, four participants were students, two were unemployed or looking for an occupation, and ten were employed. Due to a failure in data recording for one participant, the final sample consisted of 15 participants. Before the experiment, each participant provided informed consent. The experimental protocol conforms to the Declaration of Helsinki and was approved by the Ethical Committee of the University of Bologna.

2.2. Stimuli and Equipment

Stimuli comprised a total of 200 pictures taken from various sources, including the International Affective Picture System (IAPS) [45] and public domain images accessible on the Internet, portraying pleasant contents (n = 60: erotic and romantic couples, opposite-sex nudes), unpleasant contents (n = 60: human threat scenes and injured bodies), and neutral contents (n = 60: people in daily contexts). Concerning opposite-sex nudes, each participant only viewed pictures of the sex opposite to their sex assigned at birth. For all pictures, two versions were created, one in color and one in grayscale. Stimuli were displayed on a 21-inch Samsung SyncMaster (Suwon, Republic of Korea) Cathode-Ray Tube (CRT) monitor located 100 cm from the participant's eyes, with an 800 × 600 pixel resolution and an 85 Hz refresh rate; pictures subtended 22.6 degrees of visual angle horizontally and 17.1 degrees vertically. Pictures were normalized in brightness and contrast (M = 0.61, SD = 0.02 on a 0–1 scale) and presented against a gray background (0.62 brightness on a 0–1 scale). The experiment was conducted using E-Prime 1.0 [46].

2.3. Experimental Procedure

Each trial began with a fixation cross displayed for 1 s, followed by a grayscale or color picture presented for either 24 ms or 6 s. These exposure times were chosen to be shorter than a saccade, and thus not to allow for visual exploration of the picture (short exposure time), or to allow for sustained picture viewing (long exposure time). Short-exposure stimuli were not masked, allowing for visual persistence on the retina [47] and contributing to picture understanding. Each participant viewed each picture in only one of the four perceptual conditions (color pictures shown for 6 s, color pictures shown for 24 ms, grayscale pictures shown for 6 s, grayscale pictures shown for 24 ms), resulting in a total of 180 trials across the experiment. Across participants, each picture was assigned to all four perceptual conditions. Seven seconds after the appearance of the picture, participants used the Self-Assessment Manikin (SAM) [48] to rate the valence and arousal elicited by the image [42]. Following the ratings, a blank screen was displayed for a random interval ranging from 2 to 3 s.

2.4. EEG Recording and Pre-Processing

The EEG was recorded at 256 Hz from 256 active electrodes, referenced to an active electrode (CMS = common mode sense, with the ground in the additional electrode DRL = driven right leg), using an ActiveTwo BioSemi system. During data analysis, data were re-referenced to the average of all channels. The EEG was epoched from −500 to +1000 ms relative to stimulus onset [42], and a 100 ms pre-stimulus baseline was subtracted. MVPA is robust to artifacts and implicitly models noise in the data [35,49]; therefore, all recorded trials were analyzed without applying artifact rejection or electrode interpolation [50,51]. EEG data were down-sampled to 64 Hz by averaging consecutive time points, such that each resulting time point represented 15.6 ms of averaged signal (in a preliminary analysis, consistent results were observed without down-sampling). No anti-aliasing low-pass filters were applied before down-sampling, in order to prevent the false-positive decoding accuracies that filtering can introduce [35,52,53].
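A minimal sketch of the down-sampling step, assuming epochs stored as a trials × channels × time array; the factor of 4 follows from 256 Hz/64 Hz, while the function name and array shapes are illustrative:

```python
import numpy as np

def downsample_by_averaging(epochs: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average blocks of `factor` consecutive samples along the time axis."""
    n_trials, n_channels, n_times = epochs.shape
    n_times -= n_times % factor                   # trim any trailing remainder
    blocks = epochs[..., :n_times].reshape(n_trials, n_channels, -1, factor)
    return blocks.mean(axis=-1)

# A 1.5 s epoch (-500 to +1000 ms) at 256 Hz holds 384 samples; averaging
# every 4 samples yields 96 points at 64 Hz, each spanning ~15.6 ms.
epochs = np.random.randn(10, 256, 384)            # trials x channels x time
print(downsample_by_averaging(epochs).shape)      # -> (10, 256, 96)
```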

2.5. Multivariate Pattern Analysis

MVPA was implemented using ADAM 1.14.beta (Amsterdam Decoding and Modeling, http://www.fahrenfort.com/ADAM.htm, accessed on 5 February 2026), a MATLAB (version R2021b) toolbox for decoding M/EEG data [37]. In all analyses, decoding was performed at each time point within the epoched EEG data and across all 256 channels. The session and analysis flowchart is shown in Figure 1. First, we performed a decoding analysis (affective condition decoding) to determine whether emotional scenes evoke specific affective representations compared to neutral images. A linear discriminant analysis (LDA) classifier was trained to distinguish between emotional and neutral scenes (in this analysis, data were collapsed across the four perceptual conditions). The LDA classifier was chosen because previous studies indicated that it performs well in classifying neural signals [35]. Specifically, a two-class classification approach was used: pleasant vs. neutral and unpleasant vs. neutral. To prevent over-fitting, a 15-fold cross-validation partitioning was used (training on 14 folds and testing on the left-out fold), with each fold containing one trial from each of the four perceptual conditions for both emotional and neutral images.
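The sketch below illustrates the logic of this time-resolved (diagonal) decoding scheme rather than the ADAM implementation itself: an LDA classifier is trained and tested at every time point under stratified 15-fold cross-validation, with performance summarized per time point (data are synthetic, and the AUC scoring anticipates Section 2.6):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 256, 96
X = rng.normal(size=(n_trials, n_channels, n_times))  # trials x channels x time
y = np.repeat([0, 1], n_trials // 2)                  # neutral (0) vs emotional (1)

auc = np.zeros(n_times)
cv = StratifiedKFold(n_splits=15, shuffle=True, random_state=0)
for t in range(n_times):                              # one classifier per time point
    fold_auc = [
        roc_auc_score(
            y[test],
            LinearDiscriminantAnalysis()
            .fit(X[train, :, t], y[train])
            .decision_function(X[test, :, t]),
        )
        for train, test in cv.split(X[:, :, t], y)
    ]
    auc[t] = np.mean(fold_auc)

# `auc` is a decoding time course: values reliably above 0.5 mark the latencies
# at which emotional and neutral trials can be discriminated.
```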
After that, we carried out a cross-condition decoding analysis to investigate whether affective representations are stable when changing perceptual conditions. Using an LDA classifier to distinguish emotional from neutral scenes, we replaced the usual cross-validation scheme with a new training/testing split. To assess the influence of color, the training set exclusively comprised trials featuring scenes in color (both 6 s and 24 ms presentations), while the testing set consisted of trials presented in grayscale (both 6 s and 24 ms; CtoG), or vice versa (training on grayscale and testing on color; GtoC). Similarly, to examine the influence of exposure time, training involved trials presented for 6 s (both color and grayscale) and testing involved trials presented for 24 ms (both grayscale and color; LtoS), or vice versa (training on short and testing on long exposure time; StoL). Thus, this cross-condition procedure allows us to evaluate whether neural representations evoked by emotional scenes are stable across different perceptual conditions, such as presentation time and the presence or absence of color.
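A hedged sketch of this cross-condition scheme (synthetic data; the variable names and the absence of a time dimension are simplifications for illustration, not ADAM's API):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, ch = 90, 256                                      # trials per set, electrodes
X_color, y_color = rng.normal(size=(n, ch)), np.repeat([0, 1], n // 2)
X_gray, y_gray = rng.normal(size=(n, ch)), np.repeat([0, 1], n // 2)

clf = LinearDiscriminantAnalysis().fit(X_color, y_color)   # train on color (CtoG)
auc_ctog = roc_auc_score(y_gray, clf.decision_function(X_gray))

clf = LinearDiscriminantAnalysis().fit(X_gray, y_gray)     # train on grayscale (GtoC)
auc_gtoc = roc_auc_score(y_color, clf.decision_function(X_color))

# AUC near 0.5 would indicate condition-specific patterns; AUC reliably above
# chance indicates that the emotional/neutral pattern transfers across colormaps.
print(f"CtoG AUC = {auc_ctog:.2f}, GtoC AUC = {auc_gtoc:.2f}")
```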
For both affective condition decoding and cross-decoding, we used two types of analysis: diagonal decoding and topographical maps. (1) Diagonal decoding involved training and testing a classifier on data recorded at each time point within the epoched EEG data and across all channels. (2) Topographical maps were created to gain insight into the scalp activations underlying decodable information. We calculated the product between the classifier weights obtained from each electrode and the original data covariance matrix [54]. Topographical maps were computed for the two time windows of interest: an earlier one from 150 to 300 ms (overlapping with the EPN), and a later one from 400 to 800 ms (overlapping with the LPP).
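In code, the weight-to-pattern transformation of [54] can be sketched as follows (a minimal illustration on synthetic data, not the toolbox implementation):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
X = rng.normal(size=(240, 256))               # trials x channels (one time window)
y = np.repeat([0, 1], 120)                    # neutral vs emotional labels

clf = LinearDiscriminantAnalysis().fit(X, y)
w = clf.coef_.ravel()                         # raw weights: hard to interpret
pattern = np.cov(X, rowvar=False) @ w         # activation pattern per electrode

# `pattern` (one value per electrode) can be plotted as a scalp map; unlike the
# raw weights, it reflects where class-related signal is expressed rather than
# how channels are weighted to suppress noise.
print(pattern.shape)                          # -> (256,)
```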
To mitigate variability in decoding performance due to random trial assignment across folds, we adopted a pipeline to objectively identify the iteration most representative of the true decoding performance (see Appendix A). For cross-decoding analysis, this procedure was unnecessary because the training and testing sets are independent by design. All scripts for analysis are available at https://osf.io/t9y2f/?view_only=a93d3fc165204d3785be1490ee1b222c, accessed on 5 February 2026.

2.6. Statistical Testing

Statistical analyses were performed in ADAM 1.14.beta [37]. The area under the curve (AUC) [37,55] was used as the performance measure of the classifier, as it accounts for both the hit rate (the proportion of trials A correctly classified as A) and the false alarm rate (the proportion of trials B incorrectly classified as A). The AUC considers the degree of confidence that the classifier holds regarding the class membership of individual trials and, therefore, is regarded as a better measure for performing decoding analysis compared to the raw accuracy [37]. AUC values at each time point were assessed with two-tailed t-tests against the chance level (0.5). Cluster-based permutations were then used to perform multiple-comparison correction. Specifically, contiguously significant t-tests belonging to the same cluster were summed to determine the respective cluster size, and compared against the null distribution of cluster size obtained from 10,000 random permutations [56]. An additional analysis, reported in Appendix B, investigated the temporal generalization of decoding.
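The cluster-based permutation logic can be sketched as follows; this is a simplified stand-in (fewer permutations, positive and negative clusters pooled via absolute t-values, sign-flipping as the permutation scheme) rather than the exact ADAM procedure:

```python
import numpy as np
from scipy import stats

def clusters(mask):
    """Return (start, stop) index pairs of contiguous True runs in `mask`."""
    edges = np.flatnonzero(np.diff(np.r_[0, mask.astype(np.int8), 0]))
    return list(zip(edges[::2], edges[1::2]))

def max_cluster_mass(data, alpha=0.05):
    # One-sample t-test at each time point (subjects x time, tested against 0);
    # simplified: positive and negative clusters are pooled via |t|.
    t, p = stats.ttest_1samp(data, 0.0, axis=0)
    return max((np.abs(t[a:b]).sum() for a, b in clusters(p < alpha)), default=0.0)

rng = np.random.default_rng(4)
auc_vs_chance = rng.normal(0.02, 0.05, size=(15, 96))    # AUC - 0.5, 15 subjects

observed = max_cluster_mass(auc_vs_chance)
null = np.array([max_cluster_mass(auc_vs_chance * rng.choice([-1, 1], (15, 1)))
                 for _ in range(1000)])                  # sign-flip permutations
print(f"cluster mass = {observed:.1f}, p = {(null >= observed).mean():.3f}")
```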
The same procedure was applied to the topographical maps obtained from the classifier weights. Significant sensors were selected using a two-tailed t-test against zero, and then corrected with a cluster-based permutation to identify clusters of significant contiguous electrodes.
For the univariate analysis of these data, as well as for a detailed analysis of the relationship between ERPs and subjective responses, the reader is referred to [42].

3. Results

3.1. Affective Condition Decoding

Diagonal decoding achieved significantly above-chance accuracies for EEG epochs that were elicited by viewing both pleasant and unpleasant scenes (Figure 2A). Pleasant vs. neutral decoding revealed a significant cluster ranging from 150 to 980 ms after picture onset (p < 0.0001). Unpleasant vs. neutral reached significant accuracies within two clusters, the first one from 290 to 760 ms (p < 0.0001) and the second, smaller one from 790 to 910 ms (p = 0.018).
In terms of topographical maps (Figure 2B), affective condition decoding for pleasant vs. neutral showed two clusters of significant electrodes in the early time window from 150 to 300 ms, one positive scalp distribution covering central-frontal regions (p = 0.0002), and one negative distribution over occipital regions (p = 0.024). The later time window from 400 to 800 ms showed a scalp distribution extending from frontal to central-parietal regions (p = 0.0005). For the unpleasant vs. neutral decoding, in the early time window, a cluster of electrodes covering central-parietal regions revealed significant results (p = 0.01), although diagonal decoding was not significant in the same interval. Finally, the later time window revealed a significant cluster with a scalp distribution covering central-parietal regions (p = 0.0014).

3.2. Generalizability of Affective Representations Across Perceptual Conditions

The cross-decoding analysis (Figure 3) aimed to test the extent to which affective representations in a perceptual condition would generalize to a different perceptual condition. Overall, the results indicate that changes in color and exposure time did not disrupt affective decoding; however, they altered the timing with which representations of unpleasant images are decodable from neutral ones.
When examining color changes, training on color scenes and testing on their grayscale counterparts (CtoG), as well as the opposite combination (GtoC), both led to significant above-chance accuracies. In the color-to-grayscale (CtoG) condition, pleasant vs. neutral decoding was significantly above chance from 130 to 990 ms after picture onset (p < 0.0001). Unpleasant vs. neutral decoding reached significance later, from 560 to 910 ms (p = 0.0006). Similar results were observed in the opposite training/testing subdivision from grayscale to color (GtoC), with a significant cluster for pleasant vs. neutral from 150 to 990 ms (p < 0.0001), and for unpleasant vs. neutral from 540 to 770 ms (p = 0.002).
For changes in exposure time, training on scenes presented for 6 s and testing on short exposures (24 ms; LtoS), as well as the reverse combination (StoL), both yielded significantly above-chance accuracies for pleasant and unpleasant scenes. In the long-to-short exposure condition (LtoS), pleasant vs. neutral showed significant decoding accuracies in a cluster from 150 to 760 ms (p < 0.0001), and in a second, smaller cluster from 880 to 960 ms (p = 0.024). Accuracy of unpleasant vs. neutral decoding reached significance in a later time window, from 490 to 700 ms (p = 0.0018), and from 870 to 930 ms (p = 0.015). Concerning short-to-long exposure times (StoL), pleasant vs. neutral decoding was significant from 160 to 990 ms (p = 0.0001). With regard to unpleasant vs. neutral, three significant clusters of decoding accuracies were observed, namely from 370 to 460 ms (p = 0.025), from 490 to 680 ms (p = 0.0033), and from 840 to 930 ms (p = 0.012).
An additional analysis aimed at comparing the generalization of decoding across color vs. across exposure time (Figure 4). To this end, we computed the average of cross-decoding of colormap (CtoG and GtoC), obtaining significant decoding accuracies for pleasant vs. neutral starting at 150 ms (p < 0.0001), and for unpleasant vs. neutral from 540 ms onward (p = 0.0024). The same procedure for cross-decoding of exposure time (LtoS and StoL) yielded similar results. Pleasant vs. neutral reached significance at 150 ms (p < 0.0001), and unpleasant vs. neutral in two time windows: the first from 490 to 700 ms (p = 0.0041) and the second from 790 to 930 ms (p = 0.0087). To statistically assess whether one perceptual condition exerted a stronger impact on affective modulation than the other, we directly compared cross-decoding performance across exposure time with cross-decoding across colormaps for pleasant vs. neutral scenes, and conducted the same comparison for unpleasant vs. neutral scenes. These analyses revealed no evidence that either perceptual condition had a greater impact than the other, for both pleasant and unpleasant conditions (all p > 0.05).
Concerning topographical maps (Figure 3B), in the color-to-grayscale (CtoG) condition, pleasant vs. neutral showed a positive distribution of sensors over central-frontal regions in both time windows (earlier: p = 0.0004; later: p = 0.0037), and a negative distribution over occipital regions in the early time window (p = 0.018). The topography of the unpleasant vs. neutral comparison did not show any significant electrodes in the earlier time window (largest nonsignificant cluster, p = 0.084), consistent with the lack of significant classification accuracies in the diagonal decoding analysis; in the later time window, a scalp distribution of significant positive weights over central-parietal regions was observed (p = 0.011). Similarly, in the cross-condition decoding from grayscale to color (GtoC), a positive scalp distribution covering central-parietal regions was observed for pleasant vs. neutral in both time windows (early: p = 0.0002; late: p = 0.0002), and a negative topography was observed over occipital regions in the earlier time window (p = 0.024). For the comparison of unpleasant vs. neutral, no significant sensors were detected in the early time window (largest nonsignificant cluster, p = 0.08), whereas a significant positive scalp distribution spanning central-parietal regions was observed in the late time window (p = 0.0003).
Cross-decoding from long to short (LtoS) of pleasant vs. neutral revealed a positive scalp distribution covering central-frontal regions in both time windows (early: p = 0.0002; late: p = 0.0014), and a negative one over occipital regions in the earlier time window (p = 0.019). Unpleasant vs. neutral cross-decoding showed no significant electrodes in the early time window (largest nonsignificant cluster, p = 0.15), whereas in the late time window a positive scalp distribution spanning central-parietal regions was observed (p = 0.008). Consistently, cross-decoding from short to long (StoL) of pleasant vs. neutral indicated a positive scalp distribution over central-parietal regions in both time windows (early: p < 0.0001; late: p = 0.0011), and a negative one over occipital regions in the earlier time window (p = 0.024). Unpleasant vs. neutral cross-decoding did not show any significant electrodes in the earlier time window (largest nonsignificant cluster, p = 0.066), while in the later time window a positive scalp distribution over central-parietal regions was observed (p = 0.0007).

4. Discussion

Consistent with previous results, affective category (emotional vs. neutral) could be decoded from the raw EEG signal at both earlier and later intervals, and the training that allows for successful decoding can be transferred between perceptual conditions. Notably, no significant decoding was observed before 150 ms. The present results are consistent with several previous studies that examined the electrocortical affective modulation of the EEG, both in terms of ERPs [20,21,23,24,25,26,27,30,31,57,58,59] and of EEG oscillations [60]. In light of the need to replicate and establish solid findings in the field of affective sciences [61], the present results add to previous case-by-case studies [62,63] and establish these two EEG time windows as the temporal intervals in which the most reliable affective modulations are observed. Moreover, the temporal generalization analysis indicated that when training was successful on earlier EEG signals (i.e., it allowed for accurate performance), above-chance accuracy was also observed when testing at later intervals. Altogether, this pattern indicates that, while EEG affective modulation may be delayed in some conditions (here, unpleasant vs. neutral compared with pleasant vs. neutral differentiation), once engaged, it remains sustained for a longer period.
Machine learning algorithms create and test models of the most likely response pattern (i.e., a model of neural response) associated with an experimental modulation, with the ultimate aim of predicting neural responses. In terms of analysis rationale, decoding differs from the causal analysis approach adopted, e.g., in inferential statistics (e.g., ANOVA tests), where independent variables are manipulated and the effect of those (known) variables is tested on dependent variables. While such statistical approaches are extremely informative in understanding the causal relationship between manipulated factors and observed measures, they do not allow one to establish the most likely pattern expected to occur in a given condition; this is one of the reasons why psychology and associated disciplines (e.g., psychophysiology, neurosciences) are sometimes suggested to have low predictive power [36]. Machine learning systems, on the other hand, are specifically designed to learn patterns from real data, predict new data, and assess the match between the predicted pattern and the new data. Here and in previous EEG studies, the EEG signal has been sufficient to distinguish between affective and neutral conditions [1,2] or to determine the orientation of simple stimuli [33,34]. Therefore, machine learning approaches are potentially useful in applied contexts, such as in brain–computer interfaces (BCIs), where the detection of EEG response patterns from single-trial data is required.
An additional aim was to investigate whether the electrocortical changes associated with affective valence were specific to particular experimental conditions (colormap and exposure time) or generalized across them. We observed that whenever successful decoding occurred in one condition, it transferred to the other perceptual conditions, similarly for colormap and exposure time. This result is consistent with the univariate result originally observed in ERPs [42] and with previous studies [20], which reported that neither the interaction between colormap and valence nor that between exposure time and valence was significant for the LPP or EPN. The generalization observed in the cross-decoding analysis complements the univariate finding of a lack of significant interaction, as it shows that training in one condition (e.g., with grayscale stimuli) systematically (i.e., significantly above chance) led to good decoding accuracy with both grayscale and color stimuli, avoiding the low-power and null hypothesis significance testing (NHST) issues that arise when a nonsignificant effect is observed [44,64]. Taken together, these results support the interpretation that the modulation of emotional response is primarily driven by semantic meaning [65,66,67] and not by perceptual features such as color [68,69,70]. These results are consistent with previous studies that examined the relationship between emotional response and visual recognition, either by degrading stimuli and assessing affective response [66,67,71,72,73] or by measuring the latency of recognition and emotional response [74,75]. These studies observed that emotional response, rather than being tied to the detection of hard-wired perceptual features, depends on the semantic recognition of the content of a scene.
Consistent with previous studies, here we did not observe a temporal precedence in the processing of negative compared with positive contents [1]. On the contrary, using a broad range of positive and negative affective contents, we observed that decoding of negative vs. neutral conditions could be performed only later in time, compared with positive vs. neutral decoding. Therefore, these data do not support the view that the processing of negative stimuli is temporally prioritized [76,77].

Limitations and Future Directions

Despite the robustness of the decoding results, some methodological constraints should be acknowledged. First, we used an LDA classifier to separate emotional from neutral EEG signals after the viewing of emotional vs. neutral scenes [7]. However, several other algorithms can be applied to emotion recognition, including support vector machines (SVMs), K-Nearest Neighbors (KNNs), Random Forests (RFs), Naive Bayes (NB), and convolutional neural networks (CNNs). While the present aim was not to compare different algorithms, but rather to understand whether the properties of EEG emotional modulations generalize across conditions, it is possible that other approaches yield classification results that are perceptually specific rather than generalized. Future studies may therefore adopt different methods to investigate cross-condition decoding or develop new decoding strategies for this purpose. Moreover, the relatively small sample size (n = 15) and the limited number of trials per condition (60 for neutral, pleasant, and unpleasant) may have reduced the statistical power to detect more subtle effects; however, other databases in the field of affective computing have a similar number of participants (e.g., SEED, DEAP, DREAMER) and a smaller number of stimuli. Moreover, in the cross-decoding analysis, the dataset was further split in half depending on the perceptual condition (e.g., grayscale vs. color), potentially reducing the sensitivity to detect subtle effects for the unpleasant vs. neutral condition in the earlier time window. One issue for future research and application concerns revealing which type of information allows for efficient decoding. In EEG decoding, all information that is recorded on the scalp is used, including ERPs and oscillations, but also artifactual signals such as muscular activity and blinks, and classification algorithms are notoriously “greedy,” i.e., they tend to exploit any available source of information to distinguish between conditions [78]. Therefore, it is important to understand which information is most relevant for classification, and to enhance it through specific experimental designs (e.g., designs that limit muscular or ocular artifacts, as in the present research) or through specific analysis routines. Finally, future studies may supplement EEG decoding with the decoding of emotional responses from other measures, e.g., those modulated by the peripheral nervous system, such as skin conductance or heart rate changes [3,79].

5. Conclusions

Here, decoding was successfully employed to distinguish EEG patterns that followed the viewing of emotional and neutral pictures, and these patterns were similar enough across conditions (colormap and exposure time) to allow for perceptual transfer of decoding. Altogether, the success in decoding the affective valence of a condition (pleasant or unpleasant, compared with neutral) indicates that unfiltered single-trial EEG data contain sufficient information to characterize affective processing, and supports the finding that the symbolic meaning of natural scenes, rather than their perceptual features, guided EEG affective modulation. Moreover, the present results indicate that the perceptual dimensions investigated (color and exposure time) allow for generalization, and that this generalization is more evident for the later vs. earlier time interval examined, and for the viewing of pleasant vs. neutral compared with unpleasant vs. neutral scenes. As such, classification of EEG patterns in response to the viewing of emotional natural scenes can be flexibly applied beyond the conditions (here, color and exposure time) in which learning took place.

Author Contributions

Conceptualization: A.D.C., M.C. and V.F.; Methodology: A.D.C., M.C., V.F. and A.B.; Software: A.D.C. and A.B.; Validation: A.D.C., M.C., V.F. and A.B.; Formal Analysis: A.D.C. and A.B.; Investigation: A.D.C., M.C. and V.F.; Resources: A.D.C., M.C. and V.F.; Data Curation: A.D.C. and A.B.; Writing—Original Draft Preparation: A.D.C., M.C., V.F. and A.B.; Writing—review and Editing: A.D.C., M.C., V.F. and A.B.; Visualization: A.D.C., M.C., V.F. and A.B.; Supervision: A.D.C., M.C. and V.F.; Project Administration: A.D.C., M.C. and V.F.; Funding acquisition: A.D.C., M.C. and V.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Department of Psychology (protocol code 345, date of approval 21 July 2010).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Code for all the analyses carried out in the present manuscript has been published at https://osf.io/t9y2f/?view_only=a93d3fc165204d3785be1490ee1b222c, accessed on 5 February 2026. Experimental data will be made available upon request from the Corresponding Author.

Acknowledgments

The authors are grateful to all participants who took part in the study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MVPA: Multivariate Pattern Analysis
ERPs: Event-Related Potentials
EPN: Early Posterior Negativity
LPP: Late Positive Potential
EEG: Electroencephalography
fMRI: Functional Magnetic Resonance Imaging
SSVEP: Steady-State Visual Evoked Potentials
IAPS: International Affective Picture System
CRT: Cathode-Ray Tube
SAM: Self-Assessment Manikin
CMS: Common Mode Sense
DRL: Driven Right Leg
ADAM: Amsterdam Decoding and Modeling
M/EEG: Magneto/Electroencephalography
LDA: Linear Discriminant Analysis
CtoG: Color to Grayscale
GtoC: Grayscale to Color
LtoS: Long to Short
StoL: Short to Long
AUC: Area Under the Curve
BCI: Brain–Computer Interface
NHST: Null Hypothesis Significance Testing
SVM: Support Vector Machine
KNN: K-Nearest Neighbors
RF: Random Forest
NB: Naive Bayes
CNN: Convolutional Neural Network

Appendix A. Analysis Procedure to Guard Against Type I Error

Given the cross-validation partitioning used in affective condition decoding and the variability it introduces across iterations, we implemented a 3-step pipeline to guard against Type I error (Figure A1). (1) The decoding analysis was repeated 30 times, with each iteration following the same trial randomization structure across all individual subjects. Then, the Global Mean performance was computed across the 30 iterations, reflecting the average degree of discriminability between the two conditions. (2) For each of the 30 iterations and each participant, the Global Mean was subtracted, resulting in 30 differentials for all time points. (3) Finally, these differentials were compared to zero with a one-sample t-test in the time window of interest (from 150 to 800 ms), and the iteration with the smallest effect size (Cohen’s d) was considered the most similar to the Global Mean and consequently the most representative. Overall, this procedure allows for the objective selection of the iteration that best represents the average classification performance.
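A compact sketch of this three-step selection, assuming synthetic AUC values in place of the real decoding output and window indices derived from the epoch and 64 Hz sampling rate described in Section 2.4:

```python
import numpy as np

rng = np.random.default_rng(5)
n_iter, n_subj, n_times = 30, 15, 96
auc = 0.55 + rng.normal(0, 0.03, size=(n_iter, n_subj, n_times))  # stand-in AUCs

global_mean = auc.mean(axis=0)                # (1) average over the 30 iterations
diff = auc - global_mean                      # (2) differential per iteration

# (3) effect size (Cohen's d) of each iteration's differential against zero,
# within ~150-800 ms (samples 42-83 for a -500..+1000 ms epoch at 64 Hz).
window = slice(42, 83)
d = np.empty(n_iter)
for i in range(n_iter):
    x = diff[i, :, window].mean(axis=1)       # subject-wise mean differential
    d[i] = x.mean() / x.std(ddof=1)

best = int(np.argmin(np.abs(d)))              # smallest |d| = most representative
print(f"most representative iteration: {best}")
```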
Figure A1. Analysis procedure to guard against type I error. The first row shows individual decoding iterations and the Global Mean, the second row shows differentials obtained by subtracting the Global Mean from each iteration, and the third row shows the corresponding effect sizes. The green line at the top of the diagonal decoding graphs shows the time window of interest analyzed (from 150 to 800 ms). Dashed lines represent chance performance (horizontal) and stimulus onset (vertical). Overall, the figure shows that Iteration 1 is the most representative, as it is closest to the Global Mean, yielding the smallest differential and, consequently, the smallest effect size.

Appendix B. Temporal Generalization Matrices

The temporal dynamics of neural representations evoked by emotional scenes were assessed using temporal generalization [80]. In this approach, a classifier is trained at a given time point (e.g., 400 ms) and tested at a different time point (e.g., 450 ms). Significant decoding performance across time points indicates that neural representations are similar between training and testing, suggesting representational stability. Conversely, if neural representations are highly dynamic and change rapidly over time, decoding performance will not generalize to later time points and will therefore not reach significance.
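The following sketch illustrates how such a temporal generalization matrix can be computed (synthetic data; the real analysis used ADAM, and the trial counts, channel count, and number of time points here are arbitrary):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n_train, n_test, ch, T = 120, 60, 64, 24
X_tr = rng.normal(size=(n_train, ch, T)); y_tr = np.repeat([0, 1], n_train // 2)
X_te = rng.normal(size=(n_test, ch, T));  y_te = np.repeat([0, 1], n_test // 2)

gen = np.zeros((T, T))                 # rows: training time, columns: testing time
for t_train in range(T):
    clf = LinearDiscriminantAnalysis().fit(X_tr[:, :, t_train], y_tr)
    for t_test in range(T):
        scores = clf.decision_function(X_te[:, :, t_test])
        gen[t_train, t_test] = roc_auc_score(y_te, scores)

# Above-chance cells far from the diagonal indicate representations that remain
# stable over time; a diagonal-only pattern indicates rapidly changing ones.
```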
Temporal dynamics of neural representations were tested using temporal generalization matrices (Figure A2). For affective condition decoding, pleasant vs. neutral showed temporal generalization from 150 ms until 1000 ms, i.e., the end of the analyzed epoch (p < 0.0001), therefore covering both time windows of interest (early: 93% of significant timepoints, late: 100% of significant timepoints). Unpleasant vs. neutral showed temporal generalization from 210 ms until 1000 ms (p = 0.0009); specifically, the early time window was only partially covered (33%), unlike the later one (92%). Overall, similar to diagonal decoding, unpleasant vs. neutral scenes elicited affective representations that were much less stable in the early compared with the late time window, and less stable in the early time interval compared to pleasant vs. neutral scenes. Decoding of both pleasant and unpleasant scenes showed high temporal generalization in the late window.
Moving to cross-decoding from color to grayscale (CtoG), pleasant vs. neutral showed significant temporal generalization from 150 ms to 1000 ms (p < 0.0001), covering both time windows (early: 98% of significant timepoints, late: 100% of significant timepoints). On the other hand, significant cross-decoding of unpleasant vs. neutral began at 400 ms and finished at 1000 ms (p = 0.0066), without covering the entirety of the later time window (early: 0%, late: 49%). A consistent result was observed for cross-decoding from grayscale to color (GtoC), with pleasant vs. neutral yielding significant temporal generalization from 150 ms to 1000 ms (p < 0.0001) in both time windows (early: 94%, late: 100%), whereas unpleasant vs. neutral elicited a sustained representation from 480 ms until 1000 ms (p = 0.011), partially covering the late time window (early: 0%, late: 54%).
Finally, cross-decoding from long to short (LtoS) for pleasant vs. neutral showed significant temporal generalization from 130 ms to 1000 ms (p < 0.0001) in both time windows (early: 93%, late: 98%), whereas unpleasant vs. neutral yielded a significant result from 400 ms to 1000 ms (p = 0.01), partially covering the late time window (early: 0%, late: 56%). Similarly, cross-decoding from short to long exposure time (StoL) indicated significant temporal generalization for pleasant vs. neutral from 150 ms to 1000 ms (p < 0.0001), covering both time windows (early: 90%, late: 100%). Unpleasant vs. neutral yielded significant generalization from 300 ms to 1000 ms (p = 0.0015), covering most of the late time window (early: 0%, late: 66%).
Figure A2. Temporal generalization matrices showing the dynamics of neural representations over time. Top: pleasant vs. neutral; bottom: unpleasant vs. neutral. Bold bordeaux contours enclose time points with significant classification after cluster correction at p < 0.05 (two-tailed t-tests against the chance level). The green squares mark the two time windows of interest. The y-axis represents training time, and the x-axis represents testing time.

References

1. Bo, K.; Yin, S.; Liu, Y.; Hu, Z.; Meyyappan, S.; Kim, S.; Keil, A.; Ding, M. Decoding Neural Representations of Affective Scenes in Retinotopic Visual Cortex. Cereb. Cortex 2021, 31, 3047–3063.
2. Bo, K.; Cui, L.; Yin, S.; Hu, Z.; Hong, X.; Kim, S.; Keil, A.; Ding, M. Decoding the Temporal Dynamics of Affective Scene Processing. NeuroImage 2022, 261, 119532.
3. Bradley, M.M.; Lang, P.J. Emotion and Motivation. In Handbook of Psychophysiology; Cacioppo, J.T., Tassinary, L.G., Berntson, G., Eds.; Cambridge University Press: Cambridge, UK, 2007; pp. 581–607.
4. Arnold, M.B. Feelings and Emotions; Academic Press: New York, NY, USA, 1970.
5. Lang, P.J. Image as Action: A Reply to Watts and Blackstock. Cogn. Emot. 1987, 1, 407–426.
6. Frijda, N.H. The Emotions; Studies in Emotion and Social Interaction; Cambridge University Press: Cambridge, UK, 1986.
7. Chen, J.; Cui, Y.; Wei, C.; Polat, K.; Alenezi, F. Advances in EEG-Based Emotion Recognition: Challenges, Methodologies, and Future Directions. Appl. Soft Comput. 2025, 180, 113478.
8. Zheng, W.-L.; Lu, B.-L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
9. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31.
10. Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals From Wireless Low-Cost Off-the-Shelf Devices. IEEE J. Biomed. Health Inform. 2018, 22, 98–107.
11. Jafari, M.; Shoeibi, A.; Khodatars, M.; Bagherzadeh, S.; Shalbaf, A.; García, D.L.; Gorriz, J.M.; Acharya, U.R. Emotion Recognition in EEG Signals Using Deep Learning Methods: A Review. Comput. Biol. Med. 2023, 165, 107450.
12. Shen, F.; Dai, G.; Lin, G.; Zhang, J.; Kong, W.; Zeng, H. EEG-Based Emotion Recognition Using 4D Convolutional Recurrent Neural Network. Cogn. Neurodyn. 2020, 14, 815–828.
13. Xiao, G.; Shi, M.; Ye, M.; Xu, B.; Chen, Z.; Ren, Q. 4D Attention-Based Neural Network for EEG Emotion Recognition. Cogn. Neurodyn. 2022, 16, 805–818.
14. Ju, X.; Li, M.; Tian, W.; Hu, D. EEG-Based Emotion Recognition Using a Temporal-Difference Minimizing Neural Network. Cogn. Neurodyn. 2024, 18, 405–416.
15. Ju, X.; Wu, X.; Dai, S.; Li, M.; Hu, D. Domain Adversarial Learning with Multiple Adversarial Tasks for EEG Emotion Recognition. Expert Syst. Appl. 2025, 266, 126028.
16. Dai, S.; Li, M.; Wu, X.; Ju, X.; Li, X.; Yang, J.; Hu, D. Contrastive Learning of EEG Representation of Brain Area for Emotion Recognition. IEEE Trans. Instrum. Meas. 2025, 74, 2506913.
17. Chen, J.; Fan, F.; Wei, C.; Polat, K.; Alenezi, F. Decoding Driving States Based on Normalized Mutual Information Features and Hyperparameter Self-Optimized Gaussian Kernel-Based Radial Basis Function Extreme Learning Machine. Chaos Solitons Fractals 2025, 199, 116751.
18. Chen, J.; Cui, Y.; Wei, C.; Polat, K.; Alenezi, F. Driver Fatigue Detection Using EEG-Based Graph Attention Convolutional Neural Networks: An End-to-End Learning Approach with Mutual Information-Driven Connectivity. Appl. Soft Comput. 2026, 186, 114097.
19. Herrmann, C.S.; Grigutsch, M.; Busch, N.A. EEG Oscillations and Wavelet Analysis. In Event-Related Potentials: A Methods Handbook; Handy, T.C., Ed.; MIT Press: Cambridge, MA, USA, 2005; pp. 229–259.
20. Junghöfer, M.; Bradley, M.M.; Elbert, T.R.; Lang, P.J. Fleeting Images: A New Look at Early Emotion Discrimination. Psychophysiology 2001, 38, 175–178.
21. Schupp, H.T.; Junghöfer, M.; Weike, A.I.; Hamm, A.O. Attention and Emotion: An ERP Analysis of Facilitated Emotional Stimulus Processing. NeuroReport 2003, 14, 1107–1110.
22. Schupp, H.T.; Junghöfer, M.; Weike, A.I.; Hamm, A.O. Emotional Facilitation of Sensory Processing in the Visual Cortex. Psychol. Sci. 2003, 14, 7–13.
23. Codispoti, M.; Ferrari, V.; De Cesarei, A.; Cardinale, R. Implicit and Explicit Categorization of Natural Scenes. In Progress in Brain Research; Elsevier: Amsterdam, The Netherlands, 2006; Volume 156, pp. 53–65.
24. Cuthbert, B.N.; Schupp, H.T.; Bradley, M.M.; Birbaumer, N.; Lang, P.J. Brain Potentials in Affective Picture Processing: Covariation with Autonomic Arousal and Affective Report. Biol. Psychol. 2000, 52, 95–111.
25. Johnston, V.S.; Miller, D.R.; Burleson, M.H. Multiple P3s to Emotional Stimuli and Their Theoretical Significance. Psychophysiology 1986, 23, 684–694.
26. Keil, A.; Bradley, M.M.; Hauk, O.; Rockstroh, B.; Elbert, T.; Lang, P.J. Large-Scale Neural Correlates of Affective Picture Processing. Psychophysiology 2002, 39, 641–649.
27. Radilová, J. The Late Positive Component of Visual Evoked Response Sensitive to Emotional Factors. Act. Nerv. Super. 1982, 3, 334–337.
28. Bradley, M.M.; Hamby, S.; Löw, A.; Lang, P.J. Brain Potentials in Perception: Picture Complexity and Emotional Arousal. Psychophysiology 2007, 44, 364–373.
29. De Cesarei, A.; Codispoti, M. When Does Size Not Matter? Effects of Stimulus Size on Affective Modulation. Psychophysiology 2006, 43, 207–215.
30. Codispoti, M.; Ferrari, V.; Bradley, M.M. Repetition and Event-Related Potentials: Distinguishing Early and Late Processes in Affective Picture Perception. J. Cogn. Neurosci. 2007, 19, 577–586.
31. Ferrari, V.; Bradley, M.M.; Codispoti, M.; Lang, P.J. Repetitive Exposure: Brain and Reflex Measures of Emotion and Attention. Psychophysiology 2011, 48, 515–522.
32. Ferrari, V.; Codispoti, M.; Bradley, M.M. Repetition and ERPs during Emotional Scene Processing: A Selective Review. Int. J. Psychophysiol. 2017, 111, 170–177.
33. Bae, G.-Y.; Luck, S.J. Dissociable Decoding of Spatial Attention and Working Memory from EEG Oscillations and Sustained Potentials. J. Neurosci. 2018, 38, 409–422.
34. Bae, G.-Y.; Luck, S.J. Decoding Motion Direction Using the Topography of Sustained ERPs and Alpha Oscillations. NeuroImage 2019, 184, 242–255.
35. Grootswagers, T.; Wardle, S.G.; Carlson, T.A. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data. J. Cogn. Neurosci. 2017, 29, 677–697.
36. Yarkoni, T.; Westfall, J. Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspect. Psychol. Sci. 2017, 12, 1100–1122.
37. Fahrenfort, J.J.; Van Driel, J.; Van Gaal, S.; Olivers, C.N.L. From ERPs to MVPA Using the Amsterdam Decoding and Modeling Toolbox (ADAM). Front. Neurosci. 2018, 12, 368.
38. Norman, K.A.; Polyn, S.M.; Detre, G.J.; Haxby, J.V. Beyond Mind-Reading: Multi-Voxel Pattern Analysis of fMRI Data. Trends Cogn. Sci. 2006, 10, 424–430.
39. Dirani, J.; Pylkkänen, L. The Time Course of Cross-Modal Representations of Conceptual Categories. NeuroImage 2023, 277, 120254.
40. Qiu, Z.; Li, X.; Pegna, A.J. Decoding Neural Patterns for the Processing of Fearful Faces under Different Visual Awareness Conditions: A Multivariate Pattern Analysis. Psychophysiology 2023, 60, e14368.
41. Nie, L.; Ku, Y. Decoding Emotion from High-Frequency Steady State Visual Evoked Potential (SSVEP). J. Neurosci. Methods 2023, 395, 109919.
42. Codispoti, M.; De Cesarei, A.; Ferrari, V. The Influence of Color on Emotional Perception of Natural Scenes. Psychophysiology 2012, 49, 11–16.
43. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A Flexible Statistical Power Analysis Program for the Social, Behavioral, and Biomedical Sciences. Behav. Res. Methods 2007, 39, 175–191.
44. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1988.
45. Bradley, M.M.; Lang, P.J. International Affective Picture System. In Encyclopedia of Personality and Individual Differences; Zeigler-Hill, V., Shackelford, T.K., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 1–4.
46. Schneider, W.; Eschman, A.; Zuccolotto, A. E-Prime Reference Guide; Psychology Software Tools, Inc.: Pittsburgh, PA, USA, 2012.
47. Loftus, G.R.; Duncan, J.; Gehrig, P. On the Time Course of Perceptual Information That Results from a Brief Visual Presentation. J. Exp. Psychol. Hum. Percept. Perform. 1992, 18, 530–549.
48. Lang, P.J. Behavioral Treatment and Bio-Behavioral Assessment: Computer Applications. In Technology in Mental Health Care Delivery Systems; Sidowski, J.B., Johnson, J.H., Williams, T.A., Eds.; Ablex Pub. Corp.: Norwood, NJ, USA, 1980; pp. 119–137.
49. Carlson, T.A.; Grootswagers, T.; Robinson, A.K. An Introduction to Time-Resolved Decoding Analysis for M/EEG. arXiv 2019.
50. Li, Z.; Wang, J.; Chen, Y.; Li, Q.; Yin, S.; Chen, A. Attenuated Conflict Self-Referential Information Facilitating Conflict Resolution. npj Sci. Learn. 2024, 9, 47.
51. Vilidaite, G.; Marsh, E.; Baker, D.H. Internal Noise in Contrast Discrimination Propagates Forwards from Early Visual Cortex. NeuroImage 2019, 191, 503–517.
52. Van Driel, J.; Olivers, C.N.L.; Fahrenfort, J.J. High-Pass Filtering Artifacts in Multivariate Classification of Neural Time Series Data. J. Neurosci. Methods 2021, 352, 109080.
53. VanRullen, R.; Busch, N.A.; Drewes, J.; Dubois, J. Ongoing EEG Phase as a Trial-by-Trial Predictor of Perceptual and Attentional Variability. Front. Psychol. 2011, 2, 60.
54. Haufe, S.; Meinecke, F.; Görgen, K.; Dähne, S.; Haynes, J.-D.; Blankertz, B.; Bießmann, F. On the Interpretation of Weight Vectors of Linear Models in Multivariate Neuroimaging. NeuroImage 2014, 87, 96–110.
55. Bradley, A.P. The Use of the Area under the ROC Curve in the Evaluation of Machine Learning Algorithms. Pattern Recognit. 1997, 30, 1145–1159.
  56. Maris, E.; Oostenveld, R. Nonparametric Statistical Testing of EEG- and MEG-Data. J. Neurosci. Methods 2007, 164, 177–190. [Google Scholar] [CrossRef]
  57. Schupp, H.; Cuthbert, B.; Bradley, M.; Hillman, C.; Hamm, A.; Lang, P. Brain Processes in Emotional Perception: Motivated Attention. Cogn. Emot. 2004, 18, 593–611. [Google Scholar] [CrossRef]
  58. Hajcak, G.; Dunning, J.P.; Foti, D. Neural Response to Emotional Pictures Is Unaffected by Concurrent Task Difficulty: An Event-Related Potential Study. Behav. Neurosci. 2007, 121, 1156–1162. [Google Scholar] [CrossRef]
  59. Wangelin, B.C.; Löw, A.; McTeague, L.M.; Bradley, M.M.; Lang, P.J. Aversive Picture Processing: Effects of a Concurrent Task on Sustained Defensive System Engagement. Psychophysiology 2011, 48, 112–116. [Google Scholar] [CrossRef]
  60. Codispoti, M.; De Cesarei, A.; Ferrari, V. Alpha-band Oscillations and Emotion: A Review of Studies on Picture Perception. Psychophysiology 2023, 60, e14438. [Google Scholar] [CrossRef]
  61. Open Science Collaboration. Estimating the Reproducibility of Psychological Science. Science 2015, 349, aac4716. [Google Scholar] [CrossRef] [PubMed]
  62. Schupp, H.T.; Flösch, K.-P.; Flaisch, T. A Case-by-Case Analysis of EPN and LPP Components within a “One-Picture-per-Emotion-Category” Protocol. Psychophysiology 2025, 62, e14718. [Google Scholar] [CrossRef] [PubMed]
  63. Schupp, H.T.; Kirmse, U.M. Case-by-case: Emotional Stimulus Significance and the Modulation of the EPN and LPP. Psychophysiology 2021, 58, e13766. [Google Scholar] [CrossRef] [PubMed]
  64. Loftus, G.R. Null Hypothesis. In Encyclopedia of Research Design; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2010. [Google Scholar]
  65. Lang, P.J.; Greenwald, M.K.; Bradley, M.M.; Hamm, A.O. Looking at Pictures: Affective, Facial, Visceral, and Behavioral Reactions. Psychophysiology 1993, 30, 261–273. [Google Scholar] [CrossRef]
  66. Codispoti, M.; Micucci, A.; De Cesarei, A. Time Will Tell: Object Categorization and Emotional Engagement during Processing of Degraded Natural Scenes. Psychophysiology 2021, 58, e13704. [Google Scholar] [CrossRef]
  67. De Cesarei, A.; Codispoti, M. Scene Identification and Emotional Response: Which Spatial Frequencies Are Critical? J. Neurosci. 2011, 31, 17052–17057. [Google Scholar] [CrossRef]
  68. Cano, M.E.; Class, Q.A.; Polich, J. Affective Valence, Stimulus Attributes, and P300: Color vs. Black/White and Normal vs. Scrambled Images. Int. J. Psychophysiol. 2009, 71, 17–24. [Google Scholar] [CrossRef]
  69. Bekhtereva, V.; Müller, M.M. Bringing Color to Emotion: The Influence of Color on Attentional Bias to Briefly Presented Emotional Images. Cogn. Affect. Behav. Neurosci. 2017, 17, 1028–1047. [Google Scholar] [CrossRef]
  70. Kuniecki, M.; Pilarczyk, J.; Wichary, S. The Color Red Attracts Attention in an Emotional Context. An ERP Study. Front. Hum. Neurosci. 2015, 9, 212. [Google Scholar] [CrossRef]
  71. De Cesarei, A.; Loftus, G.R.; Mastria, S.; Codispoti, M. Understanding Natural Scenes: Contributions of Image Statistics. Neurosci. Biobehav. Rev. 2017, 74, 44–57. [Google Scholar] [CrossRef]
  72. De Cesarei, A.; Codispoti, M. Spatial Frequencies and Emotional Perception. Rev. Neurosci. 2013, 24, 89–104. [Google Scholar] [CrossRef] [PubMed]
  73. Mastria, S.; Codispoti, M.; Tronelli, V.; De Cesarei, A. Subjective Affective Responses to Natural Scenes Require Understanding, Not Spatial Frequency Bands. Vision 2024, 8, 36. [Google Scholar] [CrossRef] [PubMed]
  74. Reisenzein, R.; Franikowski, P. On the Latency of Object Recognition and Affect: Evidence from Temporal Order and Simultaneity Judgments. J. Exp. Psychol. Gen. 2022, 151, 3060–3081. [Google Scholar] [CrossRef]
  75. Storbeck, J.; Robinson, M.D.; McCourt, M.E. Semantic Processing Precedes Affect Retrieval: The Neurological Case for Cognitive Primacy in Visual Processing. Rev. Gen. Psychol. 2006, 10, 41–55. [Google Scholar] [CrossRef]
  76. Baumeister, R.F.; Bratslavsky, E.; Finkenauer, C.; Vohs, K.D. Bad Is Stronger than Good. Rev. Gen. Psychol. 2001, 5, 323–370. [Google Scholar] [CrossRef]
  77. Poncet, E.; Nicolas, G.; Guyader, N.; Moro, E.; Campagne, A. Spatio-Temporal Attention toward Emotional Scenes across Adulthood. Emotion 2023, 23, 1726–1739. [Google Scholar] [CrossRef]
  78. Ritchie, J.B.; Kaplan, D.M.; Klein, C. Decoding the Brain: Neural Representation and the Limits of Multivariate Pattern Analysis in Cognitive Neuroscience. Br. J. Philos. Sci. 2019, 70, 581–607. [Google Scholar] [CrossRef]
  79. Marruthachalam, R.; Amudha, P.; Sivakumari, S. The Science of Emotion: Decoding and Analysis of Human Emotional Landscape. In Affective Computing for Social Good; Garg, M., Prasad, R.S., Eds.; The Springer Series in Applied Machine Learning; Springer Nature: Cham, Switzerland, 2024; pp. 1–20. [Google Scholar]
  80. King, J.-R.; Dehaene, S. Characterizing the Dynamics of Mental Representations: The Temporal Generalization Method. Trends Cogn. Sci. 2014, 18, 203–210. [Google Scholar] [CrossRef]
Figure 1. Schematic illustration of the analysis pipeline. The images displayed are not those used in the experiment and are provided for illustrative purposes only. (A) Flowchart of experimental recording and data analysis. (B) Affective condition decoding for pleasant vs. neutral conditions. All experimental conditions were included in the training set, comprising both greyscale and color images presented either briefly (24 ms) or for a longer duration (6 s). The same procedure was applied for unpleasant vs. neutral conditions. (C) Cross-decoding across colormap and exposure duration. In the color-to-greyscale (CtoG) analysis, only color images were used for training, while greyscale images were used for testing (the reversed training-testing partitioning was applied for greyscale-to-color, GtoC). In the long-to-short (LtoS) analysis, images presented for a longer duration (6 s) were used for training, and images presented briefly (24 ms) were used for testing (the reversed training-testing partitioning was applied for short-to-long, StoL).
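To make the partitioning scheme in panel C concrete, the following is a minimal Python sketch of time-resolved cross-decoding on synthetic data. The array shapes, variable names, and the choice of a linear discriminant classifier are illustrative assumptions, not the study's exact pipeline.

```python
# Hedged sketch of the color-to-greyscale (CtoG) cross-decoding in
# Figure 1C; synthetic data stand in for real EEG epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic single-subject data: (trials, channels, time points).
n_trials, n_channels, n_times = 200, 32, 100
X_color = rng.standard_normal((n_trials, n_channels, n_times))
X_grey = rng.standard_normal((n_trials, n_channels, n_times))
y_color = rng.integers(0, 2, n_trials)  # 1 = pleasant, 0 = neutral
y_grey = rng.integers(0, 2, n_trials)

# CtoG: train on color trials, test on greyscale trials, separately
# at each time point; performance is scored as AUC.
auc_ctog = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X_color[:, :, t], y_color)
    auc_ctog[t] = roc_auc_score(y_grey,
                                clf.decision_function(X_grey[:, :, t]))

# GtoC swaps the training and test sets; LtoS and StoL apply the same
# logic across the two exposure durations (6 s vs. 24 ms).
```

With real data, the resulting AUC time course (one per subject) is what the figures below plot and test against chance (0.5).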
Figure 2. Affective condition decoding. (A) Area under curve (AUC) corresponding to the diagonal decoding for pleasant vs. neutral (blue); unpleasant vs. neutral (red). Shaded areas represent the standard error of the mean, and thick bars indicate significant classification after cluster correction. Green bars at the top and the corresponding shaded grey areas below denote the two time windows of interest (150–300 ms and 400–800 ms). Dashed lines represent chance performance (horizontal) and stimulus onset (vertical). (B) Topographical maps of classifier weights for pleasant vs. neutral and unpleasant vs. neutral classification in the early and late time windows. Black dots represent significant electrodes after cluster correction.
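The cluster-corrected significance marks in this and the following figures follow the nonparametric approach of Maris and Oostenveld [56]. Below is a hedged sketch of a one-sided, sign-flip cluster permutation test on subject-level AUC time courses; the threshold, number of permutations, and function names are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch of cluster-based permutation testing of decoding
# performance against chance, after Maris & Oostenveld [56].
import numpy as np
from scipy import stats

def cluster_perm_test(auc, chance=0.5, n_perm=1000, alpha=0.05, seed=0):
    """auc: (n_subjects, n_times) AUC scores. Returns a boolean mask
    marking time points inside significant above-chance clusters."""
    rng = np.random.default_rng(seed)
    diffs = auc - chance
    n_sub, n_times = diffs.shape
    t_crit = stats.t.ppf(1 - alpha, df=n_sub - 1)

    def cluster_masses(d):
        # One-sample t-statistic at each time point.
        t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_sub))
        supra = t > t_crit  # one-sided: above chance only
        clusters, masses, start = [], [], None
        for i, s in enumerate(np.append(supra, False)):
            if s and start is None:
                start = i
            elif not s and start is not None:
                clusters.append((start, i))
                masses.append(t[start:i].sum())
                start = None
        return clusters, masses

    clusters, masses = cluster_masses(diffs)

    # Null distribution of the maximum cluster mass under random
    # sign flips of whole subjects.
    null = np.zeros(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        _, perm_masses = cluster_masses(diffs * flips)
        null[p] = max(perm_masses) if perm_masses else 0.0

    mask = np.zeros(n_times, dtype=bool)
    for (a, b), m in zip(clusters, masses):
        if (null >= m).mean() < alpha:  # cluster-level p-value
            mask[a:b] = True
    return mask
```

Because the test statistic is the maximum cluster mass over the whole epoch, this procedure controls the family-wise error rate across time points without a per-sample Bonferroni penalty.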
Figure 3. Cross-Decoding. (A) Area under curve (AUC) corresponding to the decoding of affective representations, pleasant vs. neutral (blue) and unpleasant vs. neutral (red), for color generalization (left: CtoG, GtoC) and for time generalization (right: LtoS, StoL). Shaded areas represent the standard error of the mean, and thick bars indicate significant classification after cluster correction. Green bars at the top and the corresponding shaded grey areas below denote the two time windows of interest (150–300 ms and 400–800 ms). Dashed lines represent chance performance (horizontal) and stimulus onset (vertical). (B) Topographical maps of classifier weights for pleasant vs. neutral and unpleasant vs. neutral classification in the early and late time windows. Black dots represent significant electrodes after cluster correction.
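A note on the weight topographies in panel B: the weights of a linear classifier are extraction filters and cannot, on their own, be read as neural source topographies. Weight maps are therefore often complemented by the activation-pattern transformation of Haufe et al. [54]; a minimal sketch is given below, assuming data at a single time point, with illustrative function and variable names.

```python
# Hedged sketch of the weight-to-pattern transformation of
# Haufe et al. [54] for a single linear decoding filter.
import numpy as np

def activation_pattern(X, w):
    """Transform a linear decoding filter into an activation pattern:
    a is proportional to Cov(X) @ w and, unlike the raw weights w,
    can be interpreted as the scalp topography of the signal that
    the classifier exploits.

    X : (n_trials, n_channels) EEG data at one time point
    w : (n_channels,) classifier weight vector
    """
    Xc = X - X.mean(axis=0)            # center each channel
    cov = (Xc.T @ Xc) / (len(X) - 1)   # channel covariance
    return cov @ w                     # pattern, up to scaling
```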
Figure 4. Cross-Decoding across color (left) and exposure time (right). In each plot, the Area Under Curve (AUC) corresponding to the decoding of affective representations, pleasant vs. neutral (blue) and unpleasant vs. neutral (red), is shown. Shaded areas represent the standard error of the mean, and thick bars indicate significant classification after cluster correction. Green bars at the top and the corresponding shaded grey areas below denote the two time windows of interest (150–300 ms and 400–800 ms). Dashed lines represent chance performance (horizontal) and stimulus onset (vertical).