Search Results (11)

Search Parameters:
Keywords = crossmodal congruency

30 pages, 16051 KB  
Article
Research on fMRI Image Generation from EEG Signals Based on Diffusion Models
by Xiaoming Sun, Yutong Sun, Junxia Chen, Bochao Su, Tuo Nie and Ke Shui
Electronics 2025, 14(22), 4432; https://doi.org/10.3390/electronics14224432 - 13 Nov 2025
Abstract
Amid rapid advances in intelligent medicine, decoding brain activity from electroencephalogram (EEG) signals has emerged as a critical technical frontier for brain–computer interfaces and medical AI systems. Given the inherent spatial resolution limitations of EEG, researchers frequently integrate functional magnetic resonance imaging (fMRI) to enhance neural activity representation. However, fMRI acquisition is inherently complex. Consequently, efforts increasingly focus on cross-modal transformation methods that map EEG signals to fMRI data, thereby extending EEG applications in neural mechanism studies. The central challenge remains generating high-fidelity fMRI images from EEG signals. To address this, we propose a diffusion model-based framework for cross-modal EEG-to-fMRI generation. To address pronounced noise contamination in EEG signals acquired via simultaneous recording systems and temporal misalignments between EEG and fMRI, we first apply Fourier transforms to EEG signals and perform dimensionality expansion. This constructs a spatiotemporally aligned EEG–fMRI paired dataset. Building on this foundation, we design an EEG encoder integrating a multi-layer recursive spectral attention mechanism with a residual architecture. In response to the limited dynamic mapping capabilities and suboptimal image quality prevalent in existing cross-modal generation research, we propose a diffusion-model-driven EEG-to-fMRI generation algorithm. This framework unifies the EEG feature encoder and a cross-modal interaction module within an end-to-end denoising U-Net architecture. By leveraging the diffusion process, EEG-derived features serve as conditional priors to guide fMRI reconstruction, enabling high-fidelity cross-modal image generation. Empirical evaluations on the resting-state NODDI dataset and the task-based XP-2 dataset demonstrate that our EEG encoder significantly enhances cross-modal representational congruence, providing robust semantic features for fMRI synthesis. Furthermore, the proposed cross-modal generative model achieves marked improvements in structural similarity, root mean square error, and peak signal-to-noise ratio in generated fMRI images, effectively resolving the nonlinear mapping challenge inherent in EEG–fMRI data. Full article
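The pipeline summarized in this abstract, EEG-derived features used as conditional priors inside a denoising U-Net, follows the standard conditional-diffusion recipe. As a minimal, purely illustrative sketch of that recipe (not the authors' implementation), the PyTorch snippet below runs one DDPM-style training step with a toy MLP standing in for both the EEG encoder output and the denoising network; all dimensions, the noise schedule, and the conditioning scheme are assumptions.

```python
# Minimal sketch of EEG-conditioned diffusion training (illustrative only;
# the toy MLP stands in for the paper's EEG encoder and denoising U-Net).
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                    # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class ToyDenoiser(nn.Module):
    """Predicts the added noise from a noisy fMRI vector, the timestep,
    and an EEG conditioning embedding (all flattened for simplicity)."""
    def __init__(self, fmri_dim=1024, eeg_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fmri_dim + eeg_dim + 1, 512), nn.SiLU(),
            nn.Linear(512, 512), nn.SiLU(),
            nn.Linear(512, fmri_dim),
        )
    def forward(self, x_t, t, eeg_feat):
        t_emb = (t.float() / T).unsqueeze(-1)           # crude timestep embedding
        return self.net(torch.cat([x_t, eeg_feat, t_emb], dim=-1))

def training_step(model, x0, eeg_feat):
    """One denoising step: add noise at a random t, then predict it back
    conditioned on the EEG features (the 'conditional prior' idea)."""
    b = x0.size(0)
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise    # forward diffusion
    return F.mse_loss(model(x_t, t, eeg_feat), noise)

model = ToyDenoiser()
loss = training_step(model, torch.randn(8, 1024), torch.randn(8, 128))
```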

19 pages, 2318 KB  
Article
Modulating Multisensory Processing: Interactions Between Semantic Congruence and Temporal Synchrony
by Susan Geffen, Taylor Beck and Christopher W. Robinson
Vision 2025, 9(3), 74; https://doi.org/10.3390/vision9030074 - 1 Sep 2025
Viewed by 997
Abstract
Presenting information to multiple sensory modalities often facilitates or interferes with processing, yet the mechanisms remain unclear. Using a Stroop-like task, the two reported experiments examined how semantic congruency and incongruency in one sensory modality affect processing and responding in a different modality. Participants were presented with pictures and sounds simultaneously (Experiment 1) or asynchronously (Experiment 2) and had to respond whether the visual or auditory stimulus was an animal or vehicle, while ignoring the other modality. Semantic congruency and incongruency in the unattended modality both affected responses in the attended modality, with visual stimuli having larger effects on auditory processing than the reverse (Experiment 1). Effects of visual input on auditory processing decreased under longer SOAs, while effects of auditory input on visual processing increased over SOAs and were correlated with relative processing speed (Experiment 2). These results suggest that congruence and modality both impact multisensory processing. Full article
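The congruency-by-SOA pattern reported above is typically quantified as the reaction-time difference between incongruent and congruent trials, computed separately for each attended modality and SOA. A minimal pandas sketch of that computation on a hypothetical trial table (the column names and values are made up, not taken from the paper's data):

```python
# Sketch: mean congruency effect (incongruent RT minus congruent RT)
# per attended modality and SOA, on a hypothetical trial-level table.
import pandas as pd

trials = pd.DataFrame({
    "attended_modality": ["visual", "visual", "auditory", "auditory"] * 2,
    "soa_ms":            [0, 0, 0, 0, 300, 300, 300, 300],
    "congruent":         [True, False, True, False] * 2,
    "rt_ms":             [512, 548, 601, 660, 505, 520, 598, 640],
})

mean_rt = (trials
           .groupby(["attended_modality", "soa_ms", "congruent"])["rt_ms"]
           .mean()
           .unstack("congruent"))
congruency_effect = mean_rt[False] - mean_rt[True]   # incongruent - congruent
print(congruency_effect)
```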

13 pages, 5950 KB  
Article
Temporal Electroencephalography Traits Dissociating Tactile Information and Cross-Modal Congruence Effects
by Yusuke Ozawa and Natsue Yoshimura
Sensors 2024, 24(1), 45; https://doi.org/10.3390/s24010045 - 21 Dec 2023
Cited by 1 | Viewed by 2177
Abstract
To explore whether temporal electroencephalography (EEG) traits can dissociate the physical properties of touching objects and the congruence effects of cross-modal stimuli, we applied a machine learning approach to two major temporal domain EEG traits, event-related potential (ERP) and somatosensory evoked potential (SEP), for each anatomical brain region. During a task in which participants had to identify one of two material surfaces as a tactile stimulus, a photo image that matched (‘congruent’) or mismatched (‘incongruent’) the material they were touching was given as a visual stimulus. Electrical stimulation was applied to the median nerve of the right wrist to evoke SEP while the participants touched the material. The classification accuracies using ERP extracted in reference to the tactile/visual stimulus onsets were significantly higher than chance levels in several regions in both congruent and incongruent conditions, whereas SEP extracted in reference to the electrical stimulus onsets resulted in no significant classification accuracies. Further analysis based on current source signals estimated using EEG revealed brain regions showing significant accuracy across conditions, suggesting that tactile-based object recognition information is encoded in the temporal domain EEG trait and broader brain regions, including the premotor, parietal, and somatosensory areas. Full article
(This article belongs to the Special Issue Sensing Brain Activity Using EEG and Machine Learning)
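The region-wise decoding described above uses classification accuracy against chance as the test statistic. A generic cross-validated decoding sketch in scikit-learn is shown below; the feature matrix, labels, and choice of classifier (LDA) are placeholders rather than the authors' pipeline.

```python
# Sketch: decode touched material (2 classes) from ERP amplitude features
# with cross-validation; chance level is 0.5. Data are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 120, 64            # e.g., mean ERP amplitude per channel
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)     # material A vs. material B

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```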

19 pages, 1900 KB  
Article
Effect of Target Semantic Consistency in Different Sequence Positions and Processing Modes on T2 Recognition: Integration and Suppression Based on Cross-Modal Processing
by Haoping Yang, Chunlin Yue, Cenyi Wang, Aijun Wang, Zonghao Zhang and Li Luo
Brain Sci. 2023, 13(2), 340; https://doi.org/10.3390/brainsci13020340 - 16 Feb 2023
Cited by 2 | Viewed by 2004
Abstract
In the rapid serial visual presentation (RSVP) paradigm, sound affects participants’ recognition of targets. Although many studies have shown that sound improves cross-modal processing, researchers have not yet explored the effects of sound semantic information with respect to different locations and processing modalities after removing sound saliency. In this study, the RSVP paradigm was used to investigate the difference between attention under conditions of consistent and inconsistent semantics with the target (Experiment 1), as well as the difference between top-down (Experiment 2) and bottom-up processing (Experiment 3) for sounds with consistent semantics with target 2 (T2) at different sequence locations after removing sound saliency. The results showed that cross-modal processing significantly improved attentional blink (AB). The early or lagged appearance of sounds consistent with T2 did not affect participants’ judgments in the exogenous attentional modality. However, visual target judgments were improved with endogenous attention. The sequential location of sounds consistent with T2 influenced the judgment of auditory and visual congruency. The results illustrate the effects of sound semantic information in different locations and processing modalities. Full article
(This article belongs to the Section Neurolinguistics)
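Attentional-blink effects such as those studied here are conventionally measured as T2 accuracy conditioned on a correct T1 report, compared across lags. A minimal pandas sketch of that conditional-accuracy measure (the trial table and column names are hypothetical):

```python
# Sketch: T2|T1 accuracy by lag, the standard attentional-blink measure
# (trial table and column names are hypothetical, not from the paper).
import pandas as pd

trials = pd.DataFrame({
    "lag":        [2, 2, 2, 7, 7, 7] * 2,
    "t1_correct": [1, 1, 0, 1, 1, 1] * 2,
    "t2_correct": [0, 1, 1, 1, 1, 0] * 2,
})

# Condition T2 accuracy on correct T1 reports, then average per lag.
t2_given_t1 = (trials[trials["t1_correct"] == 1]
               .groupby("lag")["t2_correct"]
               .mean())
print(t2_given_t1)   # lower accuracy at short lags indicates the blink
```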

15 pages, 1852 KB  
Article
An Exploration of the Effects of Cross-Modal Tasks on Selective Attention
by Krithika Nambiar and Pranesh Bhargava
Behav. Sci. 2023, 13(1), 51; https://doi.org/10.3390/bs13010051 - 6 Jan 2023
Cited by 3 | Viewed by 3378
Abstract
Successful performance of a task relies on selectively attending to the target, while ignoring distractions. Studies on perceptual load theory (PLT), conducted involving independent tasks with visual and auditory modalities, have shown that if a task is low-load, distractors and the target are both processed. If the task is high-load, distractions are not processed. The current study expands these findings by considering the effect of cross-modality (target and distractor from separate modalities) and congruency (similarity of target and distractor) on selective attention, using a word-identification task. Parameters were analysed, including response time, accuracy rates, congruency of distractions, and subjective report of load. In contrast to past studies on PLT, the results of the current study show that modality (congruency of the distractors) had a significant effect and load had no effect on selective attention. This study demonstrates that subjective measurement of load is important when studying perceptual load and selective attention. Full article

17 pages, 2740 KB  
Article
Audiovisual Emotional Congruency Modulates the Stimulus-Driven Cross-Modal Spread of Attention
by Minran Chen, Song Zhao, Jiaqi Yu, Xuechen Leng, Mengdie Zhai, Chengzhi Feng and Wenfeng Feng
Brain Sci. 2022, 12(9), 1229; https://doi.org/10.3390/brainsci12091229 - 10 Sep 2022
Cited by 4 | Viewed by 2922
Abstract
It has been reported that attending to stimuli in visual modality can spread to task-irrelevant but synchronously presented stimuli in auditory modality, a phenomenon termed the cross-modal spread of attention, which could be either stimulus-driven or representation-driven depending on whether the visual constituent of an audiovisual object is further selected based on the object representation. The stimulus-driven spread of attention occurs whenever a task-irrelevant sound synchronizes with an attended visual stimulus, regardless of the cross-modal semantic congruency. The present study recorded event-related potentials (ERPs) to investigate whether the stimulus-driven cross-modal spread of attention could be modulated by audio-visual emotional congruency in a visual oddball task where emotion (positive/negative) was task-irrelevant. The results first demonstrated a prominent stimulus-driven spread of attention regardless of audio-visual emotional congruency by showing that for all audiovisual pairs, the extracted ERPs to the auditory constituents of audiovisual stimuli within the time window of 200–300 ms were significantly larger than ERPs to the same auditory stimuli delivered alone. However, the amplitude of this stimulus-driven auditory Nd component during 200–300 ms was significantly larger for emotionally incongruent than congruent audiovisual stimuli when their visual constituents’ emotional valences were negative. Moreover, the Nd was sustained during 300–400 ms only for the incongruent audiovisual stimuli with emotionally negative visual constituents. These findings suggest that although the occurrence of the stimulus-driven cross-modal spread of attention is independent of audio-visual emotional congruency, its magnitude is nevertheless modulated even when emotion is task-irrelevant. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)
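The Nd analysis above compares ERPs to the auditory constituents of audiovisual stimuli against ERPs to the same sounds presented alone, quantified as a mean amplitude over the 200–300 ms window. A minimal numpy sketch of such a difference-wave measure (the sampling rate, epoch limits, and data are assumptions, not the authors' recordings):

```python
# Sketch: mean amplitude of a difference wave (extracted auditory ERP minus
# auditory-alone ERP) in the 200-300 ms window. Data are random placeholders.
import numpy as np

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch from -100 to +500 ms
rng = np.random.default_rng(1)
erp_auditory_extracted = rng.normal(size=t.size)   # auditory part of the AV response
erp_auditory_alone     = rng.normal(size=t.size)   # same sounds presented alone

nd_wave = erp_auditory_extracted - erp_auditory_alone   # difference wave (the Nd)
window = (t >= 0.2) & (t < 0.3)                         # 200-300 ms
nd_mean_amplitude = nd_wave[window].mean()
print(f"Nd mean amplitude, 200-300 ms: {nd_mean_amplitude:.3f} (placeholder units)")
```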

18 pages, 302 KB  
Article
The One Thing You Need to Change Is Emotions: The Effect of Multi-Sensory Marketing on Consumer Behavior
by Moein Abdolmohamad Sagha, Nader Seyyedamiri, Pantea Foroudi and Morteza Akbari
Sustainability 2022, 14(4), 2334; https://doi.org/10.3390/su14042334 - 18 Feb 2022
Cited by 24 | Viewed by 27930
Abstract
Retailers are increasingly aware of the importance of store atmosphere on consumers’ emotions. The results of four experimental studies demonstrate that the sensory cues by which customers sense products and the amount of (in)congruency among the sensory stimuli of the products affect consumers’ emotions, willingness to purchase, and experience. In the presence of moderators such as colors, jingles, prices, and scent imagery, when facing sensory-rich experiential products (e.g., juice, coffee, hamburger, soda) with different sensory cues, consumers’ emotions, willingness to purchase, and experience depend on affective primacy and sensory congruency. The results (1) facilitate an improved consideration of the role of the interaction of sensory cues on customer emotions, (2) have consequences for outcomes linked with sensory congruency and affective primacy, and (3) help clarify possible incoherence in preceding studies on cross-modal outcomes in the setting of multi-sensory marketing. Full article
14 pages, 1669 KB  
Article
Crossmodal Semantic Congruence Interacts with Object Contextual Consistency in Complex Visual Scenes to Enhance Short-Term Memory Performance
by Erika Almadori, Serena Mastroberardino, Fabiano Botta, Riccardo Brunetti, Juan Lupiáñez, Charles Spence and Valerio Santangelo
Brain Sci. 2021, 11(9), 1206; https://doi.org/10.3390/brainsci11091206 - 13 Sep 2021
Cited by 12 | Viewed by 3764
Abstract
Object sounds can enhance the attentional selection and perceptual processing of semantically-related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects the post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or different spatial position as in the original scene. The participants judged the same vs. different position of the object and then provided a confidence judgment concerning the certainty of their response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimuli, but this effect depends on the semantic configuration of the visual scene. Full article
(This article belongs to the Section Sensory and Motor Neuroscience)

15 pages, 2641 KB  
Article
Hands Ahead in Mind and Motion: Active Inference in Peripersonal Hand Space
by Johannes Lohmann, Anna Belardinelli and Martin V. Butz
Vision 2019, 3(2), 15; https://doi.org/10.3390/vision3020015 - 18 Apr 2019
Cited by 13 | Viewed by 4548
Abstract
According to theories of anticipatory behavior control, actions are initiated by predicting their sensory outcomes. From the perspective of event-predictive cognition and active inference, predictive processes activate currently desired events and event boundaries, as well as the expected sensorimotor mappings necessary to realize them, dependent on the involved predicted uncertainties before actual motor control unfolds. Accordingly, we asked whether peripersonal hand space is remapped in an uncertainty anticipating manner while grasping and placing bottles in a virtual reality (VR) setup. To investigate, we combined the crossmodal congruency paradigm with virtual object interactions in two experiments. As expected, an anticipatory crossmodal congruency effect (aCCE) at the future finger position on the bottle was detected. Moreover, a manipulation of the visuo-motor mapping of the participants’ virtual hand while approaching the bottle selectively reduced the aCCE at movement onset. Our results support theories of event-predictive, anticipatory behavior control and active inference, showing that expected uncertainties in movement control indeed influence anticipatory stimulus processing. Full article
(This article belongs to the Special Issue Visual Control of Action)
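The (anticipatory) crossmodal congruency effect used in this paradigm is conventionally computed as the reaction-time cost of incongruent relative to congruent visuo-tactile pairings. A minimal sketch of that contrast on hypothetical reaction times (values are placeholders, not the study's data):

```python
# Sketch: crossmodal congruency effect (CCE) as the RT difference between
# incongruent and congruent visuo-tactile trials. Values are placeholders.
import numpy as np

rt_congruent   = np.array([432, 418, 455, 441, 429])   # ms, hypothetical trials
rt_incongruent = np.array([478, 465, 490, 472, 481])

cce = rt_incongruent.mean() - rt_congruent.mean()
print(f"CCE = {cce:.1f} ms")   # larger values indicate stronger visuo-tactile binding
```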

12 pages, 1648 KB  
Review
Does the Shape of the Drinking Receptacle Influence Taste/Flavour Perception? A Review
by Charles Spence and George Van Doorn
Beverages 2017, 3(3), 33; https://doi.org/10.3390/beverages3030033 - 6 Jul 2017
Cited by 31 | Viewed by 19062
Abstract
In this review, we summarize the latest evidence demonstrating that the shape and feel of the glassware (and other receptacles) that we drink from can influence our perception of the taste/flavour of the contents. Such results, traditionally obtained in the world of wine, have often been interpreted in terms of changes in physico-chemical properties (resulting from the retention, or release, of specific volatile aromatic molecules), or the differing ways in which the shape of the glassware funnels the flow of the liquid across the tongue. It is, however, not always clear that any such physico-chemical differences do, in fact, lead to perceptible differences. Others, meanwhile, have stressed the importance of cultural factors, and the perceived appropriateness, or congruency, of the receptacle to the drink, based on prior experience. Here, though, we argue that there is also a much more fundamental association at work between shape properties and taste/flavour. In particular, the suggestion is made that the shape properties of the drinking receptacle (e.g., whether it be more rounded or angular)—regardless of whether the receptacle is seen, felt, or both—can prime certain expectations in the mind of the drinker. And, based on the theory of crossmodal correspondence, this priming is thought to accentuate certain aspects of the tasting experience, likely as a result of a taster’s attention being focused on the attributes that have been subtly primed. Full article

12 pages, 485 KB  
Article
Cross-Modal Sensory Integration of Visual-Tactile Motion Information: Instrument Design and Human Psychophysics
by Yu-Cheng Pei, Ting-Yu Chang, Tsung-Chi Lee, Sudipta Saha, Hsin-Yi Lai, Manuel Gomez-Ramirez, Shih-Wei Chou and Alice M. K. Wong
Sensors 2013, 13(6), 7212-7223; https://doi.org/10.3390/s130607212 - 31 May 2013
Cited by 10 | Viewed by 7820
Abstract
Information obtained from multiple sensory modalities, such as vision and touch, is integrated to yield a holistic percept. As a haptic approach usually involves cross-modal sensory experiences, it is necessary to develop an apparatus that can characterize how a biological system integrates visual-tactile sensory information as well as how a robotic device infers object information emanating from both vision and touch. In the present study, we develop a novel visual-tactile cross-modal integration stimulator that consists of an LED panel to present visual stimuli and a tactile stimulator with three degrees of freedom that can present tactile motion stimuli with arbitrary motion direction, speed, and indentation depth in the skin. The apparatus can present cross-modal stimuli in which the spatial locations of visual and tactile stimulations are perfectly aligned. We presented visual-tactile stimuli in which the visual and tactile directions were either congruent or incongruent, and human observers reported the perceived visual direction of motion. Results showed that perceived direction of visual motion can be biased by the direction of tactile motion when visual signals are weakened. The results also showed that the visual-tactile motion integration follows the rule of temporal congruency of multi-modal inputs, a fundamental property known for cross-modal integration. Full article
(This article belongs to the Section Physical Sensors)
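A bias of perceived visual motion direction by tactile motion, as reported above, is typically quantified by fitting psychometric functions to the proportion of responses in one direction at each visual signal strength and comparing their midpoints across tactile conditions. A minimal scipy sketch of that fit (all values below are made-up placeholders, not the study's measurements):

```python
# Sketch: fit a logistic psychometric function to proportion-"rightward"
# responses; a horizontal shift between tactile conditions reflects the bias.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mu, sigma):
    """Cumulative logistic: probability of a 'rightward' report."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

visual_coherence = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])        # signed visual strength
p_right_congruent   = np.array([0.05, 0.20, 0.55, 0.85, 0.97])  # placeholder data
p_right_incongruent = np.array([0.03, 0.12, 0.35, 0.70, 0.93])

(mu_c, _), _ = curve_fit(logistic, visual_coherence, p_right_congruent,   p0=[0.0, 0.3])
(mu_i, _), _ = curve_fit(logistic, visual_coherence, p_right_incongruent, p0=[0.0, 0.3])
print(f"point-of-subjective-equality shift: {mu_i - mu_c:.2f}")
```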