Search Results (115)

Search Parameters:
Keywords = audiovisual perception

31 pages, 1754 KB  
Article
Effects of Acoustic and Visual Environmental Factors on Perceived Street Vitality in Historic Districts: A Case Study of Shangxiahang, Fuzhou
by Jiaqi Chen, Qiqi Zhang, Xinchen Li, Jiaying Weng, Yuxi Cao and Jing Ye
Buildings 2026, 16(9), 1712; https://doi.org/10.3390/buildings16091712 - 26 Apr 2026
Abstract
In historic districts, the audiovisual environment plays an important role in shaping both cultural expression and spatial experience. However, the influence of acoustic and visual environmental factors on perceived street vitality remains insufficiently understood. Taking the Shangxiahang Historic District in Fuzhou as a case study, this paper employs on-site sound pressure level measurements, panoramic visual data collection, questionnaire surveys, principal component analysis, correlation analysis, and multiple regression analysis to systematically examine the effects of acoustic and visual environmental factors on perceived street vitality. The results indicate that traditional cultural sounds and natural sounds have a significant positive impact on perceived street vitality, while construction noise and tour guide’s horn sound exhibit negative effects. Regarding the visual environment, street and alley spaces, traditional architecture, greenery, and the sky are all important factors in promoting perceived street vitality. Further regression analysis reveals that the perception rate of street and alley spaces has the strongest influence, followed by the perception rate of traditional architecture, the perceived frequency of folk activity sounds, preference for greenery, and the perception rate of the sky. These findings demonstrate that perceived street vitality in historic districts does not depend on a single environmental factor but rather arises from synergistic interaction between culturally meaningful acoustic cues and legible spatial forms. These results offer practical implications for multisensory design and vitality-oriented regeneration in historic districts.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
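The analysis pipeline this abstract describes (dimensionality reduction of correlated environmental indicators, then regressing perceived vitality on the components) can be sketched as follows. This is a generic illustration on synthetic data; the variable names and dimensions are assumptions, not the study's actual dataset or code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-site audiovisual indicators
# (e.g. perception rates of greenery, sky, traditional architecture).
X = rng.normal(size=(60, 9))
# Synthetic perceived-vitality score driven by two of the indicators.
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=60)

# Step 1: reduce the correlated indicators to principal components.
pca = PCA(n_components=4)
components = pca.fit_transform(X)

# Step 2: regress perceived vitality on the retained components.
model = LinearRegression().fit(components, y)
r2 = model.score(components, y)
```

The interpretation step in the paper (ranking factors by influence) would then read the regression coefficients back through the PCA loadings.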
23 pages, 3022 KB  
Article
Pedestrian Physiological Response Map Prediction Model for Street Audiovisual Environments Using LSTM Networks
by Jingwen Xing, Xuyuan He, Xinxin Li, Tianci Wang, Siqing Mao and Luyao Li
Buildings 2026, 16(9), 1648; https://doi.org/10.3390/buildings16091648 - 22 Apr 2026
Abstract
Existing studies of street-related emotional perception mainly rely on static scene evaluations, which cannot capture the cumulative effects of environmental exposure during continuous walking. To address this limitation, this study proposes a method for predicting pedestrian physiological responses in sequential audiovisual street environments. Four real-world walking routes were selected, with outbound and return directions treated as independent paths, yielding eight paths and 32 valid samples. EEG, ECG, sound pressure level, first-person video, and GPS data were synchronously collected to construct a 1 s multimodal time-series dataset. Pearson correlation, Kendall correlation, and mutual information analyses were used to examine linear, monotonic, and nonlinear relationships between environmental variables and physiological indicators, and the resulting weights were incorporated into a Long Short-Term Memory (LSTM) model for multi-step prediction. Visual elements and noise exposure were the main factors influencing physiological responses. Among the models, the mutual-information-weighted LSTM performed best, achieving an R² of 0.77 for heart rate variability (RMSSD), whereas prediction of the EEG ratio (β/α and θ/β) remained limited. An additional independent street sample outside the training set was then used to generate a dual-dimensional EEG-ECG physiological response map, demonstrating the model’s potential for identifying emotional risk segments and supporting street-level micro-renewal.
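One way to realize the mutual-information weighting step described in this abstract is to score each input feature's nonlinear dependence on the physiological target and rescale the features before feeding them to the sequence model. A minimal sketch on synthetic data, with hypothetical feature names; this is not the authors' code:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)

# Synthetic 1 s multimodal features, e.g. sound level, green-view
# ratio, walking speed (columns 0-2; names are illustrative).
X = rng.normal(size=(500, 3))
# Synthetic physiological target (stand-in for RMSSD), driven
# nonlinearly by feature 0 only.
y = np.tanh(X[:, 0]) + 0.05 * rng.normal(size=500)

# Estimate each feature's mutual information with the target and
# normalize to weights that sum to one.
mi = mutual_info_regression(X, y, random_state=0)
weights = mi / mi.sum()

# Rescale the features by their weights before the LSTM stage.
X_weighted = X * weights
```

In the paper these weighted sequences feed an LSTM for multi-step prediction; here the weighting step alone is shown.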
18 pages, 1492 KB  
Systematic Review
Effects of Visual and Spatial Factors on Classical Music Listening: A Systematic Review
by Carlo-Ferdinando de Nardis, Mariangela De Vita and Alessio Gabriele
Architecture 2026, 6(2), 66; https://doi.org/10.3390/architecture6020066 - 20 Apr 2026
Abstract
This paper presents a systematic review, conducted in accordance with PRISMA guidelines, synthesising evidence on how visual and spatial features of classical concert settings—such as performer visibility, seating position and sightlines, stage layout, lighting, and vibrotactile cues—shape listeners’ engagement and judgments. RILM, APA PsycNet, PubMed, and Scopus were searched for peer-reviewed experimental studies that manipulated or compared visual/spatial dimensions and reported subjective or physiological outcomes relevant to live, non-amplified contexts. Titles, abstracts, and full texts were screened, and data were extracted and analysed with respect to study design, stimulus environment, outcome measures, and main effects. Heterogeneity across studies precluded meta-analysis; therefore, a narrative synthesis was conducted. A total of 23 publications—22 experiments and one meta-analysis—met the inclusion criteria: the reviewed studies primarily examined issues related to visual presence and spatial configuration. Most studies relied on laboratory or home-based audiovisual reproductions, with only one study collecting data in a naturalistic performance setting. The evidence is limited by methodological heterogeneity, the predominance of simulated environments, and variability in outcome measures. Overall, visual and spatial factors substantially shape classical music listening and the audience experience, underscoring the need for more field-based and methodologically standardised research.
(This article belongs to the Special Issue Integration of Acoustics into Architectural Design)
16 pages, 1896 KB  
Article
ERP Evidence for Cross-Modal Effects on Attractiveness Perception
by Qi Zhang, Linyan Wang and Weijun Li
Brain Sci. 2026, 16(4), 402; https://doi.org/10.3390/brainsci16040402 - 9 Apr 2026
Abstract
Background: Attractiveness plays an important role in social interactions. However, it remains unclear whether presenting attractiveness information across multiple sensory modalities facilitates attractiveness evaluation, and how cross-modal congruency modulates this process. Methods: The present study used event-related potentials (ERPs) to investigate these questions. Participants judged the attractiveness of voices presented alone or paired with faces that were congruent or incongruent in attractiveness. Results: Significant differences were found between unimodal and audiovisual conditions, as well as between congruent and incongruent pairs, during both early perceptual (N1) and later evaluative (P3) stages. Both audiovisual conditions elicited larger N1 amplitudes than the auditory-only condition, and congruent pairs produced larger N1 amplitudes than incongruent pairs. At a later stage, the auditory-only condition produced larger P3 amplitudes than the audiovisual conditions. Furthermore, the interaction between voice attractiveness and visual context on P3 amplitudes was significant. Audiovisual incongruent pairs elicited larger P3 amplitudes than congruent pairs, but only when the voice was unattractive. Conclusions: The results demonstrate that redundant visual cues of attractiveness both accelerate and alter the perception of auditory attractiveness. These audiovisual integration effects occur across different processing stages and may reflect enhanced processing efficiency in multisensory social perception.
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
16 pages, 2595 KB  
Article
Drone Rider: Effects of Wind Conditions on the Sense of Flight
by Hanyi Yang, Shogo Okamoto and Hong Shen
Appl. Sci. 2026, 16(7), 3544; https://doi.org/10.3390/app16073544 - 4 Apr 2026
Abstract
Recent advances in extended reality (XR) have enabled immersive virtual flight experiences for applications such as entertainment and teleoperation support. However, XR-based flight systems that rely primarily on audiovisual cues often fail to evoke a compelling sense of flight and embodied sensation. This study investigates how adaptive wind feedback enhances subjective flight perception in a virtual flight simulation system, Drone Rider. We implemented direction- and velocity-adaptive wind feedback that synchronizes airflow intensity and direction with the user’s motion in the virtual environment, focusing on perceptual effects in a controlled manner to identify key design factors, rather than reproducing aerodynamically accurate airflow. To explore flexible system configurations, two fan installation positions were compared: front-mounted and bottom-mounted. A questionnaire-based user study revealed that adaptive wind feedback significantly enhanced the sense of flight, self-location, and agency compared with the constant-wind and no-wind conditions. However, no significant differences were observed between velocity-adaptive wind and direction- and velocity-adaptive wind conditions. Furthermore, wind delivered from beneath the user yielded flight sensations comparable to those generated by front-mounted airflow. These findings suggest that temporal coupling between airflow intensity and visual motion plays a central role in XR flight perception and provide practical design insights for immersive and flexible XR-based flight simulation systems.
16 pages, 1715 KB  
Article
A Human–System Coupling Framework for Collective Synchronization Through Computational Interpretation of Bodily Energy
by Hsuan-Cheng Lin
Appl. Sci. 2026, 16(7), 3516; https://doi.org/10.3390/app16073516 - 3 Apr 2026
Abstract
This paper proposes a human–system coupling framework for understanding interactive environments in which embodied human activity is continuously translated into perceptual feedback through computational systems. Rather than conceptualizing interaction as a sequence of discrete commands, the framework interprets interactive systems as perceptual mediation environments linking bodily action, computational interpretation, and perceptual response. The framework is illustrated through the EchoCycle installation, which converts mechanical energy generated by cycling into real-time audiovisual feedback. Observations from the installation suggest that participants initially engage in exploratory behavior and gradually develop more stable activity patterns as they adapt to the feedback provided by the system. In shared interaction contexts, the perceptual environment reflects collective activity, creating conditions under which behavioral alignment among participants may emerge. By framing interactive systems as continuous perception–action loops, this study highlights how computational mediation can shape both individual adaptation and collective interaction dynamics. The proposed framework contributes to human–computer interaction and interactive system design by offering an integrated perspective on embodied action, perceptual feedback, and responsive environments.
19 pages, 256 KB  
Article
Blurred Lines: Exploring Bisexual Identity in the Face of Invalidation in a Spanish-Speaking Sample
by Alejandro Kepp Termini and Marta Evelia Aparicio-García
Sexes 2026, 7(2), 16; https://doi.org/10.3390/sexes7020016 - 26 Mar 2026
Abstract
(1) Background: This article explores the qualitative dimensions of bisexual identity through the lived experiences of bisexual individuals. (2) Methods: Drawing on an online questionnaire completed by 226 participants from a Spanish-speaking sample, the study uses a grounded theory-based analysis of participant narratives. (3) Results: The analysis identifies key components of bisexual identity, such as self-recognition, fluidity, and community belonging, as well as recurrent experiences of invalidation, promiscuity stereotypes, and intracommunity discrimination. The findings highlight the processes by which participants navigate and define their bisexuality, emphasizing the interaction between personal introspection, contact with audiovisual media, societal perceptions, and external validation in identity formation. (4) Conclusions: These results provide a nuanced exploration of how bisexual identities are constructed amid persistent challenges of invalidation, erasure, and limited community recognition.
32 pages, 7928 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses a three-dimensional SNN architecture NeuCube that is spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable in terms of revealing patterns in data and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also results in higher or competitive accuracy, even though this was not the main goal here. This is demonstrated through experiments on benchmark datasets, achieving classification accuracy above 80% on single-modality data and 88.9% when multimodal audio–visual data are used, and a “don’t know” output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson’s disease, Alzheimer’s disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games. The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher performance accuracy, and improved adaptability and explainability. This research shows a step toward building brain-inspired AI systems.
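The spiking-neuron building block underlying SNN frameworks of this kind can be illustrated with a minimal leaky integrate-and-fire model. This is a generic textbook sketch, not the NeuCube/eXCube2 implementation; the parameter values are illustrative.

```python
import numpy as np

def lif_spikes(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrate the input current,
    emit a spike and reset whenever the membrane potential crosses
    threshold. Returns a binary spike train."""
    v, spikes = 0.0, []
    for i in current:
        v += dt / tau * (-v + i)   # leaky integration step
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset            # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant suprathreshold input produces a regular spike train;
# zero input produces none.
train = lif_spikes(np.full(100, 1.5))
```

A full SNN stacks many such units with learned synaptic weights and spike-timing-based learning rules; the point here is only the spike-generation mechanism.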
27 pages, 12591 KB  
Article
Audio–Visual Fusion Sim2Real Platform for Anti-UAV Detection and Tracking
by Xiaohong Nian, Haolun Liu and Xunhua Dai
Drones 2026, 10(3), 190; https://doi.org/10.3390/drones10030190 - 10 Mar 2026
Abstract
To address the escalating security challenges posed by unauthorized Unmanned Aerial Vehicles, this paper presents a Sim2Real physics-informed audio–visual fusion simulation platform designed to enhance Counter-Unmanned Aerial Vehicle detection and tracking performance. The proposed method integrates two complementary sensing pipelines: a physics-based acoustic localization system utilizing Time Difference of Arrival principles and a deep learning-driven visual detection framework. To ensure robust surveillance against non-cooperative targets, these pipelines are not only fused through strict spatiotemporal synchronization but also mutually reinforce each other—acoustic data guides visual attention in low-visibility scenarios typical of adversarial intrusions, while visual detections refine acoustic parameter estimation. Building upon prior work in multi-modal perception, we extend the framework to dynamic environments characterized by configurable visual obstructions, including smoke and fog, which frequently compromise conventional optical anti-drone systems. Experiments demonstrate that the fusion system progressively adapts to degraded visual conditions, extending tracking continuity from approximately 50% coverage under vision-only operation to near-continuous target awareness, with a moderate trade-off in average angular precision when acoustic-only segments are included. Physical validation with quadrotor Unmanned Aerial Vehicles confirms the platform’s capability to bridge simulation-to-reality gaps. Our results highlight the system’s robustness against sensor degradation and its potential to accelerate the development of resilient multisensor Counter-Unmanned Aerial Vehicle systems while reducing dependency on costly field testing.
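The core geometry behind Time Difference of Arrival localization can be sketched for the simplest two-microphone, far-field case: the arrival-time difference fixes the bearing of the source. This is an illustrative approximation, not the platform's actual multi-microphone pipeline.

```python
import numpy as np

def tdoa_bearing(delta_t, mic_spacing, c=343.0):
    """Far-field bearing (radians) of a sound source from the time
    difference of arrival between two microphones mic_spacing metres
    apart. The angle is measured from broadside (the perpendicular
    to the microphone baseline); c is the speed of sound in m/s."""
    s = np.clip(c * delta_t / mic_spacing, -1.0, 1.0)
    return float(np.arcsin(s))

# A source 30 degrees off broadside with 0.5 m spacing produces
# delta_t = d * sin(theta) / c; the estimator round-trips the formula.
theta = np.deg2rad(30.0)
dt = 0.5 * np.sin(theta) / 343.0
est = tdoa_bearing(dt, 0.5)
```

Real systems estimate `delta_t` by cross-correlating the microphone signals and combine several baselines to resolve a full 3D direction.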
23 pages, 8571 KB  
Article
Audiovisual Modulation of Traffic Noise Effects on Psychological Restoration in Expressway-Adjacent Residential Environments: A Virtual Reality Study
by Tongfei Jin, Zhoutao Zhang and Yuhan Shao
Buildings 2026, 16(4), 873; https://doi.org/10.3390/buildings16040873 - 21 Feb 2026
Abstract
Expressway traffic noise poses a critical threat to public health in developed high-density cities, causing chronic environmental stress in adjacent residential areas. While physical noise barriers are commonly used, the potential of audiovisual interactions in mitigating the adverse effects of traffic noise remains under-explored. Using immersive virtual reality (VR), this study examined the efficacy of visual greenery and auditory masking (birdsong) in promoting stress recovery, and tested whether audiovisual perception mediates the environment–restoration link. Following an acute stressor, 100 participants were randomly assigned to a 2 × 2 between-subjects experiment manipulating Green View Index (high vs. low) and soundscape composition (traffic noise vs. traffic noise plus birdsong), with 25 participants in each group. Restorative outcomes were assessed using self-reported measures and continuous physiological monitoring (heart rate variability [HRV] and electrodermal activity [EDA]). Results demonstrated that high-intensity visual greenery and natural sounds effectively enhance psychological restoration in noise-affected environments. Structural equation modeling revealed that audiovisual perception fully mediated the relationship between environmental features and restorative outcomes. The physiological outcome showed a distinct tiered restoration pattern, indicating that immediate psychological buffering can be achieved through natural sounds, while consistent visual reinforcement remained essential for deep physiological recovery. Consequently, soundscape planning in expressway-adjacent zones should integrate visual greening strategies to optimize the perceptual masking of traffic noise and enhance the environmental quality.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
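The HRV measures used in studies like this one are simple functions of the beat-to-beat (RR) intervals; RMSSD, for example, is the root mean square of successive RR-interval differences. A minimal sketch with synthetic intervals (the RR values are illustrative, not study data):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals,
    given the intervals in milliseconds."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Synthetic RR series: ~800 ms baseline with small beat-to-beat
# variation. Larger RMSSD indicates greater parasympathetic activity.
rr = [800, 810, 795, 805, 790, 800]
value = rmssd(rr)
```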
18 pages, 1514 KB  
Article
Exploring the Effects of a Computerized Naming Intervention Combined with Cerebellar tDCS in Cantonese-Speaking Individuals with Aphasia
by Maria Teresa Carthery-Goulart, Ada Chu, Anthony Pak-Hin Kong and Mehdi Bakhtiar
Brain Sci. 2026, 16(2), 137; https://doi.org/10.3390/brainsci16020137 - 28 Jan 2026
Abstract
Background/Objectives: This study examined the effects of a computerized naming intervention combined with either cerebellar anodal transcranial direct-current stimulation (A-tDCS) or sham (S-tDCS) on noun and verb naming in Cantonese-speaking persons with chronic stroke-related aphasia (PWA). Methods: A double-blind, randomized, crossover, sham-controlled clinical trial was conducted with six Cantonese-speaking PWA following stroke. Participants received a 60 min computerized naming intervention incorporating audio–visual speech perception cues over five consecutive days, paired with concurrent 20 min of either 2 mA cerebellar A-tDCS or S-tDCS. Generalized linear mixed-effects models (GLMM) and linear mixed-effects models (LME) were used to evaluate naming accuracy and reaction time (RT). Individual variability was further explored through single-case analyses of naming accuracy changes across conditions and grammatical categories. Results: The GLMM showed a significant three-way interaction of condition, grammatical category, and time (p < 0.05). Specifically, the intervention paired with S-tDCS significantly improved verb naming, whereas A-tDCS did not induce significant improvements at the group level, effectively showing significantly smaller gains regarding verb naming compared to S-tDCS. Overall, RT decreased post-treatment across groups, but no significant differences emerged by the tDCS condition. The results support the promising efficacy of the Cantonese computerized audio–visual noun and verb naming therapy. Single-case analyses revealed high inter-individual variability in response to neuromodulation effects on naming and behavioral treatment outcomes. Conclusions: This study contributes to the emerging literature on cerebellar neuromodulation in post-stroke aphasia and underscores the need for larger trials examining grammar-specific (particularly verb-related) effects and polarity-dependent outcomes. It also highlights the value of developing personalized neuromodulation protocols to optimize the efficacy of behavioral language interventions in people with aphasia.
(This article belongs to the Section Neurolinguistics)
12 pages, 589 KB  
Article
Inclusive and Sustainable Digital Innovation Within the Amara Berri System
by Ana Belén Olmos Ortega, Cristina Medrano Pascual, Rosa Ana Alonso Ruiz, María García Pérez and María Ángeles Valdemoros San Emeterio
Sustainability 2026, 18(2), 947; https://doi.org/10.3390/su18020947 - 16 Jan 2026
Abstract
The current debate on digital education is at a crossroads between the need for technological innovation and the growing concern about the impact of passive screen use. In this context, identifying sustainable pedagogical models that integrate Information and Communication Technologies (ICT) in a meaningful and inclusive way is an urgent need. This article presents a case study of the Amara Berri System (ABS), aiming to analyze how inclusive and sustainable digital innovation is operationalized within the system and whether teachers’ length of service is associated with the implementation and perceived impact of inclusive ICT practices. The investigation is based on a mixed-methods sequential design. A questionnaire was administered to a sample of 292 teachers to collect data on their practices and perceptions. Subsequently, a focus group with eight teachers was conducted to further explore the meaning of their practices. Quantitative results show that the implementation and positive evaluation of inclusive ICT practices correlate significantly with teachers’ seniority within the system, which suggests that the model is formative in itself. Qualitative analysis shows that ICTs are not an end in themselves within the ABS, but an empowering tool for the students. The “Audiovisual Media Room”, managed by students, functions as a space for social and creative production that gives technology a pedagogical purpose. The study concludes that the sustainability of digital innovation requires coherence with the pedagogical project. Findings offer valuable implications for the design of teacher training contexts that foster the integration of technology within a framework of truly inclusive education.
(This article belongs to the Special Issue Sustainable Digital Education: Innovations in Teaching and Learning)
20 pages, 5203 KB  
Article
Musical Training and Perceptual History Shape Alpha Dynamics in Audiovisual Speech Integration
by Jihyun Lee, Ji-Hye Han and Hyo-Jeong Lee
Brain Sci. 2025, 15(12), 1258; https://doi.org/10.3390/brainsci15121258 - 24 Nov 2025
Abstract
Introduction: Speech perception relies on integrating auditory and visual information, shaped by both perceptual and cognitive factors. Musical training has been shown to affect multisensory processing, whereas cognitive processes, such as recalibration derived from a perceptual history, influence neural responses to upcoming sensory inputs. To investigate these influences, we evaluated cortical activity associated with the McGurk illusion focusing specifically on how musical training and perceptual history affect multisensory speech perception. Methods: Musicians and age-matched nonmusicians participated in electroencephalogram experiments using a McGurk task. We analyzed five conditions on the basis of stimulus type and participants’ responses and quantified the rate of illusory percepts and cortical alpha power between groups using dynamic imaging of coherent sources. Results: No differences in McGurk susceptibility were detected between musicians and nonmusicians. Source-localized alpha, however, revealed group-specific patterns: musical training was associated with frontal alpha modulation during integration, a finding consistent with enhanced top-down control, whereas nonmusicians relied more on sensory-driven processing. Additionally, illusory responses occurred in auditory-only trials. Follow-up analyses revealed no significant alpha modulation clusters in musicians, but temporal alpha modulations in nonmusicians depending on preceding audiovisual congruency. Conclusions: These findings suggest that musical training may influence the neural mechanisms of audiovisual integration during speech perception. Specifically, musicians appear to employ enhanced top-down control involving frontal regions, whereas nonmusicians rely more on sensory-driven processing mediated by parietal and temporal regions. Furthermore, perceptual recalibration may be more prominent in nonmusicians, whereas musicians appear to focus more on current sensory input, reducing their reliance on perceptual history.
(This article belongs to the Special Issue Plasticity of Sensory Cortices: From Basic to Clinical Research)
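Alpha power of the kind analysed in this study is the signal energy in the 8–13 Hz band of an EEG spectrum. A generic FFT-based sketch (illustrative only; the study uses source-localised measures, not this raw-channel estimate):

```python
import numpy as np

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Mean power spectral density within [f_lo, f_hi] Hz,
    defaulting to the alpha band (8-13 Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[mask].mean())

fs = 250.0                       # a common EEG sampling rate
t = np.arange(0, 4.0, 1.0 / fs)  # 4 s epoch
# A 10 Hz oscillation carries far more alpha-band power than a
# 40 Hz (gamma-range) oscillation of the same amplitude.
alpha_sig = np.sin(2 * np.pi * 10 * t)
gamma_sig = np.sin(2 * np.pi * 40 * t)
```

Production pipelines would typically use Welch averaging and artifact rejection before comparing conditions; the band-selection step is the same.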
20 pages, 443 KB  
Article
Disinformation in Crisis Contexts—Perception of Russia Today’s Narratives in Ecuador
by Abel Suing
Journal. Media 2025, 6(4), 192; https://doi.org/10.3390/journalmedia6040192 - 15 Nov 2025
Viewed by 2944
Abstract
Disinformation poses a substantive challenge to democratic governance, particularly in contexts marked by foreign influence. While the broadcasting of Russia Today (RT) in Europe has received significant attention, comparatively little is known about its impact and audience reception in Latin America. This study addresses this gap by analysing Ecuadorians’ perceptions and uptake of RT’s broadcast narratives during a period of acute economic and security crisis. The objectives are (1) to establish the news narratives presented on RT, (2) to identify citizens’ perceptions of the news narratives, and (3) to determine the uptake of the narratives. A mixed methodological approach is undertaken, including narrative analysis of three audiovisual news pieces published by RT in Spanish, a survey, and three online focus groups. The results reveal the deployment of sophisticated narrative strategies that mix information with unsubstantiated claims and emotional appeals, resulting in a discernible bias favouring Russian perspectives. The findings underscore the urgency of strengthening media literacy and public policy responses in Latin America to counter the internalisation of such narratives. In addition, the research contributes to debates on information security, democratic resilience, and the protection of public opinion in vulnerable environments. Full article
(This article belongs to the Special Issue Social Media in Disinformation Studies)
17 pages, 972 KB  
Article
Audiovisual Integration Enhances Customer Perception of Artisanal Bread Sounds
by Tianyi Zhang, Maciej Chmara and Charles Spence
Foods 2025, 14(21), 3714; https://doi.org/10.3390/foods14213714 - 30 Oct 2025
Viewed by 919
Abstract
Auditory cues are an important, though often overlooked, component of the multisensory experience of food consumption, directly influencing consumer perception and enjoyment. This study investigates how prior food-related experiences affect the perception of, and preference for, food sounds, with a focus on artisanal bread, a popular staple food with distinctive auditory characteristics. A group of 113 participants was recruited and assigned to one of two groups: 53 attended a bread-making workshop to establish enriched audiovisual associations, while 60 watched bread-making videos online, which represented a comparatively limited form of sensory engagement. Participants rated their perceived comfort levels for three distinct bread-related food sounds before and after the intervention. Sound recognition performance and the appeal of the sounds were also assessed. The results revealed that those who attended the workshop evaluated the close-up food sounds significantly more positively than those who watched the videos. Furthermore, regression analyses revealed that greater visual involvement during the workshop or video viewing was associated with increased comfort and decreased annoyance in response to the close-up bread sounds. These findings underscore the importance of multisensory integration experiences, particularly audiovisual integration, in shaping consumer responses to, and preferences for, food sounds. To ensure that consumers feel comfortable, and even hungry, when listening to food-related audio content, it is beneficial to incorporate familiar food sounds and, where possible, reinforce them with visual or experiential cues. Content that leverages multisensory associations and aligns with listeners’ prior experiences is likely to be more effective in eliciting positive sensory and emotional responses. Full article
(This article belongs to the Section Sensory and Consumer Sciences)