Search Results (15)

Search Parameters:
Keywords = soundtrack

13 pages, 18381 KiB  
Article
Sound and Perception in Ridley Scott’s Blade Runner (1982)
by Audrey Scotto le Massese
Arts 2024, 13(5), 154; https://doi.org/10.3390/arts13050154 - 5 Oct 2024
Viewed by 3409
Abstract
This paper discusses the renewal of the conception of film sound and music following the technological advances of the late 1970s. It analyses the ways in which film sound and music freed themselves from traditional uses and became elements to be designed creatively. The soundtrack composed by Vangelis for Blade Runner (1982) is exceptional in this regard: produced in parallel to the editing of the film, it forged an intimate connection between sound and image. Through the method of reduced listening put forward by Michel Chion in Audio-Vision (2019), this paper scrutinizes the specific ways in which sound shapes the perception of the image and narrative in Blade Runner. The first part of this paper analyses how sounds come to replace music to characterize moods and atmospheres. Ambient sounds create a concrete, sonically dense diegetic world, while music is associated with an abstract, extra-diegetic world where spectators are designated judges. This contrast is thematically relevant and delineates the struggle between humans and replicants; sound and music are used for their metaphorical implications rather than in an effort for realism. The second part discusses the agency of characters through the sonorousness of their voices and bodies. Intonations, pronunciation, and acousmatic sounds anchor characters’ natures as humans or replicants to their bodies. Yet, these bodies are revealed to be mere vessels awaiting definition; in the third part, we explore how sound is used to craft synaesthetic depictions of characters, revealing their existence beyond the human/replicant divide.
(This article belongs to the Special Issue Film Music)

20 pages, 605 KiB  
Article
Camera-Sourced Heart Rate Synchronicity: A Measure of Immersion in Audiovisual Experiences
by Joseph Williams, Jon Francombe and Damian Murphy
Appl. Sci. 2024, 14(16), 7228; https://doi.org/10.3390/app14167228 - 16 Aug 2024
Viewed by 1386
Abstract
Audio presentation is often credited with influencing a viewer’s feeling of immersion during an audiovisual experience. However, there is limited empirical research supporting this claim. This study aimed to explore this effect by presenting a clip renowned for its immersive soundtrack to two groups of participants with either high-end or basic audio presentation. To measure immersion, a novel method is applied, which utilises a camera instead of an electrocardiogram (ECG) for acquiring a heart rate synchronisation feature. The results of the study showed no difference in the feature, or in the responses to an established immersion questionnaire, between the two groups of participants. However, the camera-sourced HR synchronicity feature correlated with the results of the immersion questionnaire. Moreover, the camera-sourced HR synchronicity feature was found to correlate with an equivalent feature sourced from synchronously recorded ECG data. Hence, this shows the viability of using a camera instead of an ECG sensor to quantify heart rate synchronisation but suggests that audio presentation alone is not capable of eliciting a measurable difference in the feeling of immersion in this context.
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)

24 pages, 4196 KiB  
Article
Integrated STEAM Education for Students’ Creativity Development
by Josina Filipe, Mónica Baptista and Teresa Conceição
Educ. Sci. 2024, 14(6), 676; https://doi.org/10.3390/educsci14060676 - 20 Jun 2024
Cited by 5 | Viewed by 4150
Abstract
This study aims to explore how a learning sequence designed with an Integrated STEAM Education perspective (iSTEAM) contributes to students’ levels of creativity. The participants were 9th- and 10th-grade students aged between 14 and 16. Students were challenged to produce a soundtrack for an animation video. This was achieved by building artifacts and using the physics phenomena under study (mechanical energy) to produce sound effects, which were later digitally recorded and assembled to build the video’s soundtrack. This research contributes to addressing the importance of integrated STEAM education and of digital competence in developing students’ creativity in problem solving.
(This article belongs to the Special Issue STEAM Education and Digital Competencies)

15 pages, 306 KiB  
Article
Songlines Are for Singing: Un/Mapping the Lived Spaces of Travelling Memory
by Les Roberts
Humanities 2023, 12(3), 52; https://doi.org/10.3390/h12030052 - 16 Jun 2023
Cited by 1 | Viewed by 3080
Abstract
Putting to work the dialectical concept of ‘un/mapping’, this paper examines the immateriality of cultural memory as coalescent in and around songlines: spatial stories woven from the autobiogeographical braiding of music and memory. Borrowing from Erll’s concept of ‘travelling memory’ (2011), the idea of songlines provides a performative framework with which both to travel with music memory and to map/unmap the travelling of music memory. The theoretical focus of the work builds on empirical studies into music, place and cultural memory in the form of interviews conducted across the UK in 2010–2013. The interviews were designed to explore the way people’s musical pasts—memories of listening to music in the domestic home, for example, or attendance at concerts and festivals, music as soundtracks to journeys, holidays or everyday commutes to work or school, music at key rite-of-passage moments—have coloured and given shape to the narratives that structure a sense of embodied selfhood and social identity over time. Songlines, it is shown, tether the self to spaces and temporalities that map a tangled meshwork of lives lived spatially, where the ghosts of musical pasts are as vital and alive as the traveller who has invoked them. Analysis and discussion are centred around the following questions: How should the songlines of memory be mapped in ways that remain true and resonant with those whose spatial stories they tell? How, phenomenologically, can memory be rendered as an energy that remains creatively vital without running the risk of dissipating that energy by seeking to fix it in space and time (to memorialise it)? And if, as is advocated in the paper, we should not be in the business of mapping songlines, how do we go about the task of singing them?
Pursuing these and other lines of enquiry, this paper explores a spatial anthropology of movement and travel in which the un/mapping of popular music memory mobilises phenomenological understandings of the entanglements of self, culture and embodied memory.
(This article belongs to the Special Issue The Phenomenology of Travel and Tourism)
12 pages, 15123 KiB  
Article
You Look at Me Looking at You Looking at Me
by Brigitte Jurack
Arts 2023, 12(2), 73; https://doi.org/10.3390/arts12020073 - 4 Apr 2023
Cited by 1 | Viewed by 3641
Abstract
Living and working for a month at the Sanskriti Foundation in Delhi, the artist was watched and observed by a group of resident monkeys. This paper is based on notes begun during that studio residency and represents the critical reflections emerging alongside the hands-on sculptural practice. It is illustrated with close-up photographs of the artist’s sculpture, which asks how encounters with fabled animals in densely populated 21st-century urban areas can alter our understanding of the gaze as an inter-species gaze. The sculpture and paper begin to ask broader questions: How can sculpture provide a different, and perhaps more tacit and empathetic, encounter with the other, enabling a physical, mental or spiritual experience of cultural entanglement between the various onlookers? To what extent is modelling the other’s gaze a form of embodiment and mimicry? Do the fast-changing camera angles and soundtracks of natural history programmes hinder an empathic inter-species encounter? Or does the slow animation of the artist’s sculpted surface heighten a sense of being alongside equally curious, cunning and adaptable others such as crows, foxes and monkeys?
(This article belongs to the Special Issue Art and Animals and the Ethical Position)

17 pages, 7576 KiB  
Article
Banging Interaction: A Ubimus-Design Strategy for the Musical Internet
by Damián Keller, Azeema Yaseen, Joseph Timoney, Sutirtha Chakraborty and Victor Lazzarini
Future Internet 2023, 15(4), 125; https://doi.org/10.3390/fi15040125 - 27 Mar 2023
Cited by 2 | Viewed by 2073
Abstract
We introduce a new perspective for musical interaction tailored to a specific class of sonic resources: impact sounds. Our work is informed by the field of ubiquitous music (ubimus) and engages with the demands of artistic practices. Through a series of deployments of a low-cost and highly flexible network-based prototype, the Dynamic Drum Collective, we exemplify the limitations and specific contributions of banging interaction. Three components of this new design strategy—adaptive interaction, mid-air techniques and timbre-led design—target the development of creative-action metaphors that make use of resources available in everyday settings. The techniques involving the use of sonic gridworks yielded positive outcomes. The subjects tended to choose sonic materials that—when combined with their actions on the prototype—approached a full rendition of the proposed soundtrack. The results of the study highlighted the subjects’ reliance on visual feedback as a non-exclusive strategy to handle both temporal organization and collaboration. The results show a methodological shift from device-centric and instrumental-centric methods to designs that target the dynamic relational properties of ubimus ecosystems.
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia)

13 pages, 1581 KiB  
Article
Pop/Poetry: Dickinson as Remix
by Julia Leyda and Maria Sulimma
Arts 2023, 12(2), 62; https://doi.org/10.3390/arts12020062 - 22 Mar 2023
Cited by 2 | Viewed by 12532 | Correction
Abstract
In its meticulous, freewheeling adaptation of the life and work of celebrated poet Emily Dickinson, the television series Dickinson (Apple TV+, 2019–2021) manifests a twenty-first-century disruption of high and low culture afforded by digital media, including streaming video and music platforms. This article argues that the fanciful series models a mixed-media, multimodal aesthetic form that invites a diverse range of viewers to find pleasure in Dickinson’s poetry itself and in the foibles of its author, regardless of their familiarity with the literary or cultural histories of the US American 19th century. Dickinson showcases creator Alena Smith’s well-researched knowledge of the poet and her work, while simultaneously mocking popular (mis)conceptions about her life and that of other literary figures such as Walt Whitman and Sylvia Plath, all set to a contemporary soundtrack. This analysis of Dickinson proposes to bring into conversation shifting boundaries of high and low culture across generations and engage with critical debates about the utility of the popular (and of studies of the popular) in literary and cultural studies in particular.
(This article belongs to the Special Issue New Perspectives on Pop Culture)

13 pages, 3161 KiB  
Article
Music Emotion Recognition Based on a Neural Network with an Inception-GRU Residual Structure
by Xiao Han, Fuyang Chen and Junrong Ban
Electronics 2023, 12(4), 978; https://doi.org/10.3390/electronics12040978 - 15 Feb 2023
Cited by 28 | Viewed by 7173
Abstract
As a key field in music information retrieval, music emotion recognition is a challenging task. To enhance the accuracy of music emotion classification and recognition, this paper applies the idea of the Inception structure, using different receptive fields to extract features of different dimensions and performing compression, expansion, and recompression operations to mine more effective features, while connecting the timing signals in the residual network to a GRU module to extract temporal features. A one-dimensional (1D) residual Convolutional Neural Network (CNN) with an improved Inception module and Gate Recurrent Unit (GRU) was presented and tested on the Soundtrack dataset. The Fast Fourier Transform (FFT) was used to process the samples experimentally and determine their spectral characteristics. Compared with shallow learning methods such as support vector machines and random forests, and with the deep learning method based on the Visual Geometry Group (VGG) CNN proposed by Sarkar et al., the proposed 1D CNN with the Inception-GRU residual structure demonstrated better performance in music emotion recognition and classification tasks, achieving an accuracy of 84%.
(This article belongs to the Special Issue Artificial Intelligence Technologies and Applications)

15 pages, 613 KiB  
Article
Towards a Characterization of Background Music Audibility in Broadcasted TV
by Roser Batlle-Roca, Perfecto Herrera-Boyer, Blai Meléndez-Catalán, Emilio Molina and Xavier Serra
Int. J. Environ. Res. Public Health 2023, 20(1), 123; https://doi.org/10.3390/ijerph20010123 - 22 Dec 2022
Cited by 1 | Viewed by 3528
Abstract
In audiovisual contexts, different conventions determine the level at which background music is mixed into the final program, and sometimes the mix renders the music practically or totally inaudible. From a perceptual point of view, the audibility of music is subject to auditory masking by other aural stimuli such as voice or additional sounds (e.g., applause, laughter, horns), and is also influenced by the visual content that accompanies the soundtrack, and by attentional and motivational factors. This situation is relevant to the music industry because, according to some copyright regulations, non-audible background music must not generate any distribution rights, and marginally audible background music must generate half of the standard value of audible music. In this study, we conduct two psychoacoustic experiments to identify several factors that influence background music perception, and their contribution to its variable audibility. Our experiments are based on auditory detection and chronometric tasks involving keyboard interactions with original TV content. From the collected data, we estimated a sound-to-music ratio range to define the audibility threshold limits of the barely audible class. In addition, results show that perception is affected by loudness level, listening condition, music sensitivity, and type of television content.

18 pages, 3158 KiB  
Article
Creating Audio Object-Focused Acoustic Environments for Room-Scale Virtual Reality
by Constantin Popp and Damian T. Murphy
Appl. Sci. 2022, 12(14), 7306; https://doi.org/10.3390/app12147306 - 20 Jul 2022
Cited by 10 | Viewed by 5577
Abstract
Room-scale virtual reality (VR) affordances in movement and interactivity create new challenges for constructing virtual acoustic environments for VR experiences. Such environments are typically constructed from virtual interactive objects that are accompanied by an Ambisonic bed and an off-screen (“invisible”) music soundtrack, with the Ambisonic bed, music, and virtual acoustics describing the aural features of an area. This methodology can become problematic in room-scale VR as the player cannot approach or interact with such background sounds, contradicting the player’s motion aurally and limiting interactivity. Written from a sound designer’s perspective, the paper addresses these issues by proposing a musically inclusive novel methodology that reimagines an acoustic environment predominately using objects that are governed by multimodal rule-based systems and spatialized in six degrees of freedom using 3D binaural audio exclusively, while minimizing the use of Ambisonic beds and non-diegetic music. This methodology is implemented using off-the-shelf, creator-oriented tools and methods and is evaluated through the development of a standalone, narrative, prototype room-scale VR experience. The experience’s target platform is a mobile, untethered VR system based on head-mounted displays, inside-out tracking, head-mounted loudspeakers or headphones, and hand-held controllers. The authors apply their methodology to the generation of ambiences based on sound-based music, sound effects, and virtual acoustics. The proposed methodology benefits the interactivity and spatial behavior of virtual acoustic environments but may be constrained by platform and project limitations.

22 pages, 1279 KiB  
Article
A Music Playback Algorithm Based on Residual-Inception Blocks for Music Emotion Classification and Physiological Information
by Yi-Jr Liao, Wei-Chun Wang, Shanq-Jang Ruan, Yu-Hao Lee and Shih-Ching Chen
Sensors 2022, 22(3), 777; https://doi.org/10.3390/s22030777 - 20 Jan 2022
Cited by 8 | Viewed by 3357
Abstract
Music can have a positive effect on runners’ performance and motivation. However, the practical implementation of music intervention during exercise is mostly absent from the literature. Therefore, this paper designs a playback sequence system for joggers by considering music emotion and physiological signals. This playback sequence is implemented by a music selection module that combines artificial intelligence techniques with physiological data and emotional music. In order to make the system operate for a long time, this paper improves the model and the music selection module to achieve lower energy consumption. The proposed model obtains fewer FLOPs and parameters by using log-scaled Mel-spectrograms as input features. The accuracy, computational complexity, trainable parameters, and inference time are evaluated on the Bi-modal, 4Q emotion, and Soundtrack datasets. The experimental results show that the proposed model outperforms that of Sarkar et al. and achieves competitive performance on the Bi-modal (84.91%), 4Q emotion (92.04%), and Soundtrack (87.24%) datasets. More specifically, the proposed model reduces the computational complexity and inference time while maintaining the classification accuracy, compared to other models. Moreover, the proposed model is small, so it can be applied to mobile phones and other devices with limited computing resources. This study designed the overall playback sequence system by considering the relationship between music emotion and physiological state during exercise. The playback sequence system can be adopted directly during exercise to improve users’ exercise efficiency.

19 pages, 5757 KiB  
Article
Music and Time Perception in Audiovisuals: Arousing Soundtracks Lead to Time Overestimation No Matter Their Emotional Valence
by Alessandro Ansani, Marco Marini, Luca Mallia and Isabella Poggi
Multimodal Technol. Interact. 2021, 5(11), 68; https://doi.org/10.3390/mti5110068 - 29 Oct 2021
Cited by 4 | Viewed by 8806
Abstract
One of the most tangible effects of music is its ability to alter our perception of time. Research on waiting times and on time estimation of musical excerpts has attested to these effects. Nevertheless, contrasting results exist regarding the influence of several musical features on time perception. When considering emotional valence and arousal, there is some evidence that positive-affect music fosters time underestimation, whereas negative-affect music leads to overestimation; results for arousal, by contrast, remain mixed. Furthermore, to the best of our knowledge, a systematic investigation has not yet been conducted within the audiovisual domain, wherein music might improve the interaction between the user and the audiovisual media by shaping the recipients’ time perception. Through the current between-subjects online experiment (n = 565), we sought to analyze the influence that four soundtracks (happy, relaxing, sad, scary), differing in valence and arousal, exerted on the time estimation of a short movie, as compared to a no-music condition. The results reveal that (1) the mere presence of music led to time overestimation as opposed to the absence of music, and (2) the soundtracks that were perceived as more arousing (i.e., happy and scary) led to time overestimation. The findings are discussed in terms of psychological and phenomenological models of time perception.
(This article belongs to the Special Issue Musical Interactions (Volume II))

19 pages, 291 KiB  
Article
Not Different Enough: Avoiding Representation as “Balkan” and the Constrained Appeal of Macedonian Ethno Music
by Dave Wilson
Arts 2020, 9(2), 45; https://doi.org/10.3390/arts9020045 - 30 Mar 2020
Cited by 3 | Viewed by 5182
Abstract
Since the early 1990s, interest in various forms of traditional music among middle-class urban ethnic Macedonians has grown. Known by some as the “Ethno Renaissance”, this trend initially grew in the context of educational ensembles in Skopje and gained momentum due to the soundtrack of the internationally acclaimed Macedonian film Before the Rain (1994) and the formation of the group DD Synthesis by musician and pedagogue Dragan Dautovski. This article traces the development of this multifaceted musical practice, which became known as “ethno music” (etno muzika) and now typically features combinations of various traditional music styles with one another and with other musical styles. Ethno music articulates dynamic changes in Macedonian politics and wider global trends in the “world music” market, which valorizes musical hybridity as “authentic” and continues to prioritize performers perceived as exotic and different. This article discusses the rhetoric, representation, and musical styles of ethno music in the 1990s and in a second wave of “ethno bands” (etno bendovi) that began around 2005. Drawing on ethnography conducted between 2011 and 2018 and on experience as a musician performing and recording in Macedonia periodically since 2003, I argue that, while these bands and their multi-layered musical projects resonate with middle-class, cosmopolitan audiences in Macedonia and its diaspora, their avoidance of the term “Balkan” and associated stereotypes constrains their popularity to Macedonian audiences and prevents them from participating widely in world music festival networks and related markets.
(This article belongs to the Special Issue Balkan Music: Past, Present, Future)
38 pages, 1782 KiB  
Article
Soundtracking the Public Space: Outcomes of the Musikiosk Soundscape Intervention
by Daniel Steele, Edda Bild, Cynthia Tarlao and Catherine Guastavino
Int. J. Environ. Res. Public Health 2019, 16(10), 1865; https://doi.org/10.3390/ijerph16101865 - 27 May 2019
Cited by 47 | Viewed by 8848
Abstract
Decades of research support the idea that striving for lower sound levels is the cornerstone of protecting urban public health. Growing insight on urban soundscapes, however, highlights a more complex role of sound in public spaces, mediated by context, and the potential of soundscape interventions to contribute to the urban experience. We discuss Musikiosk, an unsupervised installation allowing users to play audio content from their own devices over publicly provided speakers. Deployed in the gazebo of a pocket park in Montreal (Parc du Portugal) in the summer of 2015, its effects on the quality of the public urban experience of park users were researched using a mixed-methods approach, combining questionnaires, interviews, behavioral observations, and acoustic monitoring, as well as public outreach activities. An integrated analysis of results revealed positive outcomes both at the individual level (in terms of soundscape evaluations and mood benefits) and at the social level (in terms of increased interaction and lingering behaviors). The park was perceived as more pleasant and convivial by both users and non-users, and the perceived soundscape calmness and appropriateness were not affected. Musikiosk animated an underused section of the park without displacing existing users while promoting increased interaction and sharing, particularly of music. It also led to a strategy for interacting with both residents and city decision-makers on matters related to urban sound.

10 pages, 2187 KiB  
Article
Influence of Soundtrack on Eye Movements During Video Exploration
by Antoine Coutrot, Nathalie Guyader, Gelu Ionescu and Alice Caplier
J. Eye Mov. Res. 2012, 5(4), 1-10; https://doi.org/10.16910/jemr.5.4.2 - 8 Aug 2012
Cited by 39 | Viewed by 113
Abstract
Models of visual attention rely on visual features such as orientation, intensity or motion to predict which regions of complex scenes attract the gaze of observers. So far, sound has never been considered as a possible feature that might influence eye movements. Here, we evaluate the impact of non-spatial sound on the eye movements of observers watching videos. We recorded the eye movements of 40 participants watching assorted videos with and without their related soundtracks. We found that sound affects eye position, fixation duration and saccade amplitude. The effect of sound is not constant across time but becomes significant around one second after the beginning of video shots.
