Search Results (11)

Search Parameters:
Keywords = ubiquitous music

15 pages, 1643 KB  
Article
Towards Building a Unified Adsorption Model for Goethite Based on Variable Crystal Face Contributions: III Carbonate Adsorption
by Mario Villalobos and América Xitlalli Cruz-Valladares
Colloids Interfaces 2025, 9(4), 51; https://doi.org/10.3390/colloids9040051 - 18 Aug 2025
Cited by 1 | Viewed by 964
Abstract
Goethite, a ubiquitous Fe(III) oxyhydroxide mineral, typically occurs in very small particle sizes whose interfacial properties critically influence the fate and transport of ionic species in natural systems. The surface site density of synthetic goethite increases with particle size, resulting in enhanced adsorption capacity per unit area. In the first two parts of this study, we modeled the adsorption of protons, nitrate, As(V), Pb(II), Zn(II), and phosphate on goethite as a function of particle size, adsorbate concentration, pH, and ionic strength, using unified parameters within the CD-MUSIC framework. Here, we extend this work to characterize the interfacial behavior of carbonate in goethite suspensions, using a comprehensive dataset generated previously under both closed and open CO2 system conditions. Carbonate oxyanions, prevalent in geochemical environments, exhibit competitive and complexation interactions with other ions and mineral surfaces. Although a bidentate bridging surface carbonate complex has been successful in previous modeling efforts on goethite, we found that the size of the carbonate moiety is too small and would require extreme octahedron bending of the goethite’s singly coordinated sites to accommodate this type of binding. Here, we propose a novel complex configuration that considers structural, physicochemical, and spectroscopic evidence. Optimal unified affinity constants and charge distribution parameters for this complex simulated all experimental data successfully, providing further validation of the CD-MUSIC model for describing relevant goethite/aqueous interfacial reactions. Full article
(This article belongs to the Special Issue Ten Years Without Nikola Kallay)

24 pages, 896 KB  
Article
The Ubimus Plugging Framework: Deploying FPGA-Based Prototypes for Ubiquitous Music Hardware Design
by Damián Keller, Aman Jagwani and Victor Lazzarini
Computers 2025, 14(4), 155; https://doi.org/10.3390/computers14040155 - 21 Apr 2025
Viewed by 1744
Abstract
The emergent field of embedded computing presents a challenging scenario for ubiquitous music (ubimus) design. Available tools demand specific technical knowledge—as exemplified in the techniques involved in programming integrated circuits of configurable logic units, known as field-programmable gate arrays (FPGAs). Low-level hardware description languages used for handling FPGAs involve a steep learning curve. Hence, FPGA programming offers a unique challenge to probe the boundaries of ubimus frameworks as enablers of fast and versatile prototyping. State-of-the-art hardware-oriented approaches point to the use of high-level synthesis as a promising programming technique. Furthermore, current FPGA system-on-chip (SoC) hardware with an associated onboard general-purpose processor may foster the development of flexible platforms for musical signal processing. Taking into account the emergence of an FPGA-based ecology of tools, we introduce the ubimus plugging framework. The procedures employed in the construction of a modular-synthesis library based on field-programmable gate array hardware, ModFPGA, are documented, and examples of musical projects applying key design principles are discussed. Full article

15 pages, 1317 KB  
Article
Noise Stress Abrogates Structure-Specific Endonucleases within the Mammalian Inner Ear
by O’neil W. Guthrie
Int. J. Mol. Sci. 2024, 25(3), 1749; https://doi.org/10.3390/ijms25031749 - 1 Feb 2024
Cited by 1 | Viewed by 1484
Abstract
Nucleotide excision repair (NER) is a multistep biochemical process that maintains the integrity of the genome. Unlike other mechanisms that maintain genomic integrity, NER is distinguished by two irreversible nucleolytic events that are executed by the xeroderma pigmentosum group G (XPG) and xeroderma pigmentosum group F (XPF) structure-specific endonucleases. Beyond nucleolysis, XPG and XPF regulate the overall efficiency of NER through various protein–protein interactions. The current experiments evaluated whether an environmental stressor could negatively affect the expression of Xpg (Ercc5: excision repair cross-complementing 5) or Xpf (Ercc4: excision repair cross-complementing 4) in the mammalian cochlea. Ubiquitous background noise was used as an environmental stressor. Gene expression levels for Xpg and Xpf were quantified from the cochlear neurosensory epithelium after noise exposure. Further, nonlinear cochlear signal processing was investigated as a functional consequence of changes in endonuclease expression levels. Exposure to stressful background noise abrogated the expression of both Xpg and Xpf, and these effects were associated with pathological nonlinear signal processing from receptor cells within the mammalian inner ear. Given that exposure to environmental sounds (noise, music, etc.) is ubiquitous in daily life, sound-induced limitations to structure-specific endonucleases might represent an overlooked genomic threat. Full article

24 pages, 2723 KB  
Study Protocol
Feeling Connected: The Role of Haptic Feedback in VR Concerts and the Impact of Haptic Music Players on the Music Listening Experience
by Tara Venkatesan and Qian Janice Wang
Arts 2023, 12(4), 148; https://doi.org/10.3390/arts12040148 - 10 Jul 2023
Cited by 10 | Viewed by 10981
Abstract
Today, some of the most widely attended concerts are in virtual reality (VR). For example, the videogame Fortnite recently attracted 12.3 million viewers sitting in homes all over the world to a VR Travis Scott rap concert. As such VR concerts become increasingly ubiquitous, we are presented with an opportunity to design more immersive virtual experiences by augmenting VR with other multisensory technologies. Given that sound is a multi-modal phenomenon that can be experienced sonically and vibrationally, we investigated the importance of haptic feedback to musical experiences using a combination of qualitative and empirical methodologies. Study 1 was a qualitative study demonstrating that, unlike their live counterparts, current VR concerts make it harder for audiences to form a connection with artists and their music. Furthermore, VR concerts lack multisensory feedback and are perceived as less authentic than live concert experiences. Participants also identified a variety of different kinds of touch that they receive at live concerts and suggested that ideal VR concerts would replicate physical touch and thermal feedback from the audience, emotional touch, and vibrations from the music. Specifically, users advocated for the use of haptic devices to increase the immersiveness of VR concert experiences. Study 2 isolated the role of touch in the music listening experience and empirically investigated the impact of haptic music players (HMPs) on the audio-only listening experience. An empirical, between-subjects study was run with participants either receiving vibrotactile feedback via an HMP (haptics condition) or no vibrotactile feedback (control) while listening to music. Results indicated that listening to music while receiving vibrotactile feedback increased participants’ sense of empathy, parasocial bond, and loyalty towards the artist, while also decreasing participants’ feelings of loneliness. 
The connection between haptics condition and these dependent variables was mediated by the feeling of social presence. Study 2 thus provides initial evidence that HMPs may be used to meet people’s need for connection, multisensory immersion, and complex forms of touch in VR concerts as identified in Study 1. Full article
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

17 pages, 7576 KB  
Article
Banging Interaction: A Ubimus-Design Strategy for the Musical Internet
by Damián Keller, Azeema Yaseen, Joseph Timoney, Sutirtha Chakraborty and Victor Lazzarini
Future Internet 2023, 15(4), 125; https://doi.org/10.3390/fi15040125 - 27 Mar 2023
Cited by 2 | Viewed by 2572
Abstract
We introduce a new perspective for musical interaction tailored to a specific class of sonic resources: impact sounds. Our work is informed by the field of ubiquitous music (ubimus) and engages with the demands of artistic practices. Through a series of deployments of a low-cost and highly flexible network-based prototype, the Dynamic Drum Collective, we exemplify the limitations and specific contributions of banging interaction. Three components of this new design strategy—adaptive interaction, mid-air techniques and timbre-led design—target the development of creative-action metaphors that make use of resources available in everyday settings. The techniques involving the use of sonic gridworks yielded positive outcomes. The subjects tended to choose sonic materials that—when combined with their actions on the prototype—approached a full rendition of the proposed soundtrack. The results of the study highlighted the subjects’ reliance on visual feedback as a non-exclusive strategy to handle both temporal organization and collaboration. The results show a methodological shift from device-centric and instrumental-centric methods to designs that target the dynamic relational properties of ubimus ecosystems. Full article
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia)

16 pages, 853 KB  
Article
Effects of Live Music on the Perception of Noise in the SICU/PICU: A Patient, Caregiver, and Medical Staff Environmental Study
by Andrew Rossetti, Joanne Loewy, Wen Chang-Lit, Nienke H. van Dokkum, Erik Baumann, Gabrielle Bouissou, John Mondanaro, Todd O’Connor, Gabriela Asch-Ortiz and Hayato Mitaka
Int. J. Environ. Res. Public Health 2023, 20(4), 3499; https://doi.org/10.3390/ijerph20043499 - 16 Feb 2023
Cited by 9 | Viewed by 4533
Abstract
Intensive Care Units (ICUs) require a multidisciplinary team that consists of, but is not limited to, intensivists (clinicians who specialize in critical illness care), pharmacists and nurses, respiratory care therapists, and other medical consultants from a broad range of specialties. The complex and demanding critical care environment provides few opportunities for patients and personal and professional caregivers to evaluate how sound affects them. A growing body of literature attests to noise’s adverse influence on patients’ sleep, and high sound levels are a source of staff stress, as noise is a ubiquitous and noxious stimulus. Vulnerable patients have a low threshold tolerance to audio-induced stress. Despite these indications, peak sound levels often register as high as ventilators, and the documented noise levels in hospitals continue to rise. This baseline study, carried out in two hospitals’ Surgical and Pediatric Intensive Care Units, measured the effects of live music on the perception of noise by surveying patients, personal caregivers, and staff in randomized conditions of no music and music provided by music therapists through our hospital system’s environmental music therapy program. Full article
(This article belongs to the Special Issue Music for Health Care and Well-Being)

11 pages, 460 KB  
Article
Hybrid Direction of Arrival Precoding for Multiple Unmanned Aerial Vehicles Aided Non-Orthogonal Multiple Access in 6G Networks
by Laura Pierucci
Appl. Sci. 2022, 12(2), 895; https://doi.org/10.3390/app12020895 - 16 Jan 2022
Cited by 4 | Viewed by 2596
Abstract
Unmanned aerial vehicles (UAVs) have attracted increasing attention as relays for effectively improving the coverage and data rate of wireless systems, and according to this vision, they will be integrated into the future sixth-generation (6G) cellular network. Non-orthogonal multiple access (NOMA) and the mmWave band are planned to support ubiquitous connectivity towards a massive number of users in 6G and Internet of Things (IoT) contexts. Unfortunately, the terrestrial wireless link between the end-users and the base station (BS) can suffer severe blockage. Instead, a UAV relay can establish a line-of-sight (LoS) connection with high probability thanks to its flying height. The present paper focuses on a multi-UAV network that supports an uplink (UL) NOMA cellular system. In particular, operating in the mmWave band, a hybrid beamforming architecture is adopted. The MUltiple SIgnal Classification (MUSIC) spectral estimation method is applied in the hybrid beamformer to detect the direction of arrival (DoA) of each UAV. We newly design the sum-rate maximization problem of the UAV-aided NOMA 6G network specifically for uplink mmWave transmission. Numerical results show that UAV relays combined with MUSIC DoA estimation in the hybrid mmWave beamforming achieve a higher sum-rate than UL NOMA connections without the help of a UAV network. Full article
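The MUSIC spectral estimation step mentioned above can be illustrated with a minimal NumPy sketch for a uniform linear array: the noise subspace of the sample covariance matrix yields a pseudospectrum whose peaks mark the directions of arrival. The 8-element array, half-wavelength spacing, and simulated source angles below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def music_doa(X, n_sources, n_angles=181, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array.

    X: (n_antennas, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths (half-wavelength here).
    Returns the angle grid (degrees) and the pseudospectrum.
    """
    n_ant = X.shape[0]
    # Sample covariance of the received snapshots.
    R = X @ X.conj().T / X.shape[1]
    # eigh returns eigenvalues in ascending order; the first
    # (n_ant - n_sources) eigenvectors span the noise subspace.
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, : n_ant - n_sources]
    angles = np.linspace(-90, 90, n_angles)
    spectrum = np.empty(n_angles)
    for i, theta in enumerate(angles):
        # Steering vector for the candidate direction theta.
        a = np.exp(-2j * np.pi * d * np.arange(n_ant)
                   * np.sin(np.deg2rad(theta)))
        # Peaks occur where a(theta) is orthogonal to the noise subspace.
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, spectrum

# Simulate two narrowband sources at -20 and 35 degrees on an 8-element ULA.
rng = np.random.default_rng(0)
n_ant, n_snap = 8, 400
true_doas = np.array([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5
           * np.outer(np.arange(n_ant), np.sin(np.deg2rad(true_doas))))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_ant, n_snap))
               + 1j * rng.standard_normal((n_ant, n_snap)))
X = A @ S + noise

angles, spectrum = music_doa(X, n_sources=2)
# Crude peak picking: local maxima, keep the two largest, sort by angle.
peaks = [i for i in range(1, len(angles) - 1)
         if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: spectrum[i])[-2:])
est_doas = angles[top2]
```

The paper's setting adds hybrid analog/digital beamforming and NOMA power allocation on top of this; the sketch only shows the subspace-based DoA step itself.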
(This article belongs to the Special Issue Unmanned Aerial Vehicles)

35 pages, 535 KB  
Article
Musical Control Gestures in Mobile Handheld Devices: Design Guidelines Informed by Daily User Experience
by Alexandre Clément, Luciano Moreira, Miriam Rosa and Gilberto Bernardes
Multimodal Technol. Interact. 2021, 5(7), 32; https://doi.org/10.3390/mti5070032 - 27 Jun 2021
Cited by 3 | Viewed by 5427
Abstract
Mobile handheld devices, such as smartphones and tablets, have become some of the most prominent ubiquitous terminals within the information and communication technology landscape. Their transformative power within the digital music domain changed the music ecosystem from production to distribution and consumption. Of interest here is the ever-expanding number of mobile music applications. Despite their growing popularity, their design in terms of interaction perception and control is highly arbitrary. It remains poorly addressed in the related literature and lacks a clear, systematized approach. In this context, our paper aims to provide the first steps towards defining guidelines for optimal sonic interaction design practices in mobile music applications. Our design approach is informed by user data on appropriating mobile handheld devices. We conducted an experiment to learn the links between control gestures and musical parameters, such as pitch, duration, and amplitude. A twofold action-reflection protocol and a tool set for evaluating these links are also proposed. The results collected from the experiment show statistically significant trends in pitch and duration control-gesture mappings. On the other hand, amplitude appears to elicit a more diverse mapping approach, showing no definitive trend in this experiment. Full article
(This article belongs to the Special Issue Musical Interactions)

24 pages, 5404 KB  
Article
A Linear Oscillator Model Predicts Dynamic Temporal Attention and Pupillary Entrainment to Rhythmic Patterns
by Lauren K. Fink, Brian K. Hurley, Joy J. Geng and Petr Janata
J. Eye Mov. Res. 2018, 11(2), 1-24; https://doi.org/10.16910/jemr.11.2.12 - 20 Nov 2018
Cited by 27 | Viewed by 686
Abstract
Rhythm is a ubiquitous feature of music that induces specific neural modes of processing. In this paper, we assess the potential of a stimulus-driven linear oscillator model (57) to predict dynamic attention to complex musical rhythms on an instant-by-instant basis. We use perceptual thresholds and pupillometry as attentional indices against which to test our model predictions. During a deviance detection task, participants listened to continuously looping, multi-instrument, rhythmic patterns, while being eye-tracked. Their task was to respond anytime they heard an increase in intensity (dB SPL). An adaptive thresholding algorithm adjusted deviant intensity at multiple probed temporal locations throughout each rhythmic stimulus. The oscillator model predicted participants’ perceptual thresholds for detecting deviants at probed locations, with a low temporal salience prediction corresponding to a high perceptual threshold and vice versa. A pupil dilation response was observed for all deviants. Notably, the pupil dilated even when participants did not report hearing a deviant. Maximum pupil size and resonator model output were significant predictors of whether a deviant was detected or missed on any given trial. Besides the evoked pupillary response to deviants, we also assessed the continuous pupillary signal to the rhythmic patterns. The pupil exhibited entrainment at prominent periodicities present in the stimuli and followed each of the different rhythmic patterns in a unique way. Overall, these results replicate previous studies using the linear oscillator model to predict dynamic attention to complex auditory scenes and extend the utility of the model to the prediction of neurophysiological signals, in this case the pupillary time course; however, we note that the amplitude envelope of the acoustic patterns may serve as a similarly useful predictor.
To our knowledge, this is the first paper to show entrainment of pupil dynamics by demonstrating a phase relationship between musical stimuli and the pupillary signal. Full article
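The core idea of a stimulus-driven linear oscillator can be shown with a toy simulation: a damped harmonic oscillator driven by a click train resonates most strongly when its natural frequency matches a periodicity in the rhythm, which is the intuition behind reading resonator output as a temporal-salience signal. The sampling rate, frequencies, damping, and click train below are illustrative assumptions, not the model parameters used in the paper.

```python
import numpy as np

def oscillator_response(stimulus, fs, f0, damping=2.0):
    """Drive a damped linear oscillator with a rhythmic stimulus.

    stimulus: 1-D drive signal (here, a click train) sampled at fs Hz.
    f0: the oscillator's natural frequency in Hz (a candidate periodicity).
    Returns the oscillator displacement over time (semi-implicit Euler).
    """
    omega = 2.0 * np.pi * f0
    dt = 1.0 / fs
    x, v = 0.0, 0.0
    out = np.empty(len(stimulus))
    for i, s in enumerate(stimulus):
        # x'' = -2*damping*x' - omega^2*x + s(t)
        a = -2.0 * damping * v - omega ** 2 * x + s
        v += a * dt           # update velocity first (symplectic Euler)
        x += v * dt           # then position, for numerical stability
        out[i] = x
    return out

# A 2 Hz click train (impulse every 0.5 s) drives two candidate oscillators.
fs = 200
t = np.arange(0, 8, 1 / fs)
clicks = np.zeros_like(t)
clicks[::fs // 2] = 1.0       # one impulse every 0.5 s

# The resonator matched to the stimulus period (2 Hz) should build up a
# larger steady-state amplitude than a mismatched one (3 Hz).
r2 = oscillator_response(clicks, fs, f0=2.0)
r3 = oscillator_response(clicks, fs, f0=3.0)
```

In the paper's usage, the outputs of a bank of such resonators across frequencies are combined into a continuous temporal-salience curve that is compared against perceptual thresholds and the pupillary time course; this sketch only shows a single resonator's selectivity.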

17 pages, 736 KB  
Article
The Impact of Music and Stretched Time on Pupillary Responses and Eye Movements in Slow-Motion Film Scenes
by David Hammerschmidt and Clemens Wöllner
J. Eye Mov. Res. 2018, 11(2), 1-17; https://doi.org/10.16910/jemr.11.2.10 - 20 May 2018
Cited by 15 | Viewed by 715
Abstract
This study investigated the effects of music and playback speed on arousal and visual perception in slow-motion scenes taken from commercial films. Slow-motion scenes are a ubiquitous film technique and highly popular. Yet the psychological effects of mediated time-stretching compared to real-time motion have not been empirically investigated. We hypothesised that music affects arousal and attentional processes. Furthermore, we assumed that playback speed influences viewers’ visual perception, resulting in a higher number of eye movements and larger gaze dispersion. Thirty-nine participants watched three film excerpts in a repeated-measures design in conditions with or without music and in slow motion vs. adapted real-time motion (both visual-only). Results show that music in slow-motion film scenes leads to higher arousal compared to no music, as indicated by larger pupil diameters in the former. There was no systematic effect of music on visual perception in terms of eye movements. Playback speed influenced visual perception in eye movement parameters such that slow motion resulted in more and shorter fixations as well as more saccades compared to adapted real-time motion. Furthermore, in slow motion there was a higher gaze dispersion and a smaller centre bias, indicating that individuals attended to more detail in slow-motion scenes. Full article

45 pages, 1425 KB  
Review
Brain. Conscious and Unconscious Mechanisms of Cognition, Emotions, and Language
by Leonid Perlovsky and Roman Ilin
Brain Sci. 2012, 2(4), 790-834; https://doi.org/10.3390/brainsci2040790 - 18 Dec 2012
Cited by 19 | Viewed by 14237
Abstract
Conscious and unconscious brain mechanisms, including cognition, emotions and language are considered in this review. The fundamental mechanisms of cognition include interactions between bottom-up and top-down signals. The modeling of these interactions since the 1960s is briefly reviewed, analyzing the ubiquitous difficulty: incomputable combinatorial complexity (CC). Fundamental reasons for CC are related to the Gödel’s difficulties of logic, a most fundamental mathematical result of the 20th century. Many scientists still “believed” in logic because, as the review discusses, logic is related to consciousness; non-logical processes in the brain are unconscious. CC difficulty is overcome in the brain by processes “from vague-unconscious to crisp-conscious” (representations, plans, models, concepts). These processes are modeled by dynamic logic, evolving from vague and unconscious representations toward crisp and conscious thoughts. We discuss experimental proofs and relate dynamic logic to simulators of the perceptual symbol system. “From vague to crisp” explains interactions between cognition and language. Language is mostly conscious, whereas cognition is only rarely so; this clarifies much about the mind that might seem mysterious. All of the above involve emotions of a special kind, aesthetic emotions related to knowledge and to cognitive dissonances. Cognition-language-emotional mechanisms operate throughout the hierarchy of the mind and create all higher mental abilities. The review discusses cognitive functions of the beautiful, sublime, music. Full article
