
An Analysis of Audio Information Streaming in Georg Philipp Telemann’s Sonata in C Major for Recorder and Basso Continuo, Allegro (TWV 41:C2)

by
Adam Rosiński
Faculty of Arts, University of Warmia and Mazury in Olsztyn, 10-007 Olsztyn, Poland
Arts 2025, 14(4), 76; https://doi.org/10.3390/arts14040076
Submission received: 19 April 2025 / Revised: 3 July 2025 / Accepted: 8 July 2025 / Published: 14 July 2025
(This article belongs to the Special Issue Sound, Space, and Creativity in Performing Arts)

Abstract

This paper presents an analysis of G. P. Telemann’s Sonata in C Major for Recorder and Basso Continuo (TWV 41:C2, Allegro), with the aim of investigating the occurrence of perceptual streams. The presence of perceptual streams in musical works helps to organise the sound stimuli received by the listener in a specific manner. This enables each listener to perceive the piece in an individual and distinctive way, granting primacy to selected sounds over others. Directing the listener’s attention to particular elements of the auditory image leads to the formation of specific mental representations, which in turn results in distinctive interpretations of the acoustic stimuli. All of these processes are explored and illustrated in this analysis.

1. Introduction

Sound perception involves two distinct kinds of processing within the human auditory system: sensory and cognitive. The group of sensory processes includes the following phenomena:
  • Receiving sound stimuli from the environment;
  • Converting incoming sounds into neural impulses within the hearing organ;
  • Encoding the physical characteristics of the received acoustic waves;
  • Transmitting neural impulses to the auditory centres located in the cerebral cortex.
The activity of the sensory processes involved in hearing is associated with the physiological state of the human auditory system. In studies within the field of psychoacoustics, which aim to explore the correlations between the physical characteristics of sound and auditory perception, it is assumed that among individuals with otologically normal hearing, the mechanisms underlying these phenomena do not differ significantly.
On the other hand, cognitive processes involve the mental processing of information received from the hearing organ and other senses. This processing takes place within the central nervous system and is based on the reception of signals from the external environment, their memorisation, transformation, and subsequent re-introduction into the environment as behaviour. Cognitive processes include, among others, attention, awareness, perception, memory, thinking, and reasoning (Nęcka et al. 2006). The cognitive construct formed as a result of these processes serves to build a mental image of the received stimuli. Cognitive processes depend on various factors related to an individual’s past experiences with sensory stimuli, as a result of which each person may be characterised by a different sensitivity when exposed to the same auditory material. Consequently, as a result of auditory processes, diverse perceptions of structures and other information may arise in the listener’s mind. An individual receiving an acoustic wave can typically distinguish the sound source, its material, and the acoustic characteristics of the environment, as well as classify sounds by features such as loudness, pitch, timbre, perceived duration, and spatial location.

2. Terminology Basics

In this article, the subject of cognition is understood as the interpretation of acoustic stimuli received by the listener and grouped in a particular manner, forming perceptual streams. Taking into account studies on perceptual stream analysis, this issue can be transposed into a musical context, aiming to capture moments that are significant for both science and music and to confront theoretical assumptions with explicit examples drawn from musical literature. The analysis presented here constitutes an attempt to find evidence of perceptual streams occurring in entire musical pieces, not only in selected short fragments or in artificial test sequences created in laboratory conditions.
Components present in the auditory scene of a given musical piece are distinguished based on sonic differences identified by the listener, which directly influence the reception and interpretation of the music. Perceptual streams formed in this way—whether during a live concert or while listening to a recording—leave a distinctive interpretative mark on the listener’s mind. Depending on how the listener apprehends the sound structures, various shapes and figures may be formed mentally. The principal factors influencing the formation of different perceptual shapes and figures are as follows:
  • The listener’s attention and focus;
  • The particular musical information on which the listener is focused at any given moment;
  • The division of perceived sounds into two or more layers, such as the main melody and harmony (i.e., accompanying or background sounds);
  • The presence of masking sounds that may distract the listener’s attention, potentially causing the main figure in the music to recede into the background, or a background element to emerge as the figure;
  • Shifts of attention towards different perceptual elements in the music, whereby similarity between features may result in the listener’s focus switching and the formation of entirely new perceptual shapes.
The concept of auditory scene analysis was developed through the study of correlations between the acoustic features of sounds and the rules governing the grouping of sounds into common perceptual streams. Among the mechanisms underlying such grouping are the following psychological constructs: similarity, proximity, common fate, closure, and good continuation (Bregman 1990; Shamma and Micheyl 2010; Shamma et al. 2011; Winkler et al. 2009).
Streaming is the process by which sounds reaching the listener are isolated from a larger set into smaller subsets (sequences) that are perceived as holistic perceptual structures, or streams. As a result of this process, one or more perceptual streams can be created mentally. The essence of streaming lies in the existence of a defined system and perceptual rules, which lead to the formation of a sound stream (Bregman 2004; Micheyl et al. 2005; Snyder et al. 2008). Auditory streaming processes can occur both during the perception of musical works and in other sound structures (e.g., speech). Such streaming is associated with the selective function of the listener’s attention, which can be directed in various ways depending on the context or task (Steiger and Bregman 1982).
A perceptual stream refers to a sequence of sounds or a group of auditory impressions arranged in such a way that the listener perceives the given acoustic event as a coherent whole (Bregman 1990). This means that the listener can, for example, recognise and distinguish them as voices in polyphonic musical pieces, or as harmonic structures—differentiating between distinct perceptual streams—thanks to the ability to identify regularities, coherent elements, and features with similar distinguishing characteristics (Moore and Gockel 2012; Snyder et al. 2008). The essence of a perceptual stream lies in extracting specific features from the sounds within a larger structure in order to obtain a holistic perspective or a complete auditory description of a given acoustic event (Bregman et al. 1999).
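The grouping principles described above can be illustrated with a toy model. The sketch below is a minimal, hypothetical illustration of the proximity principle only (the function name, pitch values, and semitone threshold are the author of this edit's assumptions, not part of the article): each incoming pitch is assigned to the stream whose most recent note is nearest, and a new stream is opened when every candidate lies more than a fixed number of semitones away.

```python
def segregate_by_pitch(notes, max_leap=5):
    """Greedy pitch-proximity grouping: assign each note (as a MIDI
    number) to the stream whose most recent note is closest in pitch,
    or open a new stream when every candidate exceeds max_leap
    semitones. A crude stand-in for the Gestalt proximity principle."""
    streams = []
    for note in notes:
        best = None
        best_dist = max_leap + 1  # anything farther starts a new stream
        for stream in streams:
            dist = abs(stream[-1] - note)
            if dist < best_dist:
                best, best_dist = stream, dist
        if best is None:
            streams.append([note])
        else:
            best.append(note)
    return streams

# An alternating high/low sequence, in the spirit of classic streaming
# demonstrations: the high notes gather in one stream, the low in another.
print(segregate_by_pitch([79, 60, 79, 62, 79, 64]))
# → [[79, 79, 79], [60, 62, 64]]
```

Real auditory streaming additionally depends on tempo, timbre, loudness, and attention, which a pitch-only rule like this necessarily ignores.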

3. Equipment Used During the Listening Sessions

The listening sessions were conducted in a private recording studio owned by the author. The reverberation time was measured using the intermittent noise method, with the result for each measurement point representing the average value from seven pauses. The measurements were carried out with an NTi XL2 acoustic analyser operating in T20 mode, with a measuring range up to 130 dB. The sound source used for the measurements was a hexahedral loudspeaker system. The test was performed for four different sound source placements at four control measurement points. The control room of the recording studio, with a volume of approximately 40 m³, is characterised by a medium reverberation time—values for selected sound frequencies are presented in Table 1.
The listening station used by the author comprised the following components:
  • A desktop computer with a Windows 10 64-bit operating system. The computer was assembled from the following parts:
    Processor: AMD Ryzen Threadripper 2990WX, 32 × 3.0 GHz, 64 threads;
    Motherboard: AsRock X399 Taichi;
    RAM: G.Skill TridentZ, 64 GB, DDR4, 3200 MHz, CL 14;
    Power supply: be quiet! Power Zone, 1000 W;
    Graphics card: Sapphire Radeon RX 590 NITRO+, 8 GB, GDDR5.
  • A pair of studio monitors: Sveda Audio Dapo V2, in a d’Appolito configuration, in which the tweeter is placed centrally between a pair of parallel, identical mid-bass speakers with twin audio crossovers (Holmes 2006; Self 2018; Verdults 2019).
  • A subwoofer: Sveda Audio Wombat 15C V2, with a 15-inch aluminium-alloy diaphragm, tuned to the Sveda Audio Dapo V2 studio monitors.
  • Audio interface (serving as the sound card): Steinberg UR824.
  • Cables: Klotz MC5000.
  • Plugs: Neutrik NC3MXX and NC3FXX.
The volume of the musical pieces played was determined during preliminary test listening. The audio path gain was adjusted to a level that provided maximum comfort for the author during the perception of various musical pieces (Rosiński 2021). The maximum loudness in the loudest passages reached 86 phons, while the average loudness was approximately 72 phons. The sample rate and bit depth set on the audio interface during playback were 44.1 kHz and 16-bit, respectively.

4. Listening Method, Listening Sessions, and Audio Material

Due to the COVID-19 pandemic occurring during the research period, various libraries and record archives were closed. In order to overcome this obstacle, the author decided to use YouTube as a source of digital recordings, selecting the piece in the highest quality available, so that it could be shared with the online community:
Georg Philipp Telemann, Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro—available at https://www.youtube.com/watch?v=niLaRDMNmvo, time: 2:30–4:32 (accessed 25 January 2025).
Another motivation guiding the author in this case was to provide access to identical recordings for a large number of people at any time and from any location, thus enabling every reader of this article to compare the data presented here with the exact musical recording analysed by the author and, in doing so, to see the full picture. The prolonged pandemic, combined with variability in live performance and phonographic releases, led to the decision to use this method of freely available distribution in order to standardise and unify the recordings—not only in technical terms (from the perspective of sound engineering), but also with regard to
  • The diverse acoustics of different concert halls;
  • The sonorous inconsistency of various instruments;
  • Differences present in each performance and interpretation of musical works.
Aware of the numerous musical, non-musical, and psychological factors that influence the process of grouping sounds into perceptual streams, the author sought to minimise variables that could affect the research by influencing the formation of perceptual streams, thereby ensuring the most reliable results in the auditory scene analysis.
The listening sessions took place on 18 and 19 February 2022. Each session lasted no more than two hours. Between sessions, the author scheduled two-hour breaks—allowing time for rest and regeneration—to avoid fatigue during subsequent sessions. There were only two listening sessions per day to ensure a consistently high level of focus and attention, thereby reducing the risk of artefacts resulting from tiredness.
Before commencing the analysis, the piece was listened to in its entirety just once (only in the first session) to appreciate the musical work in a broader context. Following this initial listening, the most important information about the piece was noted.
In the second stage of the analysis, the musical work was played again from the beginning, but this time with the option to pause at any moment to record all observations. The author was not permitted to listen to the same fragment again (i.e., to repeat or rewind); playback could only be resumed from the current point. Most of the time, prior to pausing and noting observations from the auditory scene analysis, the author listened to four-bar sections.
In the next phase, this second stage was repeated: the piece was played once more from the beginning, with the option to pause at any time. This enabled the author to supplement the analysis with new observations or to expand the description with details that had not been noticed during earlier analyses.
The final part of the analysis was a repetition of the initial phase: the piece was played in its entirety, from beginning to end, without the possibility of pausing. This allowed the author to observe those perceptual processes that occurred globally throughout the piece, ensuring that the analysis offered a full and accurate picture.

5. Analysis of Georg Philipp Telemann’s Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro

Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro by G. P. Telemann is one of the most interesting musical works for illustrating the various ways in which perceptual streams are formed. In this analysis, only the part of the solo instrument—the recorder—was examined, as the basso continuo is interpreted throughout the entire composition as a single perceptual stream alongside the streams formed by the recorder.
From the anacrusis to the first measure (Figure 1), the listener hears the sequence of sounds (marked with a rectangle) as two perceptual streams. The first stream consists of a repeated high note, G5, while the second stream is formed from the notes lower in pitch than G5.
In the second measure (Figure 1), the listener perceives the sounds as a single stream. It is worth noting that a melody written in a certain way allows the solo instrument alone to generate two or more perceptual streams. Another notable phenomenon in this piece is the formation of specific auditory interpretations that preclude the creation of alternative mental figures or a different acoustic background.
In measures 3 and 4 (Figure 1), two perceptual streams are again perceptible, but this time constructed differently from those in the first measure. The first stream comprises the highest tones (marked with ellipses), which are also accented as the first notes in each group. The second perceptual stream emerges from runs of sixteenth notes that are very close to each other in terms of pitch (Baker et al. 2000; Barlach 2021; Bregman 2002; McGookin and Brewster 2004; Vecera 2000; van der Vlis 2013). This method of writing the melody facilitates the segregation of sounds and the emergence of two streams—one perceived very clearly as the main figure, and the other interpreted as a musical background.
From the fifth to the seventh measure (Figure 1 and Figure 2), the listener can discern yet another way in which two perceptual streams may form. The first note of each group (marked with ellipses), even though it is sometimes higher and sometimes lower than G5 (which forms the second stream), enables the listener’s mind to establish the first perceptual stream fully. This interpretation is the most common because
  • The first note of each sequence is the most accented (Fletcher and Munson 1937; Rogers and Bregman 1998);
  • The rapid repetition of G5 strongly segregates the sound material;
  • The quick succession of G5 notes makes it easier for the listener to distinguish the notes of different pitches.
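The configuration just described can be sketched in code. The pitch sequence below is a simplified, hypothetical stand-in for measures 5–7 (not a transcription): the pedal tone G5 repeats while the melody notes fall sometimes above and sometimes below it, so pitch proximity alone cannot separate the streams; the toy rule here segregates on the repetition of the pedal tone itself.

```python
# Hypothetical simplification of the figure in measures 5-7: the
# repeated pedal tone G5 (MIDI note 79) alternates with melody notes
# lying both above and below it. Filtering on the repeated tone
# mirrors the two streams the analysis describes.
G5 = 79
sequence = [81, 79, 77, 79, 76, 79, 81, 79]

pedal_stream = [p for p in sequence if p == G5]    # the repeated G5
melody_stream = [p for p in sequence if p != G5]   # accented first notes

print(pedal_stream)   # → [79, 79, 79, 79]
print(melody_stream)  # → [81, 77, 76, 81]
```

This is, of course, only a schematic: in the listener's mind the segregation is driven by accent and rapid repetition, not by an explicit pitch test.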
In the eighth and ninth measures (Figure 2), the sounds marked with ellipses belong to one stream, as they are accented and represent the highest tones within each group (Cohen and Dubnov 1997; Humięcka-Jakubowska 2006). All other sounds, characterised by lower pitch and volume, are integrated into the second perceptual stream.
In measure 10 and the first half of the eleventh measure (Figure 2), the listener perceives only one perceptual stream.
In measures 11 and 12 (notes marked with a rectangle in Figure 2), the sequence splits into two distinct perceptual streams: the first stream consists of G5 notes, while the second comprises tones with a pitch lower than G5.
From measure 13 to the repeat sign (Figure 2), all notes form a single, coherent perceptual stream.
From the repeat sign to the end of measure 15 (Figure 2), the listener can once again focus on two perceptual streams: the first is formed by the repeating D5 tone (marked with a rectangle), while the second stream consists of the notes in the sequence that are positioned lower on the pitch scale compared to D5.
In measure 16 (Figure 3), the listener can once again integrate the heard sounds into a single perceptual stream, while measures 17 and 18 (together with the ellipses in Figure 3) are interpreted in the same way as the previously described measures 8 and 9 (Figure 2).
In measures 19 to 22 (Figure 3), despite the large number of notes and significant changes in the intervals between individual tones, the observed acoustic events allow only one perceptual stream to form in the listener’s mind, for the following reasons:
  • A greater diversity of tones with dissimilar pitches is required for the sequence to split into two perceptual streams.
  • In these measures, there is a constant continuation of the melodic line (at times descending, at others ascending), yet the listener’s mind still combines the incoming stimuli into a single stream (Jackendoff and Lerdahl 2006; Ockelford 2004; Palmer 2002; Shepard and Levitin 2002).
  • The larger intervals that occur between the tones are incidental in these measures; therefore, a second stream cannot form in the listener’s mind, as there are no sounds that would continue the pattern necessary for the formation of a perceptual stream.
In measures 23 to 26 (Figure 3 and Figure 4, with the ellipses), the listener receives and interprets the acoustic events in the same way as in measures 5 to 7 (Figure 1 and Figure 2).
From the second half of measure 26 to measure 27, the formation of only a single perceptual stream can once again be observed.
In measure 28 and the first half of measure 29 (Figure 4), the notes marked with a rectangle constitute the first stream, while the remaining tones form a distinct second perceptual stream.
From the latter part of measure 29 to the end of the movement, the listener once more combines all the heard sounds into a single cohesive perceptual stream.
This movement is particularly interesting from the perspective of perception for the following reasons:
  • The composer intertwines measures in which the formation of two perceptual streams occurs with those in which only one stream is present. This can produce intriguing perceptual impressions in the listener’s mind.
  • Several different types of sound segregation into two streams can be observed, even though the musical material itself is not particularly complex.
  • It demonstrates how a melody within a single voice can be constructed—using specific compositional techniques—to deliberately cause either the “tearing apart” of sounds (segregation into two perceptual streams) or their “integration” into a single perceptual stream.
  • This material serves as an excellent example for illustrating psychoacoustic phenomena related to auditory scene analysis, namely, the segregation and integration of sounds. It may provide listeners with a fundamental understanding of this perceptual phenomenon, making it a valuable educational resource for students or individuals undergoing psychoacoustic tests in the future.

6. Discussion of Results

The analysis presented here, based on listening to recordings in controlled acoustic conditions (a factor crucial for comprehensive understanding), concerns direct cognition as it relates to the arrangement of musical symbols—namely, score notations. Musical works considered within the framework of perceptual streaming present a unique phenomenon for this type of analysis, as they reveal certain listener potentials that warrant deeper understanding in order to be consciously utilised.
Laboratory tests, currently conducted by many researchers, are often carried out using completely different types of sound material—typically simple or complex, artificially generated tones. This poses a problem due to the markedly different timbre of acoustic instruments, which, through their vibrations, produce not only a primary tone but also a full spectrum of additional sounds, known in music as harmonics (Mizgalski 1959). Although psychoacoustics can generate such sound spectra, they are not highly relevant to music, as artificially generated tones lack the sonic qualities of acoustic instruments. A further issue, rarely addressed in psychoacoustic experiments, is the complexity arising from the diversity of instrument timbres, which are directly connected with dynamic and pitch variations in performance.
It should be emphasised that musical perception is a complex and multidimensional phenomenon. It is shaped by a diverse array of factors that interact at both individual and cultural levels. Among these, musical education (Jain et al. 2022; Rosiński 2020, 2021, 2023) plays a pivotal role. It not only enhances the listener’s auditory discrimination and analytical skills but also informs their expectations, habits of attention, and strategies for parsing musical textures. Similarly, innate or acquired auditory sensitivity—arising from physiological, psychological, or experiential influences—determines the threshold at which specific musical features are detected and interpreted.
Cultural background is equally significant, as it frames the aesthetic values, listening conventions, and cognitive schemata through which sound is organised and rendered meaningful. For example, listeners raised in different musical traditions may focus on distinct musical parameters—such as melody, harmony, rhythm, or timbre—and may be attuned to musical structures, scales, or expressive devices that are not universal, but specific to their own cultural milieu. This diversity underpins the remarkable variability in how music is perceived, interpreted, and emotionally experienced (Deutsch 1974, 1975, 1983, 1992).
Furthermore, various theoretical frameworks and psychoacoustic models offer alternative perspectives on the organisation and processing of auditory information. Some emphasise bottom-up mechanisms—such as the grouping of sounds based on proximity, similarity, or temporal coherence—while others foreground top-down influences, including learned expectations, memory, and attentional control. Recent research also underscores the importance of context, both musical and extramusical, in shaping the salience and perceptual integration of auditory streams.
Given these complexities, it is unsurprising that different listeners, even when exposed to identical musical material, may perceive and interpret auditory phenomena in ways that reflect their unique constellation of attributes, experiences, and cultural reference points. Therefore, the analytical method presented in this study constitutes only one of many possible approaches to investigating music perception. While it offers valuable insights into the processes of auditory scene analysis and perceptual streaming, it should be regarded as part of a broader framework of methodologies, each capable of illuminating different facets of the intricate interplay between sound, mind, and culture.

7. Conclusions

Identifying the perceptual patterns that listeners follow represents a significant research challenge not only for music theorists, composers, and sound engineers but also for musicologists, psychologists, and psychoacousticians. The interpretation of sounds received by the listener is of great importance for both the theoretical and practical analysis of a musical work and its artistic performance. This study demonstrates that the formation of perceptual streams is not confined to laboratory conditions; rather, such phenomena naturally occur in music and profoundly influence the listener’s interpretation of a piece.
Hearing alone does not provide the listener with much useful information; however, conscious sound perception through active listening yields a wealth of relevant data. For example, it enables us to determine whether a sound originates from nature, human activity, a warning signal, a psychoacoustic experiment, or, conversely, constitutes an aesthetic message encountered when listening to a musical work. The ability to recognise perceptual streams in music is innate to every human being (regardless of musical education), although the degree of its development depends on appropriate auditory training.
Developing auditory skills is a complex process that requires specialised and intentional training, enabling the listener to detect subtle modifications and, by recognising these variations, to perceive differences. Continuous training in auditory mental processing fosters a degree of neural plasticity (Kilian and Cichocka 2011; Leclerc et al. 2000; Pantev and Herholz 2011), which in turn promotes the development of sound perception across diverse contexts and enhances sensitivity to both common and distinctive sound elements. For musicians, the ability to observe and recognise various changes within musical works is particularly important and indeed indispensable.
It is also worth noting that acquiring new skills related to the processing of perceptual streams can have non-musical benefits (Parbery-Clark et al. 2009a, 2009b). Proper and conscious perception of sounds is of great significance in everyday life. Sonification—the science of the auditory representation of information and the effective decoding and interpretation of sound (Chartrand and Belin 2006; Strait et al. 2012)—has developed rapidly since the 1990s. Opportunities to improve auditory perception in this area may assist people in processing the multitude of sound stimuli encountered daily in the environment (Kraus and Chandrasekaran 2010; Parbery-Clark et al. 2012), as hearing holds considerable potential in this regard. The manner in which speech and environmental sounds—whether originating from nature or human activity—are processed and encoded (Fine and Moore 1993; Jacobson et al. 2003; Meyer et al. 2011; Oxenham et al. 2003; Rammsayer and Altenmüller 2006; Schulze et al. 2011) frequently leads us to rely on our auditory system even subconsciously (Bregman 1990; Humięcka-Jakubowska 2006; Załazińska 2016). The greater the quantity of information we acquire within a given auditory range, the more we are able not only to understand, but also to hear—by becoming familiar with the mechanisms that influence the listener, we may consciously strive to achieve our own desired perceptual effects. Both this analysis and related research demonstrate that the human auditory system is an exceptionally precise organ, although its functioning is not yet fully understood.
I hope that all musicians, equipped with knowledge of the laws governing sound perception, will be able to realise their musical intentions in a more conscious, and—most importantly—effective and creative manner, as they begin to pay attention to aspects of mental sound processing previously unknown to them.
This paper does not claim to offer a final definition of the phenomenon of perceptual streams or their interpretation by listeners; rather, it represents an attempt to gather and present evidence of their existence in music. The analysis presented here certainly does not include all references to the various forms of perceptual streams identified in psychology, acoustics, music theory, and musicology—nor is that its intention. It is important to remember that music is not merely sound, but also emotion and spiritual experience, expressed through interpretation. When multiple sounds resonate simultaneously and overlap, they give rise to diverse acoustic phenomena. These phenomena constitute the distinctive sonoristic qualities of a given musical work, which in turn evoke various perceptual impressions—impressions that must be discovered individually by each listener.
When sound is considered solely in the context of physics (acoustics) or psychology, it holds little value in the artistic domain, as the languages of physics and psychology are quite removed from the language of music itself (Rosiński 2020). Therefore, a multidimensional approach to the cognition of a musical work appears to be the most effective way to achieve a comprehensive understanding of any given piece.
The method of auditory analysis presented here demonstrates how much remains to be discovered and understood in this field. Analysts frequently seek to answer the fundamental question: how do we perceive various sound structures? A certain transience of observation or thought is particularly characteristic of music, as such issues do not arise to the same extent in, for example, the fine arts. The impressions a listener derives from experiencing music can be at least partially understood through an elementary study of the reception of perceptual streams, which—under various circumstances and depending on each listener’s unique conditions (such as focus or the order in which musical components are interpreted at a given moment) (Humięcka-Jakubowska 2006)—can form appropriate musical structures (Bregman and Ahad 1996; http://webpages.mcgill.ca/staff/Group2/abregm1/web/downloadstoc.htm, accessed 12 April 2025).
Interfering with sound material through in-depth auditory scene analysis enables the discovery of various psychoacoustic phenomena experienced by listeners, regardless of their specific approach to cognition. Research in this area may significantly broaden the boundaries of understanding and deepen knowledge of human auditory skills and abilities as they manifest during the reception of musical works. The perceptual phenomena previously described, when introduced into the field of music theory, allow musical works to be explored from new and previously uncharted research perspectives.
Musicians, sound engineers, and researchers can benefit from understanding how perceptual streams function, as it enables a more nuanced approach to sound design, musical performance, and the study of auditory perception. By dissecting how listeners separate and integrate various sound elements, this analysis paves the way for more sophisticated auditory scene designs that can enhance the listening experience—whether in a concert hall, a recording studio, or everyday environments. Furthermore, this research highlights the importance of auditory training, suggesting that the ability of listeners to discern and appreciate complex musical textures can be cultivated, thereby enhancing their overall auditory perception.
These findings also open up broader implications beyond music. Understanding the cognitive and perceptual mechanisms involved in auditory streaming can contribute to improved designs in fields such as auditory warning systems, hearing aids, and educational tools that employ sound to facilitate learning. By recognising the patterns and processes inherent in auditory scene analysis, researchers can develop strategies to improve the transmission of information through sound, whether for purposes of safety, communication, or entertainment.
Ultimately, this study reaffirms that music is a dynamic and complex interplay of perceptual streams that engage listeners on multiple levels. The ability of listeners to navigate these streams is a testament both to the sophistication of human auditory processing and to the skill of composers, who intuitively understand and manipulate these perceptual phenomena. While much has already been uncovered, a vast expanse remains to be explored. Further research could investigate how different musical styles, cultural backgrounds, and auditory environments affect the formation and interpretation of perceptual streams, offering new insights into the universality and variability of musical experience.
In conclusion, the perception of auditory streams remains a rich area of study at the intersection of music theory, psychoacoustics, cognitive science, and auditory engineering. By exploring how listeners naturally organise sound, we gain not only a deeper appreciation of the listening experience but also valuable tools for enhancing musical communication, creativity, and understanding.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Baker, Kevin L., Sheila M. Williams, and Roderick I. Nicolson. 2000. Evaluating Frequency Proximity in Stream Segregation. Perception & Psychophysics 62: 81–88. [Google Scholar] [CrossRef]
  2. Barlach, Laura. 2021. The Mindset of Innovation: Contributions of Cognitive and Social Psychology. Journal of Psychological Research 3: 16–24. [Google Scholar] [CrossRef]
  3. Bregman, Albert S. 1990. Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: The MIT Press. [Google Scholar] [CrossRef]
  4. Bregman, Albert S. 2002. The auditory scene. In Foundations of Cognitive Psychology. Edited by Daniel J. Levitin. Cambridge, MA: MIT Press, pp. 213–48. [Google Scholar]
  5. Bregman, Albert S. 2004. Auditory Scene Analysis. In International Encyclopedia of the Social and Behavioral Sciences. Edited by Neil J. Smelser and Paul B. Baltes. Amsterdam: Elsevier, pp. 940–42. [Google Scholar]
  6. Bregman, Albert S., and Pierre A. Ahad. 1996. Demonstrations of Auditory Scene Analysis: The Perceptual Organization of Sound. Available online: http://webpages.mcgill.ca/staff/Group2/abregm1/web/downloadstoc.htm (accessed on 12 April 2025).
  7. Bregman, Albert S., Connie Colantonio, and Pierre A. Ahad. 1999. Is a common grouping mechanism involved in the phenomena of illusory continuity and stream segregation? Perception & Psychophysics 61: 195–205. [Google Scholar] [CrossRef]
  8. Chartrand, Jan-Pierre, and Pascal Belin. 2006. Superior voice timbre processing in musicians. Neuroscience Letters 405: 164–67. [Google Scholar] [CrossRef]
  9. Cohen, Dalia, and Shlomo Dubnov. 1997. Gestalt phenomena in musical texture. In Music, Gestalt, and Computing. Studies in Cognitive and Systematic Musicology. Edited by Marc Leman. Berlin and Heidelberg: Springer, pp. 386–405. [Google Scholar] [CrossRef]
  10. Collection Musicale en Format Numérique. n.d. Georg Philipp Telemann—Sonate C-Dur. Available online: https://s9.imslp.org/files/imglnks/usimg/1/10/IMSLP330401-PMLP41848-Telemann_GP_-_Sonate_C-Dur,_TWV_41,C2_-_EN2014-105.PDF (accessed on 25 January 2025).
  11. Deutsch, Diana. 1974. An Auditory Illusion. Journal of the Acoustical Society of America 55: 18–19. [Google Scholar] [CrossRef]
  12. Deutsch, Diana. 1975. Musical Illusions. Scientific American 233: 92–105. [Google Scholar] [CrossRef]
  13. Deutsch, Diana. 1983. The Octave Illusion in Relation to Handedness and Familial Handedness Background. Neuropsychologia 21: 290–92. [Google Scholar] [CrossRef]
  14. Deutsch, Diana. 1992. Paradoxes of Musical Pitch. Scientific American 267: 88–95. [Google Scholar] [CrossRef]
  15. Fine, Philip A., and Brian C. J. Moore. 1993. Frequency analysis and musical ability. Music Perception 11: 39–54. [Google Scholar] [CrossRef]
  16. Fletcher, Harvey, and Wilden A. Munson. 1937. Relation between loudness and masking. The Journal of the Acoustical Society of America 9: 78. [Google Scholar] [CrossRef]
  17. Holmes, Thom. 2006. The Routledge Guide to Music Technology. New York: Routledge. [Google Scholar] [CrossRef]
  18. Humięcka-Jakubowska, Justyna. 2006. Scena Słuchowa Muzyki Dwudziestowiecznej. Poznań: Rhythmos. [Google Scholar]
  19. Jackendoff, Ray, and Fred Lerdahl. 2006. The Capacity for Music: What is it, and What’s Special About it? Cognition 100: 33–72. [Google Scholar] [CrossRef]
  20. Jacobson, Lorna S., Lola L. Cuddy, and Andrea R. Kilgour. 2003. Time tagging: A key to musicians’ superior memory. Music Perception 20: 307–13. [Google Scholar] [CrossRef]
  21. Jain, Saransh, Nuggehalli Puttareviyah Nataraja, and Vijaya Kumar Narne. 2022. The Effect of Subjective Fatigue on Auditory Processing in Musicians and Nonmusicians. Music Perception 39: 309–19. [Google Scholar] [CrossRef]
  22. Kilian, Marlena, and Małgorzata Cichocka. 2011. Muzykoterapia w rehabilitacji dzieci niewidomych i słabo widzących—Założenia teoretyczne (część 1). Szkoła Specjalna 4: 245–57. [Google Scholar]
  23. Kraus, Nina, and Bharath Chandrasekaran. 2010. Music training for the development of auditory skills. Nature Reviews Neuroscience 11: 599–605. [Google Scholar] [CrossRef] [PubMed]
  24. Leclerc, Charles, Dave Saint-Amour, Marc E. Lavoie, Maryse Lassonde, and Franco Lepore. 2000. Brain functional reorganization in early blind humans revealed by auditory event-related potentials. NeuroReport 11: 545–50. [Google Scholar] [CrossRef]
  25. McGookin, David K., and Stephen A. Brewster. 2004. Understanding concurrent earcons: Applying auditory scene analysis principles to concurrent earcon recognition. ACM Transactions on Applied Perception 1: 130–55. [Google Scholar] [CrossRef]
  26. Meyer, Martin, Stefan Elmer, Maya Ringli, Mathias S. Oechslin, Simon Baumann, and Lutz Jancke. 2011. Long-term exposure to music enhances the sensitivity of the auditory system in children. European Journal of Neuroscience 34: 755–65. [Google Scholar] [CrossRef]
  27. Micheyl, Christophe, Robert P. Carlyon, Rhodri Cusack, and Brian C. J. Moore. 2005. Performance measures of auditory organization. In Physiology, Psychoacoustics, and Models. Edited by Daniel Pressnitzer, Alain de Cheveigne, Stephen McAdams and Lionel Collet. New York: Springer, pp. 202–10. [Google Scholar] [CrossRef]
  28. Mizgalski, Gerard. 1959. Podręczna Encyklopedia Muzyki Kościelnej. Poznań, Warszawa and Lublin: Księgarnia Św. Wojciecha. [Google Scholar]
  29. Moore, Brian C. J., and Hedwig E. Gockel. 2012. Properties of auditory stream formation. Philosophical Transactions of the Royal Society 367: 919–31. [Google Scholar] [CrossRef]
  30. Nęcka, Edward, Jarosław Orzechowski, and Błażej Szymura. 2006. Psychologia Poznawcza. Warszawa: Akademica Wydawnictwo SWPS/PWN. [Google Scholar]
  31. Ockelford, Adam. 2004. On Similarity, Derivation and The Cognition of Musical Structure. Psychology of Music 32: 23–75. [Google Scholar] [CrossRef]
  32. Oxenham, Andrew J., Brian J. Fligor, Christine R. Mason, and Gerald Kidd, Jr. 2003. Informational masking and musical training. Journal of the Acoustical Society of America 114: 1543–49. [Google Scholar] [CrossRef] [PubMed]
  33. Palmer, Stephen E. 2002. Organizing objects and scenes. In Foundations of Cognitive Psychology. Edited by Daniel J. Levitin. Cambridge, MA: MIT Press, pp. 189–211. [Google Scholar]
  34. Pantev, Christo, and Sybille C. Herholz. 2011. Plasticity of the human auditory cortex related to musical training. Neuroscience and Biobehavioral Reviews 35: 2140–54. [Google Scholar] [CrossRef]
  35. Parbery-Clark, Alexandra, Adam Tierney, Dana L. Strait, and Nina Kraus. 2012. Musicians have fine-tuned neural distinction of speech syllables. Neuroscience 219: 111–19. [Google Scholar] [CrossRef] [PubMed]
  36. Parbery-Clark, Alexandra, Erika Skoe, and Nina Kraus. 2009a. Musical experience limits the degradative effects of background noise on the neural processing of sound. Journal of Neuroscience 29: 14100–7. [Google Scholar] [CrossRef]
  37. Parbery-Clark, Alexandra, Erika Skoe, Carrie Lam, and Nina Kraus. 2009b. Musician enhancement for speech-in-noise. Ear and Hearing 30: 653–61. [Google Scholar] [CrossRef]
  38. Rammsayer, Thomas, and Eckart Altenmüller. 2006. Temporal information processing in musicians and nonmusicians. Music Perception 24: 37–48. [Google Scholar] [CrossRef]
  39. Rogers, Wendy L., and Albert S. Bregman. 1998. Cumulation of the tendency to segregate auditory streams: Resetting by changes in location and loudness. Perception & Psychophysics 60: 1216–27. [Google Scholar] [CrossRef]
  40. Rosiński, Adam. 2020. Perception of Sound via Auditory Image Analysis, Made in the Conceptual Context of the Philosophy of Music. In Humanitarian Corpus. Kyiv: M.P. Drahomanov National University of Pedagogy Academic Press; Vinnytsia: TVORY, vol. 2, issue 35, pp. 101–7. [Google Scholar]
  41. Rosiński, Adam. 2021. Wpływ wykształcenia muzycznego na grupowanie dźwięków sekwencji ABA-ABA w rytm galopujący. In Przestrzenie Akustyki. Professional Acoustics. Edited by Adam Rosiński. Olsztyn: Wydawnictwo Uniwersytetu Warmińsko-Mazurskiego w Olsztynie, pp. 53–70. [Google Scholar]
  42. Rosiński, Adam. 2023. Influence of Music Education and Pitch Scales on the Grouping of the AB-AB Sequence Sounds. In Przestrzenie Akustyki. Professional Acoustics. Edited by Adam Rosiński. Olsztyn: Wydawnictwo Uniwersytetu Warmińsko-Mazurskiego w Olsztynie, vol. 2, pp. 53–74. [Google Scholar]
  43. Schulze, Katrin, Stefan Zysset, Karsten Mueller, Angela D. Friederici, and Stefan Koelsch. 2011. Neuroarchitecture of verbal and tonal working memory in nonmusicians and musicians. Human Brain Mapping 32: 771–83. [Google Scholar] [CrossRef] [PubMed]
  44. Self, Douglas. 2018. The Design of Active Crossovers, 2nd ed. New York: Routledge. [Google Scholar] [CrossRef]
  45. Shamma, Shihab A., and Christophe Micheyl. 2010. Behind the scenes of auditory perception. Current Opinion in Neurobiology 20: 361–66. [Google Scholar] [CrossRef]
  46. Shamma, Shihab A., Mounya Elhilali, and Christophe Micheyl. 2011. Temporal coherence and attention in auditory scene analysis. Trends in Neurosciences 34: 114–23. [Google Scholar] [CrossRef]
  47. Shepard, Roger N., and Daniel J. Levitin. 2002. Cognitive psychology and music. In Foundations of Cognitive Psychology. Edited by Daniel J. Levitin. Cambridge: MIT Press, pp. 503–14. [Google Scholar]
  48. Snyder, Joel S., Suh-Kyung Lee, Olivia L. Carter, Erin E. Hannon, and Claude Alain. 2008. Effects of context on auditory stream segregation. Journal of Experimental Psychology: Human Perception & Performance 34: 1007–16. [Google Scholar] [CrossRef]
  49. Steiger, Howard, and Albert S. Bregman. 1982. Negating the effects of binaural cues: Competition between auditory streaming and contralateral induction. Journal of Experimental Psychology: Human Perception and Performance 8: 602–13. [Google Scholar] [CrossRef] [PubMed]
  50. Strait, Dana L., Alexandra Parbery-Clark, Emily Hittner, and Nina Kraus. 2012. Musical training during early childhood enhances the neural encoding of speech in noise. Brain & Language 123: 191–201. [Google Scholar] [CrossRef]
  51. van der Vlis, Bram. 2013. Semantic Connections. Explorations, Theory and a Framework for Design. Eindhoven: Technical University Eindhoven. Available online: https://pure.tue.nl/ws/portalfiles/portal/4040718/762658.pdf (accessed on 10 April 2025).
  52. Vecera, Shaun P. 2000. Toward a biased competition account of object-based segregation and attention. Brain and Mind 1: 353–84. [Google Scholar] [CrossRef]
  53. Verdult, Vincent. 2019. Optimal Audio and Video Reproduction at Home: Improving the Listening and Viewing Experience. New York: Routledge. [Google Scholar] [CrossRef]
  54. Winkler, István, Susan L. Denham, and Israel Nelken. 2009. Modeling the Auditory Scene: Predictive Regularity Representations and Perceptual Objects. Trends in Cognitive Sciences 13: 532–40. [Google Scholar] [CrossRef]
  55. Załazińska, Aneta. 2016. Obraz, Słowo, Gest. Kraków: Wydawnictwo Uniwersytetu Jagiellońskiego. [Google Scholar]
Figure 1. Georg Philipp Telemann, Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro, mm. 1–7. (Collection Musicale en Format Numérique n.d.).
Figure 2. Georg Philipp Telemann, Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro, mm. 8–17. (Collection Musicale en Format Numérique n.d.).
Figure 3. Georg Philipp Telemann, Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro, mm. 16–24. (Collection Musicale en Format Numérique n.d.).
Figure 4. Georg Philipp Telemann, Sonata in C Major for Recorder and Basso Continuo, TWV 41:C2, Allegro, mm. 25–30. (Collection Musicale en Format Numérique n.d.).
Table 1. Average reverberation time values [s] for the following frequencies: 63 Hz, 125 Hz, 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, and 8000 Hz.

Sound Source Location | 63 Hz | 125 Hz | 250 Hz | 500 Hz | 1000 Hz | 2000 Hz | 4000 Hz | 8000 Hz
1 | 0.50 | 0.45 | 0.52 | 0.48 | 0.42 | 0.44 | 0.47 | 0.45
2 | 0.46 | 0.46 | 0.51 | 0.48 | 0.42 | 0.45 | 0.45 | 0.42
3 | 0.50 | 0.48 | 0.52 | 0.45 | 0.44 | 0.43 | 0.42 | 0.43
4 | 0.52 | 0.45 | 0.53 | 0.45 | 0.47 | 0.50 | 0.52 | 0.54
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
