Article

Auditory Processing in Musicians, a Cross-Sectional Study, as a Basis for Auditory Training Optimization

by Maria Kyrtsoudi 1,*, Christos Sidiras 1, Georgios Papadelis 2 and Vasiliki Maria Iliadou 1

1 Clinical Psychoacoustics Laboratory, 3rd Psychiatric Department, Neurosciences Sector, Medical School, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
2 School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 57001 Thermi, Greece
* Author to whom correspondence should be addressed.
Healthcare 2023, 11(14), 2027; https://doi.org/10.3390/healthcare11142027
Submission received: 5 May 2023 / Revised: 26 June 2023 / Accepted: 12 July 2023 / Published: 14 July 2023
(This article belongs to the Special Issue Auditory Processing Disorder: A Forgotten Hearing Impairment)

Abstract

Musicians are reported to have enhanced auditory processing. This study aimed to assess auditory perception in Greek musicians with respect to their musical specialization and to compare their auditory processing with that of non-musicians. The auditory processing elements evaluated were speech recognition in babble, rhythmic advantage in speech recognition, short-term working memory, temporal resolution, and frequency discrimination threshold detection. Each group consisted of 12 participants: the three experimental groups comprised Western classical musicians, Byzantine chanters, and percussionists, and the control group comprised non-musicians. The results revealed: (i) a rhythmic advantage for word recognition in noise for classical musicians (M = 12.42) compared to Byzantine musicians (M = 9.83), as well as for musicians compared to non-musicians (U = 120.50, p = 0.019), (ii) a better frequency discrimination threshold for Byzantine musicians (M = 3.17, p = 0.002) compared to the other two musician groups in the 2000 Hz region, and (iii) significantly better working memory for musicians (U = 123.00, p = 0.025) compared to non-musicians. Musical training enhances elements of auditory processing and may be used as an additional rehabilitation approach during auditory training, focusing on specific types of music for specific auditory processing deficits.

1. Introduction

Current research provides evidence of enhanced auditory processing in musicians compared to non-musicians. Capitalizing on this neuroplasticity-based improvement may lead to more focused auditory training for individuals with Auditory Processing Disorder (APD), with the aim of better results and faster rehabilitation. Neuroplasticity, in this case, is the nervous system adaptation resulting from an active response to auditory stimuli. It involves connectivity changes for better performance, especially in related tasks [1]. Hearing is a prerequisite for communication, work, and learning for the average person, as well as an essential sense for every musician. Evaluating hearing with the gold-standard pure-tone audiometry may miss aspects of hearing that are important for everyday life [2]. An audiological evaluation may include speech audiometry as well as tympanometry, stapedial reflexes, otoacoustic emissions, and auditory brainstem responses, depending on the symptoms and medical history of a given patient. Communication through the auditory modality needs intact temporal processing, speech in noise perception, working memory, and frequency discrimination [3,4]. Auditory processing happens at the level of the central auditory nervous system. Hearing (i.e., hearing sensitivity and auditory processing) contributes to the formation of cognition, and cognition contributes to hearing [4,5]. The superior auditory processing performance of musicians vs. non-musicians is explained by the enhanced use and training of their hearing sense, emotion, and listening skills [6]. Musical training goes beyond auditory training, extending to reading complex symbols and translating them into motor activity [7]. Of interest, recent research shows that frequency precision is more strongly correlated with musical sophistication than with cognition [8].
The perception of music and speech is thought to be distinct, although the two share many acoustic and cognitive characteristics [9]. Pitch, timing, and timbre cues may be considered commonalities for auditory information transfer [10]. Memory and attention are cognitive skills required for both music and speech processing. Pitch is the psychoacoustic analogue of the frequency of a sound. Timing refers to specific turning points in the sound (for example, its onset and offset), and timbre is multidimensional, including spectral and temporal features. Musicians’ superior auditory processing is attributed to enhanced accuracy of neural sound encoding [9,11,12,13] as well as better cognitive function [14,15]. Musical practice embraces the experience of specific sound components as well as their joint integration during performance. Extracting meaning from a complex auditory scene may be a skill transferable to tracking a talker’s voice in a noisy environment [16].
Musicians are at an advantage in processing the pitch, timing, and timbre of music compared to non-musicians [17]. They demonstrate strengthened neural encoding of the timbre of their own instrument [18,19,20], but also show enhancements in processing speech [9,21,22,23,24,25] and non-verbal communication sounds [26]. Musical experience promotes a more accurate perception of meaningful sounds in communication contexts other than musical ones [9,12,23,27]. Music training is reported to change brain areas in a specific way that may be predicted by the performance requirements of the training instrument [28]. Musicians’ perceptual skills are influenced by the style of music they play [29,30].
Auditory processing [31] consists of mechanisms that analyze, preserve, organize, modify, refine, and interpret information from the auditory signal. The skills that support these mechanisms are auditory discrimination, temporal processing, and binaural processing, known as auditory processing elements. Temporal processing refers to auditory pattern recognition and temporal aspects of audition, divided into four subcomponents: temporal integration, temporal resolution/discrimination (e.g., gap detection), temporal ordering, and temporal masking [32]. Binaural processing includes sound localization and lateralization and auditory performance with challenging or degraded acoustic signals (including dichotic listening) [33]. Auditory discrimination involves the perception of acoustic stimuli in very rapid succession, requiring accurate transmission of information to the brain [34,35]. These processes may affect phoneme discrimination, speech in noise comprehension, duration discrimination, rhythm perception, and prosodic distinction [36,37]. Temporal resolution, defined as the shortest period over which the ear can discriminate two signals [38], may be linked to language acquisition and cognition in both children [39,40,41,42] and adults [43,44,45,46].
The American Speech-Language-Hearing Association (ASHA) uses the term Central Auditory Processing Disorder (CAPD) to refer to deficits in the neural processing, including bottom-up and top-down neural connectivity [47], of auditory information in the Central Auditory Nervous System (CANS) that are not a consequence of cognition or higher-order language [33]. Deficits in auditory information processing in the central nervous system (CNS) are demonstrated by poor performance in one or more elements of auditory processing [48]. (C)APD may coexist with, but is not derived from, dysfunction in other modalities. Despite the absence of any substantial audiometric findings, poor hearing and auditory comprehension are expressed in some cases of CAPD. Moreover, (C)APD can be associated with, co-exist with, or lead to difficulties in speech, language, attention, social, learning (e.g., spelling, reading), and developmental functions [33,49]. In the International Statistical Classification of Diseases and Related Health Problems, 11th edition (ICD-11), auditory processing disorder (APD) is classified under code AB5Y as a hearing impairment. (C)APD affects both children and adults, including the elderly [50], and it is linked to functional disorders beyond the cochlea [51,52]. According to the WHO [49], prevalence estimates of APD in children range from 2% to 10%, and it can affect psychosocial development, academic achievement, social participation, and career opportunities.

1.1. Speech Perception in Noise

Speech perception in noise is at the core of auditory processing, as it is the most easily explained test with a real-life counterpart. The temporal elements required to perceive speech may be similar to those needed for music, with rhythm thought to stand as a bridge between speech and music [52]. Highly trained musicians have been reported in some studies to have superior performance on different measures of speech in noise [22,52,53,54], although this advantage is not always present [55,56,57].
Consolidating the evidence for improved speech in noise perception in musicians may have rehabilitation implications for individuals with hearing impairment [55]. Research outcomes reveal that rhythm perception benefits are present at different levels of speech, from words to sentences [58,59]. Compared with vocalists, percussionists were found to perform relatively better at the sentence-in-noise level than at the word-in-noise level, while significantly outperforming non-musicians [52]. There is limited research evaluating speech perception in noise among musicians from different musical styles.

1.2. Temporal Resolution

Auditory temporal processing is the processing of changes in durational elements within a specific time interval [50]. The ability of the auditory system to respond to rapid changes over time is a component of temporal processing called temporal resolution, which is linked to the perception of stop consonants during running speech [37,60].
Temporal processes are necessary for auditory processing and for the perception of rhythm, pitch, and duration, as well as for separating foreground from background [3,36]. Chermak and Musiek [37] highlighted the role of temporal processing across a range of language processing skills, from phonemic to prosodic distinctions and ambiguity resolution. Temporal resolution underlies the discrimination of voiced from unvoiced stop consonants [61] and is clinically evaluated using the Gaps-In-Noise (GIN) or Random Gap Detection Test (RGDT) [62]. Evaluating an individual’s ability to perceive a millisecond-scale gap in noise or between two pure tones provides information on possible deficits in temporal resolution and can lead to better shaping of rehabilitation [50]. Older adults are generally found to have poorer (longer) gap thresholds than younger adults [5].
Early exposure to frequent music training over years improves timing ability across sensory modalities [63]. Musicians present with better temporal resolution [64,65,66,67,68]. Musicians of different instruments and styles were found to have superior timing abilities compared to non-musicians [65,69]. Longer daily training in music leads to a better gap detection threshold [70]. Neuroplasticity resulting from music training gives children enhanced temporal resolution comparable to that of adults [66]. To our knowledge, no research publications exist evaluating possible differences in temporal resolution across musicians from different musical styles.

1.3. Working Memory

Auditory and visual memory skills are enhanced in musicians and linked with early, frequent, and formal musical training [4,22,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86]. In rare cases, no difference between musicians and non-musicians is documented [55]. A meta-analysis reported a medium effect on short-term working memory, with musicians performing better. The advantage was large with tonal stimuli, moderate with verbal stimuli, and small or null with visuospatial stimuli [82]. This points to an auditory-specific working memory advantage rather than a more general one. Working memory improves because auditory processing is enhanced through music education; hearing improves cognition.

1.4. Frequency Discrimination

During speech processing, pitch carries paralinguistic characteristics that provide information on emotion and intent [87], as well as linguistic characteristics. Musicians outperform non-musicians in frequency discrimination [13,22,65,69,84,88,89,90,91,92,93,94]. This advantage has been hypothesized to be a contributing factor in the better speech-in-noise perception found in musicians [22]. Classical musicians have been reported to have superior frequency discrimination abilities when compared to those with a contemporary music (e.g., jazz, modern) background [95]. To our knowledge, there is no study researching possible differences across different musical styles that includes Byzantine music.

1.5. Different Music Styles and Instruments

The musicians’ groups selected for the present study differ in style and music training. Byzantine music (BM), or Byzantine chant (BC), is the traditional ecclesiastical music of the Orthodox church. It is vocal music sung by one or more chanters [96], always monophonic in character and based on eight modes (“echos”) [97]. The chanters are usually male, and no musical instrument is involved apart from the human voice [98,99]. This is in contrast with Western classical music, which is polyphonic and frequently includes male and female voices in the presence of instruments. Percussionists are extensively trained in rhythmic skills, timing, and physical flexibility, and those in this study were experienced in both tuned and untuned percussion.
The ordinary tuning system for Western music is 12-tone equal temperament, which subdivides the octave into 12 equal semitones [99]. By contrast, the BC tuning system divides the octave into 72 equal subdivisions or “moria”, according to the Patriarchal Music Committee (PMC) [100]. Compared to Western music, where the octave is based on 12 equal units (semitones), in BM each semitone corresponds to 6 moria [96]. The semitone (a minor second) consists of 100 logarithmically equal micro-intervals called cents; thus, the octave consists of 12 semitones × 100 cents = 1200 cents [101]. The PMC’s expert musicians indicate [100] that the smallest audible musical interval is considered to be 1 moria, or 16.7 cents, a critically smaller interval relative to those of classical music. Likewise, Sundberg [102] argues that an interval of 20 cents (1.2 moria) is hardly heard by a listener. In BM, each micro-interval differs from its neighbors by at least 2 moria [99], and the frequency steps made in Byzantine music, compared to Western music, may be as small as 1 Hz in the bass voice range.
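These relationships can be made concrete with a short numerical sketch. The snippet below, an illustration rather than part of the study, converts moria to cents and to frequency ratios using only the definitions above (octave = 1200 cents = 72 moria) and shows that one moria near an assumed bass fundamental of 100 Hz corresponds to a step of roughly 1 Hz; the 100 Hz reference is our assumption, chosen only for illustration.

```python
# Illustrative conversion between Byzantine moria, cents, and frequency steps,
# using only the definitions given above (octave = 1200 cents = 72 moria).

MORIA_PER_OCTAVE = 72
CENTS_PER_OCTAVE = 1200

def moria_to_cents(moria):
    """1 moria = 1200 / 72, about 16.7 cents; 6 moria = one Western semitone."""
    return moria * CENTS_PER_OCTAVE / MORIA_PER_OCTAVE

def frequency_after_interval(f0_hz, cents):
    """Frequency reached after raising f0_hz by the given number of cents."""
    return f0_hz * 2 ** (cents / CENTS_PER_OCTAVE)

if __name__ == "__main__":
    one_moria = moria_to_cents(1)                       # ~16.7 cents
    semitone = moria_to_cents(6)                        # 100 cents
    # Assumed bass fundamental of 100 Hz, chosen only for illustration:
    step_hz = frequency_after_interval(100.0, one_moria) - 100.0
    print(f"1 moria = {one_moria:.1f} cents; 6 moria = {semitone:.0f} cents")
    print(f"1 moria above 100 Hz is a step of about {step_hz:.2f} Hz")  # ~1 Hz
```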
In the literature, superior auditory processing abilities are documented for musicians in speech in noise recognition, temporal resolution, frequency discrimination, and working memory. This auditory advantage in musicians is possibly better explained as a result of neuroplasticity through music training rather than better genes, as shown by previous research [103]. Could it be that this neuroplasticity is influenced by different instruments and/or musical styles? The primary aim of this study is to evaluate possible differences in auditory processing across musicians of different instruments and/or musical styles. Could different musical instruments or styles of training lead to enhanced auditory processing in specific elements that might not be the same across different styles of musicians? If this proves to be the case, it would provide more insight toward more individualized rehabilitation approaches for individuals with deficits in auditory processing. A secondary aim is to verify that musicians have better auditory processing skills compared to non-musicians, specifically when examined with the auditory processing diagnostic test battery used in the Greek population.

2. Material and Methods

2.1. Participants

In the present study, 36 Greek professional musicians participated, divided into three groups according to specialization: 12 in Byzantine music (four females), 12 in Western classical music (seven females), and 12 in percussion (four females). Musicians performed music at a professional level with at least 10 years of musical experience (M = 27.58, SD = 10.83). Classical musicians specialized in different kinds of instruments apart from percussion (3 guitarists, 2 pianists, 1 clarinetist, 2 flutists, 2 violinists, 1 classical singer, and 1 trumpeter). Percussionists, on the other hand, specialized in both tuned and untuned percussion. The control group comprised 12 non-musicians (10 females). The non-musician group had not received any formal music education apart from music lessons in primary and secondary mainstream education. The three experimental groups and the control group did not differ in average age (Table 1). All participants were recruited by word of mouth and via Facebook posts, and none reported a diagnosed neurological, language, or attention disorder. They all signed informed consent before testing. All procedures were approved by the Ethics and Bioethics Committee of the Aristotle University of Thessaloniki (6613/14 June 2022). There were no exclusion criteria, as the study had to be concluded in a limited time as part of a master’s degree and the aim was to document auditory processing in as many musicians and controls as possible.

2.2. Procedure

Auditory Processing Tests were administered in a randomized order and included the Speech in Babble test (SinB) [104,105], Gaps-In-Noise (GIN) [50,106], Digit Span [107], Word Recognition Rhythm Component test (WRRC) [58,59], and a Frequency Discrimination Limen test (DFL). These tests were administered to all musicians and non-musicians to assess speech perception, temporal resolution, short-term and working memory, and speech comprehension with a rhythm effect. The DFL was created based on other frequency discrimination limen and just noticeable difference tests [91,108,109,110]. All tests were administered to all participants in a sound-treated room via headphones (TDH-50P) at 60 dB HL through a CD player and a GSI 61 audiometer.
Pure-tone audiometry using the same audiometer and headphones was performed for all participants by an ENT consultant in a sound-treated booth following otoscopy, evaluating the frequencies 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, and 8 kHz for each ear. Any other audiological evaluation deemed necessary was also implemented. Forty-five participants had normal hearing, defined as a measured threshold equal to or better than 20 dB HL for each of the tested frequencies in each ear. Three had elevated hearing thresholds as a result of sensorineural high-frequency hearing loss (Table 2). For the auditory processing evaluation, we ensured that all participants had fully understood the given instructions and had successfully completed the practice items of each test before initiating the standard test procedure. All participants were tested in a soundproof booth. Auditory Processing Tests were presented in each ear at 60 dB HL [111]. All auditory processing tests are presented at a suprathreshold level, i.e., the average everyday intensity level of human communication during running speech. Of the three individuals with sensorineural high-frequency hearing loss, the average threshold was abnormal for only one of them and only in the right ear, with a 1.7 dB HL deviation from the upper normal limit. This being the case, the intensity at which the auditory processing tests were administered was not altered.
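As an illustration of the normal-hearing criterion and the average pure-tone threshold described above, the following minimal sketch (not the study's code; the function name and data layout are ours) checks each tested frequency against the 20 dB HL limit and computes the per-ear average used when deciding whether test intensity needs adjustment.

```python
TESTED_FREQS_HZ = (250, 500, 1000, 2000, 4000, 8000)

def ear_summary(thresholds_db_hl, limit_db_hl=20):
    """Summarize one ear's audiogram.

    thresholds_db_hl: dict mapping frequency (Hz) -> threshold (dB HL).
    Returns (all_frequencies_normal, average_threshold): normal hearing requires
    every tested frequency at or better than 20 dB HL, while the average
    threshold determines whether the test intensity needs adjustment.
    """
    values = [thresholds_db_hl[f] for f in TESTED_FREQS_HZ]
    all_normal = all(v <= limit_db_hl for v in values)
    return all_normal, sum(values) / len(values)

# Example: a mild high-frequency loss fails the per-frequency criterion even
# though the average threshold may stay close to the 20 dB HL limit.
# ear_summary({250: 10, 500: 10, 1000: 15, 2000: 15, 4000: 40, 8000: 45})
```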

2.2.1. Speech-in-Babble

The Speech-in-Babble (SinB) test [104,112] is administered monaurally. It includes two different lists of 50 phonetically balanced bisyllabic Greek words presented in background multi-talker babble recorded at the university cafeteria during rush hour. Each word is preceded by a carrier phrase (“pite tin lexi”, i.e., “say the word”), and participants are instructed to repeat the word heard after each trial. Five signal-to-noise ratios (SNRs) [+7, +5, +3, +1, and −1] are used, and each SNR is applied to ten words in each list. The SNR at which 50% of the items are correctly identified is calculated using the Spearman–Kärber formula [112]. Poorer performance is reflected in higher scores (measured in dB SNR).
For each participant, two scores were calculated for each ear based on one administration of the SinB: word-based scores (SinB_RE_words and SinB_LE_words, for the right and left ear, respectively) and syllable-based scores (SinB_RE and SinB_LE, respectively). For word-based scoring, the number of correctly identified bisyllabic words provides the number of items entered into the Spearman–Kärber formula. If the participant repeats only one of the two syllables presented, that word is scored as incorrect. For syllable-based scoring, the number of items in the Spearman–Kärber formula is based on the number of correctly identified individual syllables. Therefore, if one syllable of a bisyllabic word is correctly recognized, that syllable is scored as correct and the non-recognized syllable as incorrect.
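For illustration, the sketch below shows how a Spearman–Kärber 50% point might be computed from these counts, assuming the five descending SNRs and ten items per level described above; the exact form applied in the SinB test may differ, so this is an approximation rather than the test's actual scoring code.

```python
def spearman_karber_snr50(correct_per_snr, snrs=(7, 5, 3, 1, -1), items_per_snr=10):
    """Estimate the SNR (dB) at which 50% of items are correctly identified.

    correct_per_snr: correct responses at each SNR, ordered as in `snrs`
                     (for syllable-based scoring, items_per_snr would be 20).
    Assumes equal steps between successive SNRs (2 dB here).
    """
    step = snrs[0] - snrs[1]
    total_correct = sum(correct_per_snr)
    # Common Spearman-Karber form used in speech audiometry: start half a step
    # above the most favourable SNR and subtract credit for each correct item.
    return snrs[0] + step / 2 - step * total_correct / items_per_snr

# Example: all words correct at +7 to +3 dB, half at +1 dB, none at -1 dB:
# spearman_karber_snr50([10, 10, 10, 5, 0]) -> 1.0 dB SNR
```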

2.2.2. Gaps-in-Noise

The Gaps-in-Noise (GIN) test [50,106] is administered monaurally, with a different list of approximately 30 trials for each ear. A practice session of 10 trials is given before the main test. Each trial consists of 6 s of white noise with a 5 s inter-trial interval. Each broadband noise segment contains 0 to 3 gaps (silent intervals), the location of which varies. The gap durations are 2, 3, 5, 6, 8, 10, 12, 15, and 20 ms, and each gap duration is presented six times during the test. Participants are told to indicate when they detect a gap by tapping their hand on the table. The gap detection threshold is calculated per ear as the shortest gap duration detected on at least four out of six presentations, with consistent results for larger gaps. False positives are noted and subtracted from the correct responses as follows: total score = (total number of correct responses − false positives)/number of trials in the list.
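As an illustration of the scoring rules just described, the following sketch applies the four-out-of-six criterion with the consistency requirement and the corrected total score; it is our reading of the description above, not the published GIN scoring form.

```python
def gin_threshold_ms(detections, criterion=4):
    """Approximate gap detection threshold from per-duration detection counts.

    detections: dict mapping gap duration (ms) -> number of detections out of
    the six presentations of that duration. The threshold is the shortest
    duration detected at least `criterion` times, with the criterion also met
    for every longer duration (the consistency requirement described above).
    """
    durations = sorted(detections)            # 2, 3, 5, 6, 8, 10, 12, 15, 20 ms
    for i, duration in enumerate(durations):
        if all(detections[d] >= criterion for d in durations[i:]):
            return duration
    return None                               # criterion never reached

def gin_total_score(correct_responses, false_positives, n_trials):
    """Total score = (correct responses - false positives) / number of trials."""
    return (correct_responses - false_positives) / n_trials
```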

2.2.3. Digit Span

Digit Span is a binaurally administered test that evaluates both short-term memory and working memory. It consists of two subtests in which the number of digits increases, with two trials at each sequence length. In the first subtest, participants repeat the digits heard after each trial, starting with two digits. In the second subtest, participants repeat the digits heard in backward order (e.g., “2-9-4-6” is heard and the correct answer is “6-4-9-2”). Testing stops when a participant gives an incorrect answer on two trials of the same length or when all trials are exhausted. The test result is the number of items correctly identified for each subtest and in total.
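A minimal sketch of this administration and scoring logic is given below; the two-trials-per-length structure and discontinue rule follow the description above, while the data layout and function name are our own illustrative choices, and clinical scoring forms may differ in detail.

```python
def digit_span_score(trials, backward=False):
    """Score one Digit Span subtest under the rules described above (a sketch).

    trials: list of (presented_digits, response) tuples in administration order,
            with two trials at each sequence length, e.g. ((2, 9, 4, 6), (6, 4, 9, 2)).
    backward: if True, the correct response is the presented digits reversed.
    Testing (and scoring) stops once both trials of the same length are failed.
    """
    score = 0
    misses = {}                                    # sequence length -> error count
    for presented, response in trials:
        target = tuple(reversed(presented)) if backward else tuple(presented)
        if tuple(response) == target:
            score += 1
        else:
            misses[len(presented)] = misses.get(len(presented), 0) + 1
            if misses[len(presented)] == 2:        # discontinue rule
                break
    return score
```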

2.2.4. Word Recognition–Rhythm Component (WRRC)

The Word Recognition–Rhythm Component test [58,59] evaluates the rhythm benefit in speech in noise perception. It uses three different lists of words in noise, with each word preceded by a sequence of four 1 kHz beats. There are three conditions: Rhythm (RH), Unsynchronized (UnSc), and Non-Rhythm (NR). In the RH condition, the beat sequence is isochronous and synchronized with the following word. In the UnSc condition, the sequence is isochronous but the word is not synchronized to it. In the NR condition, the sequence is not rhythmic (i.e., non-isochronous). To avoid learning effects that might result in some form of rhythm perception, several sequences were used in a cyclic order, i.e., first A, then B, then C, etc. The test result is the number of items correctly identified in each condition (for syllables, 16 maximum; for words, 32 maximum).
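To make the three cue conditions concrete, the sketch below generates illustrative beat onset times for each condition; the inter-onset interval, jitter range, and word onset time are assumed values chosen for illustration, not parameters reported for the WRRC.

```python
import random

def beat_onsets(condition, n_beats=4, ioi=0.5, jitter=0.15, word_onset=2.0):
    """Illustrative onset times (s) for a four-beat cue preceding a word.

    'RH'  : isochronous beats whose grid the word onset falls on
    'UnSc': isochronous beats, but the word does not fall on the beat grid
    'NR'  : non-isochronous (jittered) beats
    """
    if condition in ("RH", "UnSc"):
        onsets = [word_onset - ioi * k for k in range(n_beats, 0, -1)]
        if condition == "UnSc":
            onsets = [t - 0.5 * ioi for t in onsets]    # word lands off the grid
    elif condition == "NR":
        t = word_onset - n_beats * ioi
        onsets = [t]
        for _ in range(n_beats - 1):
            t += ioi + random.uniform(-jitter, jitter)  # irregular spacing
            onsets.append(t)
    else:
        raise ValueError(f"unknown condition: {condition}")
    return onsets
```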

2.2.5. Frequency Discrimination Limen

Frequency discrimination limen is assessed for three different frequency regions (500, 1000, and 2000 Hz). The frequency step varies from 2 Hz to 50 Hz, with the tone alternating every second between a standard (S) pure tone and a roving (R) pure tone (i.e., S-R-S-R, etc.). Roving pure tones are randomized (Table 3). Stimuli are presented binaurally. Participants respond by tapping their hand on the table. The minimum frequency difference (in Hz) that a participant perceives is the reported threshold.
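A minimal sketch of this standard/roving alternation is shown below; the candidate step values and the number of tone pairs are assumptions for illustration, since the actual roving values follow Table 3 of the paper.

```python
import random

def dfl_tone_sequence(standard_hz, candidate_steps_hz=(2, 5, 10, 20, 50), n_pairs=5):
    """Illustrative standard/roving alternation for one frequency region.

    standard_hz: 500, 1000, or 2000 Hz in this study.
    candidate_steps_hz: possible roving offsets between 2 and 50 Hz (assumed
    here; the actual values follow Table 3 of the paper).
    Returns the 1-s tone frequencies in order (S, R, S, R, ...) and the
    randomized step used for each roving tone.
    """
    frequencies, steps = [], []
    for _ in range(n_pairs):
        step = random.choice(candidate_steps_hz)          # randomized roving tone
        frequencies += [standard_hz, standard_hz + step]  # S then R
        steps.append(step)
    return frequencies, steps

def dfl_threshold_hz(steps_detected):
    """Reported threshold: the smallest step (Hz) the participant detected."""
    return min(steps_detected) if steps_detected else None
```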

2.3. Statistical Analysis

All statistical analyses were conducted with IBM SPSS Statistics 27.0 (IBM Corp., New York, NY, USA). Results were evaluated for normal distribution by calculating z values [113,114]. Depending on whether variables were normally or non-normally distributed, the tests executed were the t-test, Mann–Whitney U, Kruskal–Wallis, and ANOVA, with post hoc analysis using the Bonferroni correction. The significance level was set at 5% (i.e., p < 0.05).
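For readers who prefer a concrete illustration of this decision rule, the sketch below reproduces it in Python with SciPy; it is not the authors' SPSS analysis, and the normality screen via the z value of skewness is only one of the possible checks implied by the cited criteria.

```python
from scipy import stats

def compare_four_groups(groups, alpha=0.05):
    """Illustrative Python counterpart of the decision rule described above
    (the authors used SPSS; this is not their code). `groups` is a list of four
    arrays of scores, one per group (n = 12 each in this study).
    """
    # Normality screen via z values of skewness (one of several possible checks).
    normal = all(abs(stats.skewtest(g).statistic) < 1.96 for g in groups)
    if normal:
        stat, p = stats.f_oneway(*groups)          # one-way between-subjects ANOVA
        test = "ANOVA"
    else:
        stat, p = stats.kruskal(*groups)           # Kruskal-Wallis H test
        test = "Kruskal-Wallis"
    return test, stat, p, p < alpha

def compare_two_groups(musicians, non_musicians):
    """Mann-Whitney U comparison, as used for musicians vs. non-musicians."""
    return stats.mannwhitneyu(musicians, non_musicians, alternative="two-sided")
```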

3. Results

3.1. Word Recognition Rhythm Component

A one-way between-subjects ANOVA was executed to compare the effect of music experience, or its absence, on WRRC_RH2 for classical musicians, Byzantine musicians, percussionists, and non-musicians. This variable was normally distributed.
A significant difference in WRRC_RH2 recognition among the four groups [F(3, 44) = 4.180, p = 0.011] was found. Post hoc comparisons using the Bonferroni correction indicated that the mean score of classical musicians (M = 12.42, SD = 1.56) was significantly better than that of Byzantine musicians (M = 9.83, SD = 2.20), p = 0.007 (Table 4, Figure 1).
The analysis did not reveal any statistically significant differences between the four groups for the WRRC_RH1 [H(3) = 5.66, p = 0.129], WRRC_UnSc1 [F(3, 44) = 1.92, p = 0.14], and WRRC_UnSc2 [F(3, 44) = 2.449, p = 0.076] scores.
For the Speech in Babble test (SinB_RE and SinB_LE, for the right and left ear, respectively), an ANOVA was conducted to compare the three groups of musicians and the non-musicians. Results did not reveal a statistically significant difference among the four groups for SinB_RE [F(3, 44) = 0.672, p = 0.574] or for SinB_LE [F(3, 44) = 0.599, p = 0.619] (Table 5).

3.2. Gaps in Noise

For the Gaps in Noise test (GIN_RE and GIN_LE, for the right and left ear, respectively), an ANOVA was conducted to compare the four groups. Results did not reveal a statistically significant difference for GIN_RE [F(3, 44) = 0.516, p = 0.673] or for GIN_LE [F(3, 44) = 0.248, p = 0.863] (Table 6).

3.3. Digit Span

For the Digit Span Forward (DigitF) and Digit Span Backwards (DigitB) tests, there was no statistically significant difference between musical specialization groups [F(3, 44) = 0.709, p = 0.552 and H(3) = 5.49, p = 0.13, respectively].

3.4. Frequency Discrimination Limen

An ANOVA was conducted for Frequency Discrimination Limen at 500 Hz (Freq_d_500) and did not reveal a statistically significant difference between groups (p = 0.678). As the criteria for normal distribution were not satisfied, the Kruskal–Wallis test was used for Frequency Discrimination Limen at 1000 Hz (Freq_d_1000) and 2000 Hz (Freq_d_2000). No statistically significant difference was revealed for Freq_d_1000 [H(3) = 3.74, p = 0.29] between the four groups of participants. Comparisons of frequency discrimination across groups found that the three groups of musicians had lower (better) thresholds at a statistically significant level [H(3) = 11.28, p = 0.010] in the 2000 Hz frequency region compared to non-musicians, as shown in Figure 2. Byzantine musicians achieved the best (lowest) threshold (M = 3.17, SD = 2.44, p = 0.002), followed by classical musicians (M = 3.67, SD = 2.38, p = 0.017) and then percussionists (M = 4.42, SD = 3.45, p = 0.015), all better than the non-musicians (M = 6.50, SD = 3.09).

3.5. Word Recognition Rhythm Component for Musicians and Non-Musicians

A Mann-Whitney U test indicated that WRRC_RH1 scores were significantly greater for 36 musicians (Mdn = 27.15) than for 12 non-musicians (Mdn = 16.54), U = 120.5, p = 0.019 (Table 4).
A two-sample t-test was performed to compare musicians (M = 13.56, SD = 1.297) to non-musicians (M = 12.58, SD = 1.379). Musicians showed significantly better word recognition for the WRRC_UnSc1 [t(46) = 2.214, p = 0.032, r = 0.34, 95% CI (0.09, 1.86)].

3.6. Digit Span for Musicians and Non-Musicians

No statistically significant difference was revealed for the Digit Span Forward (DigitF) test between musicians (M = 10.44, SD = 2.04) and non-musicians (M = 10.00, SD = 2.132) [t(46) = 0.64, p = 0.522].
A Mann–Whitney U test was conducted to determine whether there was a difference in Digit Span Backwards (DigitB) scores between musicians (Mdn = 8.00, SD = 2.28) and non-musicians (Mdn = 6.50, SD = 2.02). The results indicate a significant difference between groups (U = 123.0, p = 0.025), with a working memory advantage for musicians. Overall, we conclude that there is a difference in working memory (DigitB) score between musicians and non-musicians, as shown in Figure 3.

3.7. Frequency Discrimination Limen for Musicians and Non-Musicians

A Mann–Whitney test comparing the musician and non-musician groups indicated significantly better frequency discrimination (U = 82.50, p = 0.001) for musicians in general (M = 3.75, SD = 2.77) compared to non-musicians (M = 6.50, SD = 3.09).

4. Discussion

This study’s primary aim was to assess auditory processing among musicians of three different styles, Byzantine music, Western classical music (melodic instruments), and percussion (tuned and untuned). Any documented differences might provide insight into individualized rehabilitation through specific styles of musical education and training. A secondary aim was to compare musicians with non-musicians in the Greek population and verify the musicians’ advantage in auditory skills supported in previous research [22,52,65,115,116].
The present study shows a Western classical musicians’ advantage, compared to Byzantine musicians, in speech in noise recognition of the second syllable with good use of the rhythm effect. This result is novel, as there are no other studies researching auditory processing in Byzantine chanters. It provides information on different levels of improvement in auditory processing as a result of music education, depending on the style of music. The clinical implications of this result favor specifically tailored auditory training as a rehabilitation tool, as opposed to the one-size-fits-all approach that is usually the case with available auditory training software.
Musicians’ better performance in various auditory processing components was verified. They were better at word-in-noise recognition in all three conditions (rhythmic, unsynchronized, and non-rhythmic) of the WRRC test, with the Western classical musicians being the best compared to the other two groups in the non-rhythmic component. The results of the present study show that the enhanced speech in noise perception in the WRRC test is most probably not due to a rhythm advantage but rather due to the musicians’ ability not to be easily distracted by other asynchronous stimuli.
Byzantine chanters were found to be better at the frequency discrimination threshold for 2000 Hz compared to the other two groups of musicians. This result is novel as it concerns Byzantine music, and it may be due to the Byzantine tuning system, in which each semitone is subdivided into 6 moria, in contrast to Western music. Of interest, previous research revealed that vocalists have better frequency discrimination than non-musicians [117]. Byzantine chanters also had better working memory scores on the backward subtest compared to the other musicians.
However, for speech recognition in the SinB test, as well as for temporal resolution in the GIN test, there was no significant difference among musicians from the three different styles and non-musicians. The non-musician control group displayed exceptionally good results, comparable to musicians’ outcomes in other studies [60].
Results on the WRRC test indicate that musicians can perceive speech in noise in any condition, thus not supporting a specific rhythm effect. They appear to be better at an untrained auditory processing component regardless of the presence of rhythm. However, the advantage of classical musicians compared to the other groups of musicians in this study for second-syllable discrimination in noise indicates that there is a rhythm effect benefit for this group of musicians. The enhanced frequency discrimination in musicians revealed in the present study is in accordance with recent research highlighting a correlation between frequency precision and the Goldsmiths Musical Sophistication Index that is specific to the auditory domain and unrelated to vision or amplitude modulation [8]. The 2000 Hz-specific frequency discrimination advantage shown by Byzantine chanters in our study should be further investigated in a larger sample. The working memory advantage documented in the more difficult part of the digit span test (the backward subtest) is expected when comparing musicians versus non-musicians.
The fact that musical practice takes many forms, and that it is still unknown which specific elements of musical experience or expertise drive speech perception advantages, may contribute to the mixed experimental outcomes of musician versus non-musician comparisons. Even within categories such as classical or jazz performance, there is great diversity in instruction methods (e.g., learning to play from a score vs. learning to play by ear) that may influence the development of specific aspects of musical competence, such as rhythm perception and auditory memory [52].
Moreover, differences have been observed in beat alignment in relation to the time spent on musical practice [118]. Studies in musicians have shown that the more years a person is intensively engaged in music, the more areas of the brain are involved in perceiving, analyzing, and recording it, and the more neural networks develop to convey the language of music to wider areas of the brain and make it “musically driven”. Music contributes to the development of many skills and to the activation of many brain centers, which are associated with cognitive functions [98].
Slater et al. [52,119] attempted to estimate which specific rhythmic skills are associated with speech perception in noise, and whether these relationships extend to measures of rhythm production as well as perception. In that research, percussionists outperformed non-musicians not only in speech-in-noise perception but also on sequence- and beat-based drumming tasks. Speech-in-noise perception was correlated with the two sequence-based tasks (drumming to metrical and jittered sequences) [119]. Percussionists and singers did not differ in their performance on the musical competence test (rhythm or melody subtests), speech-in-noise perception (words or sentences), or auditory working memory [52]. Cognitive factors such as memory unquestionably intervene in the relationship between musical skill and hearing speech in noise [15]. The ability to perceive words in noise (WIN) did not relate to either rhythmic or melodic competence, nor to working memory competence [52]. Interestingly, recent studies substantiated enhanced pitch perception and melodic discrimination in musicians, yet did not detect any advantage for speech-in-noise comprehension [52,57].
Although the present study cannot speak to the precise effects of music training in distinct musical styles, our cross-sectional findings provide a basis for further investigation into the potential for music training to reinforce auditory processing skills and the building blocks of communication. In parallel, music therapy is the clinical application of music to treat disease in individuals who can benefit from music and thereby improve their quality of life. It has been known since the Byzantine era, when many of the hospitals of Constantinople applied music therapy to neurological patients [120]. Music therapy is a pleasant and painless therapeutic method, usually practiced by specialized therapists who have the appropriate knowledge and experience [121]. Therefore, we could potentially suggest that in (C)APD cases, music training or music therapy, apart from auditory training, could be a pleasant and effective way to sharpen auditory skills.
Further research could seek differences in frequency discrimination limen between Byzantine chanters and musicians of other genres. Moreover, a fertile area of research could involve Byzantine chanters, other vocalists, or musicians specialized in other types of instruments (e.g., woodwinds, brass, strings), as well as their relation to speech in noise comprehension. Likewise, studies applying music training to auditory processing, with defined age limits and hearing sensitivity criteria, could supplement recommendations on the best approach to training for auditory processing disorders. Moreover, examination with non-psychometric measures, such as fMRI, would be an interesting extension of this research and its results.
Among the limitations of the present study is the absence of data on the educational profiles of participants. Secondly, although there was no statistically significant difference in participants’ mean age, a slight difference in age among the four groups could be the reason for the absence of differentiation among the three musical groups, or between musicians and non-musicians, in speech in noise comprehension, according to previous research [122,123]. Our subjects were also not required to have pure-tone thresholds better than 20 dB HL at the audiometric test or to fill in a questionnaire assessing the impact of their hearing impairment [124], whereas normal hearing was necessary for Varnet et al. [125], as musicians are more likely to experience hearing problems [124,125,126]. Interestingly, out of the three musicians with abnormal pure-tone thresholds in one or two high frequencies, only one had an average pure-tone threshold of more than 20 dB HL, the limit considered normal. It should be noted that this musician exceeds the 20 dB HL limit by less than 2 dB. As the average pure-tone threshold across the audiometrically tested frequencies is used to determine whether adjustments are needed for the auditory processing evaluation, there was no need for any adjustment of intensity when administering the auditory processing test battery to any of the participants in this study. Finally, participants were not matched by gender, or by musical instrument among the musicians. A possible female auditory processing advantage [127] could not explain the reported better performance of musicians in the present study, as our musicians’ groups include more men than women compared to the control group.

5. Conclusions

This study reveals enhancement of different auditory processing elements as a result of training in different musical styles. Neuroplasticity appears to be specific while also extrapolating to non-trained elements of auditory processing, such as speech in noise perception. This sets the basis for individualized auditory training in individuals with APD.

Author Contributions

M.K., G.P., V.M.I. and C.S. conceived and designed the study. M.K. collected the data and wrote the first draft of the manuscript. M.K. performed the statistical analysis. G.P., V.M.I. and C.S. critically revised the manuscript at all stages of its being written. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was approved by the Aristotle University of Thessaloniki Ethics and Bioethics Committee (protocol number: 6.613).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the participants to publish this paper.

Data Availability Statement

The data presented in this study are available on request from the first author.

Acknowledgments

The study and data collection took place at the Clinical Psychoacoustics Lab of the Medical School of Aristotle University of Thessaloniki. The authors are grateful to all participants in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

APD: auditory processing disorder, SNR: signal-to-noise ratio, dB: decibel, RE: right ear, LE: left ear, BM: Byzantine music, BC: Byzantine chant.

References

  1. Chatterjee, D.; Hegde, S.; Thaut, M. Neural Plasticity: The Substratum of Music-Based Interventions in Neurorehabilitation. NeuroRehabilitation 2021, 48, 155–166. [Google Scholar] [CrossRef]
  2. Bamiou, D.-E. Aetiology and Clinical Presentations of Auditory Processing Disorders—A Review. Arch. Dis. Child. 2001, 85, 361–365. [Google Scholar] [CrossRef]
  3. Chermak, G.D.; Lee, J. Comparison of Children’s Performance on Four Tests of Temporal Resolution. J. Am. Acad. Audiol. 2005, 16, 554–563. [Google Scholar] [CrossRef] [PubMed]
  4. Rudner, M.; Rönnberg, J.; Lunner, T. Working Memory Supports Listening in Noise for Persons with Hearing Impairment. J. Am. Acad. Audiol. 2011, 22, 156–167. [Google Scholar] [CrossRef] [PubMed]
  5. Iliadou, V.; Bamiou, D.E.; Sidiras, C.; Moschopoulos, N.P.; Tsolaki, M.; Nimatoudis, I.; Chermak, G.D. The Use of the Gaps-in-Noise Test as an Index of the Enhanced Left Temporal Cortical Thinning Associated with the Transition between Mild Cognitive Impairment and Alzheimer’s Disease. J. Am. Acad. Audiol. 2017, 28, 463–471. [Google Scholar] [CrossRef]
  6. Quinto, L.; Thompson, W.F.; Keating, F.L. Emotional Communication in Speech and Music: The Role of Melodic and Rhythmic Contrasts. Front. Psychol. 2013, 4, 184. [Google Scholar] [CrossRef] [Green Version]
  7. Schlaug, G.; Norton, A.; Overy, K.; Winner, E. Effects of Music Training on the Child’s Brain and Cognitive Development. Ann. N. Y. Acad. Sci. 2005, 1060, 219–230. [Google Scholar] [CrossRef] [Green Version]
  8. Lad, M.; Billig, A.J.; Kumar, S.; Griffiths, T.D. A Specific Relationship between Musical Sophistication and Auditory Working Memory. Sci. Rep. 2022, 12, 3517. [Google Scholar] [CrossRef] [PubMed]
  9. Kraus, N.; Chandrasekaran, B. Music Training for the Development of Auditory Skills. Nat. Rev. Neurosci. 2010, 11, 599–605. [Google Scholar] [CrossRef]
  10. Kraus, N.; Skoe, E.; Parbery-Clark, A.; Ashley, R. Experience-Induced Malleability in Neural Encoding of Pitch, Timbre, and Timing: Implications for Language and Music. In Annals of the New York Academy of Sciences; Blackwell Publishing Inc.: Hoboken, NJ, USA, 2009; Volume 1169, pp. 543–557. [Google Scholar] [CrossRef] [Green Version]
  11. Koelsch, S.; Schröger, E.; Tervaniemi, M. Superior Pre-Attentive Auditory Processing in Musicians. Neuroreport 1999, 10, 1309–1313. [Google Scholar] [CrossRef]
  12. Strait, D.L.; Kraus, N. Biological Impact of Auditory Expertise across the Life Span: Musicians as a Model of Auditory Learning. Hear. Res. 2014, 308, 109–121. [Google Scholar] [CrossRef] [Green Version]
  13. Tervaniemi, M.; Just, V.; Koelsch, S.; Widmann, A.; Schroger, E. Pitch Discrimination Accuracy in Musicians vs Non-musicians: An Event-Related Potential and Behavioral Study. Exp. Brain Res. 2005, 161, 1–10. [Google Scholar] [CrossRef]
  14. Forgeard, M.; Winner, E.; Norton, A.; Schlaug, G. Practicing a Musical Instrument in Childhood Is Associated with Enhanced Verbal Ability and Nonverbal Reasoning. PLoS ONE 2008, 3, e3566. [Google Scholar] [CrossRef]
  15. Kraus, N.; Strait, D.L.; Parbery-Clark, A. Cognitive Factors Shape Brain Networks for Auditory Skills: Spotlight on Auditory Working Memory. Ann. N. Y. Acad. Sci. 2012, 1252, 100–107. [Google Scholar] [CrossRef] [Green Version]
  16. Slater, J.; Skoe, E.; Strait, D.L.; O’Connell, S.; Thompson, E.; Kraus, N. Music Training Improves Speech-in-Noise Perception: Longitudinal Evidence from a Community-Based Music Program. Behav. Brain Res. 2015, 291, 244–252. [Google Scholar] [CrossRef] [PubMed]
  17. Tzounopoulos, T.; Kraus, N. Learning to Encode Timing: Mechanisms of Plasticity in the Auditory Brainstem. Neuron 2009, 62, 463–469. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Margulis, E.H.; Mlsna, L.M.; Uppunda, A.K.; Parrish, T.B.; Wong, P.C.M. Selective Neurophysiologic Responses to Music in Instrumentalists with Different Listening Biographies. Hum. Brain Mapp. 2009, 30, 267–275. [Google Scholar] [CrossRef]
  19. Pantev, C.; Roberts, L.E.; Schulz, M.; Engelien, A.; Ross, B. Timbre-Specific Enhancement of Auditory Cortical Representations in Musicians. Neuroreport 2001, 12, 169–174. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Strait, D.L.; Chan, K.; Ashley, R.; Kraus, N. Specialization among the Specialized: Auditory Brainstem Function Is Tuned in to Timbre. Cortex 2012, 48, 360–362. [Google Scholar] [CrossRef] [PubMed]
  21. Besson, M.; Chobert, J.; Marie, C. Transfer of Training between Music and Speech: Common Processing, Attention, and Memory. Front. Psychol. 2011, 2, 94. [Google Scholar] [CrossRef] [Green Version]
  22. Parbery-Clark, A.; Skoe, E.; Lam, C.; Kraus, N. Musician Enhancement for Speech-In-Noise. Ear Hear. 2009, 30, 653–661. [Google Scholar] [CrossRef]
  23. Patel, A.D. Why Would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis. Front. Psychol. 2011, 2, 142. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Patel, A.D.; Iversen, J.R. The Linguistic Benefits of Musical Abilities. Trends Cogn. Sci. 2007, 11, 369–372. [Google Scholar] [CrossRef] [PubMed]
  25. Strait, D.L.; Parbery-Clark, A.; Hittner, E.; Kraus, N. Musical Training during Early Childhood Enhances the Neural Encoding of Speech in Noise. Brain Lang. 2012, 123, 191–201. [Google Scholar] [CrossRef] [Green Version]
  26. Strait, D.L.; Kraus, N.; Skoe, E.; Ashley, R. Musical Experience and Neural Efficiency—Effects of Training on Subcortical Processing of Vocal Expressions of Emotion. Eur. J. Neurosci. 2009, 29, 661–668. [Google Scholar] [CrossRef] [PubMed]
  27. Patel, A.D. Can Nonlinguistic Musical Training Change the Way the Brain Processes Speech? The Expanded OPERA Hypothesis. Hear. Res. 2014, 308, 98–108. [Google Scholar] [CrossRef] [PubMed]
  28. Elbert, T.; Pantev, C.; Wienbruch, C.; Rockstroh, B.; Taub, E. Increased Cortical Representation of the Fingers of the Left Hand in String Players. Science 1995, 270, 305–307. [Google Scholar] [CrossRef] [Green Version]
  29. Vuust, P.; Brattico, E.; Seppänen, M.; Näätänen, R.; Tervaniemi, M. Practiced Musical Style Shapes Auditory Skills. Ann. N. Y. Acad. Sci. 2012, 1252, 139–146. [Google Scholar] [CrossRef]
  30. Vuust, P.; Brattico, E.; Seppänen, M.; Näätänen, R.; Tervaniemi, M. The Sound of Music: Differentiating Musicians Using a Fast, Musical Multi-Feature Mismatch Negativity Paradigm. Neuropsychologia 2012, 50, 1432–1443. [Google Scholar] [CrossRef]
  31. Medwetsky, L. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention. Lang. Speech Hear. Serv. Sch. 2011, 42, 286–296. [Google Scholar] [CrossRef]
  32. American Speech-Language-Hearing Association. Central auditory processing: Current status of research and implications for clinical practice. Am. J. Audiol. 1996, 5, 41–54. [Google Scholar] [CrossRef]
  33. American Speech-Language-Hearing Association. (Central) Auditory Processing Disorders—The Role of the Audiologist; American Speech-Language-Hearing Association: Rockville, MD, USA, 2005. [Google Scholar] [CrossRef]
  34. Monteiro, A.R.M.; Nascimento, M.F.; Soares, D.C.; Ferreira, I.M.D.D.C. Temporal Resolution Abilities in Musicians and No Musicians Violinists Habilidades de Resolução Temporal Em Músicos Violinistas e Não Músicos. Int. Arch. Otorhinolaryngol. 2010, 14, 302–308. [Google Scholar]
  35. Samelli, A.G.; Schochat, E. The Gaps-in-Noise Test: Gap Detection Thresholds in Normal-Hearing Young Adults. Int. J. Audiol. 2008, 47, 238–245. [Google Scholar] [CrossRef]
  36. Phillips, D.P. Central Auditory System and Central Auditory Processing Disorders: Some Conceptual Issues; Thieme Medical Publishers, Inc.: New York, NY, USA, 2002; Volume 23. [Google Scholar]
  37. Chermak, G.D.; Musiek, F.E. Central Auditory Processing Disorders: New Perspectives; Singular Pub Group: San Diego, CA, USA, 1997. [Google Scholar]
  38. Gelfand, S.A. Hearing: An Introduction to Psychological and Physiological Acoustics; Informa Healthcare: London, UK, 2010. [Google Scholar]
  39. Griffiths, T.D.; Warren, J.D. The Planum Temporale as a Computational Hub. Trends Neurosci. 2002, 25, 348–353. [Google Scholar] [CrossRef]
  40. Hautus, M.J.; Setchell, G.J.; Waldie, K.E.; Kirk, I.J. Age-Related Improvements in Auditory Temporal Resolution in Reading-Impaired Children. Dyslexia 2003, 9, 37–45. [Google Scholar] [CrossRef]
  41. Walker, M.M.; Shinn, J.B.; Cranford, J.L.; Givens, G.D.; Holbert, D. Auditory Temporal Processing Performance of Young Adults with Reading Disorders. J. Speech Lang. Hear. Res. 2002, 45, 598–605. [Google Scholar] [CrossRef]
  42. Rance, G.; McKay, C.; Grayden, D. Perceptual Characterization of Children with Auditory Neuropathy. Ear Hear. 2004, 25, 34–46. [Google Scholar] [CrossRef]
  43. Fingelkurts, A.A.; Fingelkurts, A.A. Timing in Cognition and EEG Brain Dynamics: Discreteness versus Continuity. Cogn. Process. 2006, 7, 135–162. [Google Scholar] [CrossRef] [PubMed]
  44. Bao, Y.; Szymaszek, A.; Wang, X.; Oron, A.; Pöppel, E.; Szelag, E. Temporal Order Perception of Auditory Stimuli Is Selectively Modified by Tonal and Non-Tonal Language Environments. Cognition 2013, 129, 579–585. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Grube, M.; Kumar, S.; Cooper, F.E.; Turton, S.; Griffiths, T.D. Auditory Sequence Analysis and Phonological Skill. Proc. R. Soc. B Biol. Sci. 2012, 279, 4496–4504. [Google Scholar] [CrossRef] [PubMed]
  46. Grube, M.; Cooper, F.E.; Griffiths, T.D. Auditory Temporal-Regularity Processing Correlates with Language and Literacy Skill in Early Adulthood. Cogn. Neurosci. 2013, 4, 225–230. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Iliadou, V.; Ptok, M.; Grech, H.; Pedersen, E.R.; Brechmann, A.; Deggouj, N.; Kiese-Himmel, C.; Sliwinska-Kowalska, M.; Nickisch, A.; Demanez, L.; et al. A European Perspective on Auditory Processing Disorder-Current Knowledge and Future Research Focus. Front. Neurol. 2017, 8, 622. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Musiek, F.E.; Shinn, J.; Chermak, G.D.; Bamiou, D.-E. Perspectives on the Pure-Tone Audiogram. J. Am. Acad. Audiol. 2017, 28, 655–671. [Google Scholar] [CrossRef] [PubMed]
  49. Musiek, F.E.; Baran, J.A.; James Bellis, T.; Chermak, G.D.; Hall, J.W., III; Professor, C.; Keith, R.W.; Medwetsky, L.; Loftus West, K.; Young, M.; et al. American Academy of Audiology Clinical Practice Guidelines Guidelines for the Diagnosis, Treatment and Management of Children and Adults with Central Auditory Processing Disorder; American Academy of Audiology: Reston, VA, USA, 2010. [Google Scholar]
  50. Musiek, F.E.; Shinn, J.B.; Jirsa, R.; Bamiou, D.-E.; Baran, J.A.; Zaidan, E. GIN (Gaps-In-Noise) Test Performance in Subjects with Confirmed Central Auditory Nervous System Involvement. Ear Hear. 2005, 26, 608–618. [Google Scholar] [CrossRef]
  51. Gilley, P.M.; Sharma, M.; Purdy, S.C. Oscillatory Decoupling Differentiates Auditory Encoding Deficits in Children with Listening Problems. Clin. Neurophysiol. 2016, 127, 1618–1628. [Google Scholar] [CrossRef]
  52. Slater, J.; Kraus, N. The Role of Rhythm in Perceiving Speech in Noise: A Comparison of Percussionists, Vocalists and Non-Musicians. Cogn. Process. 2016, 17, 79–87. [Google Scholar] [CrossRef] [Green Version]
  53. Coffey, E.B.J.; Mogilever, N.B.; Zatorre, R.J. Speech-in-Noise Perception in Musicians: A Review. Hear. Res. 2017, 352, 49–69. [Google Scholar] [CrossRef]
  54. Hennessy, S.; Mack, W.J.; Habibi, A. Speech-in-noise Perception in Musicians and Non-musicians: A Multi-level Meta-Analysis. Hear. Res. 2022, 416, 108442. [Google Scholar] [CrossRef]
Figure 1. Boxplot of WRRC_RH2 by Specialization [F(3, 44) = 4.180, p = 0.007]. The median score of second-syllable recognition in the rhythmic condition (RH2) shows that classical musicians are better than Byzantine chanters at using the rhythm effect to perceive words in noise (specifically the second syllable of a word). WRRC_RH2 = recognized 2nd syllables in the rhythm condition of the WRRC test.
Figure 2. Boxplot of DFL_2000. The black lines indicate the median scores (in Hz) of the Frequency Discrimination Limen at 2000 Hz for the musician and non-musician groups.
Figure 3. Boxplot of DigitB for musicians and non-musicians. The black lines indicate the median scores on Digit Span Backwards, according to musical engagement (U = 123.0, p = 0.025).
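For readers who want to run the same kind of group comparison on their own data, the following is a minimal sketch of a Mann-Whitney U test such as the one reported in Figure 3, using SciPy. The score lists are invented placeholders, not the study's data.

```python
# Minimal sketch of a Mann-Whitney U comparison like the one in Figure 3
# (Digit Span Backwards, musicians vs. non-musicians).
# The scores below are placeholders, not the study data.
from scipy.stats import mannwhitneyu

digitb_musicians = [7, 6, 8, 5, 7, 9, 6, 8, 6, 7, 8, 5]      # placeholder scores
digitb_nonmusicians = [5, 6, 4, 6, 5, 7, 5, 6, 4, 5, 6, 5]   # placeholder scores

u_stat, p_value = mannwhitneyu(digitb_musicians, digitb_nonmusicians,
                               alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
```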
Table 1. Mean and SD for participants' age. ANOVA did not reveal a statistically significant difference in the average age of the four groups of participants [F(3, 44) = 0.926, p = 0.436].

Group                  Age, mean (SD), years
Byzantine chanters     39.17 (13.361)
Classical musicians    37.92 (11.866)
Percussionists         32.75 (10.635)
Non-musicians          33.25 (10.678)
Total                  35.77 (11.661)
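As a hedged illustration of the statistic quoted in the Table 1 caption, the sketch below runs a one-way ANOVA on four age vectors with SciPy. The ages are invented placeholders with 12 values per group, which reproduces the reported degrees of freedom (3, 44); they are not the participants' data.

```python
# Hypothetical sketch of the one-way ANOVA reported in Table 1 (age by group).
# Four groups of 12 placeholder ages give df = (k - 1, N - k) = (3, 44).
from scipy.stats import f_oneway

age_byzantine   = [39, 45, 28, 55, 31, 42, 60, 25, 38, 47, 33, 27]
age_classical   = [38, 41, 26, 52, 30, 44, 58, 24, 36, 45, 32, 29]
age_percussion  = [33, 29, 25, 48, 27, 40, 50, 22, 31, 38, 28, 22]
age_nonmusician = [33, 30, 26, 49, 28, 41, 51, 23, 32, 39, 29, 18]

f_stat, p_value = f_oneway(age_byzantine, age_classical,
                           age_percussion, age_nonmusician)
print(f"F(3, 44) = {f_stat:.3f}, p = {p_value:.3f}")
```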
Table 2. Pure-tone thresholds for the 3 participants with mild high-frequency sensorineural hearing loss. Hearing Threshold (H. Thr.) for the Left Ear (LE) and Right Ear (RE). Thresholds at the other frequencies tested were normal and are therefore not shown.

Participant      4 kHz LE   8 kHz LE   Average H. Thr. LE   4 kHz RE   8 kHz RE   Average H. Thr. RE
Participant 1    40 dB      40 dB      16.7 dB              45 dB      40 dB      21.7 dB
Participant 2    30 dB      20 dB      12.5 dB              30 dB      20 dB      9.2 dB
Participant 3    45 dB      0 dB       7.5 dB               45 dB      20 dB      13.0 dB
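How the per-ear averages in Table 2 were computed is not spelled out here; the sketch below shows one plausible way such an average could be obtained from an audiogram, assuming a plain mean over six octave frequencies from 250 Hz to 8 kHz. The audiogram values are placeholders loosely modelled on Participant 1's left ear, not the measured thresholds at the untested-but-normal frequencies.

```python
# Hypothetical sketch: averaging pure-tone thresholds per ear, assuming a
# simple mean over six octave frequencies (250 Hz to 8 kHz). The normal-range
# values below are placeholders; only 4 and 8 kHz follow Table 2.
import statistics

audiogram_le = {250: 0, 500: 0, 1000: 10, 2000: 10, 4000: 40, 8000: 40}  # dB HL

average_le = statistics.mean(audiogram_le.values())
print(f"Average H. Thr. LE = {average_le:.1f} dB HL")  # 16.7 dB with these placeholder values
```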
Table 3. Frequency Discrimination Limen procedure example, for the three parts. "S" is the standard pure tone and "R" the roving pure tone (all values in Hz).

                     S      R      S      R      S      R      S      R      S      R
500 Hz (part 1)      500    530    500    510    500    504    500    520    500    501
1000 Hz (part 2)     1000   1002   1000   1050   1000   1020   1000   1004   1000   1003
2000 Hz (part 3)     2000   2005   2000   2020   2000   2003   2000   2010   2000   2001
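To make the structure of Table 3 more concrete, here is a small sketch of an adaptive frequency-discrimination track in the same spirit: a fixed standard tone is paired with a roving comparison tone whose offset shrinks after correct responses and grows after errors. The 1-up/1-down rule, starting offset, stopping rule, and simulated listener are assumptions for illustration, not the study's exact protocol.

```python
# Hypothetical sketch of an adaptive frequency-discrimination track like the
# example in Table 3. A real test would score the listener's responses; here
# the response is simulated with a fixed probability of being correct.
import random

def dfl_track(standard_hz: float, start_offset_hz: float = 30.0,
              n_trials: int = 10, p_correct: float = 0.8):
    """Return (standard, roving) frequency pairs from a simulated 1-up/1-down track."""
    offset = start_offset_hz
    pairs = []
    for _ in range(n_trials):
        pairs.append((standard_hz, standard_hz + offset))
        if random.random() < p_correct:
            offset = max(offset / 2, 1.0)   # correct answer: halve the difference
        else:
            offset = offset * 2             # wrong answer: double the difference
    return pairs

for std, rov in dfl_track(500.0):
    print(f"S = {std:.0f} Hz, R = {rov:.0f} Hz")
```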
Table 4. Mean scores (SDs in parentheses) for each group on the 1st (RH1) and 2nd (RH2) syllable of the WRRC in the rhythmic condition.

Group                  RH1            RH2
Classical musicians    13.33 (1.30)   12.42 (1.56)
Byzantine chanters     13.17 (1.11)   9.83 (2.20)
Percussionists         13.17 (1.33)   10.92 (1.37)
Non-musicians          12.25 (0.96)   11.33 (1.96)
Table 5. Mean (SD) scores for Speech in Babble (SinB) for the right ear (SinB_RE) and left ear (SinB_LE).

Group                  SinB_RE         SinB_LE
Classical musicians    −0.18 (0.36)    −1.23 (0.23)
Byzantine chanters     −0.08 (0.53)    −1.20 (0.19)
Percussionists         −0.30 (0.37)    −1.14 (0.28)
Non-musicians          −0.28 (0.39)    −1.11 (0.28)
Table 6. Mean (SD) scores in ms for the GIN (Gaps-In-Noise) test, for the right ear (GIN_RE) and left ear (GIN_LE).

Group                  GIN_RE         GIN_LE
Classical musicians    5.33 (1.43)    5.67 (1.23)
Byzantine chanters     5.50 (1.31)    5.83 (1.46)
Percussionists         5.50 (1.67)    5.58 (1.31)
Non-musicians          4.92 (0.66)    5.42 (0.66)
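For context on what the millisecond values in Table 6 refer to, the sketch below builds a Gaps-In-Noise-style stimulus: a burst of white noise containing a silent gap of a given duration. The overall duration, sampling rate, level, and gap placement are illustrative assumptions, not the published GIN recordings.

```python
# Hypothetical sketch of a GIN-style stimulus: white noise with a silent gap
# whose duration (in ms) is the quantity the thresholds in Table 6 refer to.
import numpy as np

def noise_with_gap(duration_s: float = 1.0, gap_ms: float = 5.0,
                   fs: int = 44100, seed: int = 0) -> np.ndarray:
    """White-noise segment with a centred silent gap of gap_ms milliseconds."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(duration_s * fs)) * 0.1
    gap_samples = int(round(gap_ms / 1000 * fs))
    start = (len(noise) - gap_samples) // 2
    noise[start:start + gap_samples] = 0.0   # insert the silent gap
    return noise

stimulus = noise_with_gap(gap_ms=5.0)        # ~5 ms gap, near the Table 6 means
print(stimulus.shape)
```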
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
