The Influence of Vowels on the Identification of Spoken Disyllabic Words in the Malayalam Language for Individuals with Hearing Loss
Abstract
1. Introduction
2. Experiment 1: Determine the Psychometric Characteristics
2.1. Methods
- Subjects: Forty-five adults aged 20–40 years (mean age = 32.46 years; SD = 3.17 years) with normal hearing participated in this experiment. Each participant underwent a comprehensive audiological evaluation. Otolaryngologic examinations confirmed the absence of abnormalities in all participants. Hearing thresholds were below 16 dB HL at all octave frequencies (250–8000 Hz) for all participants.
- Procedure: Two phonemically balanced wordlists in Malayalam [6] were used, each containing 25 meaningful disyllabic words (CVCV structure). Each wordlist was recorded, and the root mean square (RMS) levels were adjusted to 60 dB SPL. One audio track was generated for each wordlist, with a three-second interval between successive words. The participants were randomly assigned to nine groups of five individuals each. Each group was presented with two randomly selected audio tracks, ensuring no track was repeated. The wordlists were played from a personal computer and routed through a calibrated diagnostic audiometer (Piano, Inventis Inc., Padova, Italy) at nine intensity levels (−10, −5, 0, +5, +10, +15, +20, +30, and +40 dB). Participants listened to the stimuli through Sennheiser HDA200 headphones (Sennheiser electronic GmbH & Co., Wedemark, Germany), repeated each word aloud, and their responses were audio-recorded for further analysis.
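Experiment 1 derives the psychometric characteristics of the wordlists, i.e., the 50% point and slope of the performance–intensity function across the nine presentation levels. The sketch below illustrates that step by fitting a two-parameter logistic function to percent-correct scores; the scores are invented for illustration (not the study data), and the simple grid search stands in for the maximum-likelihood or regression fit an actual analysis (the paper's statistics were run in R) would use.

```python
import math

# Hypothetical pooled percent-correct scores at the nine presentation
# levels used in Experiment 1 (all score values are illustrative).
levels = [-10, -5, 0, 5, 10, 15, 20, 30, 40]
scores = [2, 10, 30, 55, 78, 90, 96, 99, 100]

def logistic(x, midpoint, slope):
    """Two-parameter logistic psychometric function, 0-100%."""
    return 100.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def sse(midpoint, slope):
    """Sum of squared errors between the model and the observed scores."""
    return sum((logistic(x, midpoint, slope) - y) ** 2
               for x, y in zip(levels, scores))

# Coarse grid search over midpoint (dB) and slope (per dB).
midpoint, slope = min(((m / 10.0, s / 100.0)
                       for m in range(-100, 200)
                       for s in range(5, 100)),
                      key=lambda p: sse(*p))

# The slope of a logistic at its 50% point, expressed in percent per dB,
# is 100 * slope / 4.
print(f"50% point ~ {midpoint:.1f} dB; slope ~ {100 * slope / 4:.1f} %/dB")
```

The 50% point and the slope at that point are the two quantities typically reported when characterizing speech-audiometry wordlists.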
2.2. Results
2.3. Discussion
3. Experiment 2: Decrease in PBmax Scores with Increased Severity of Sensorineural Hearing Loss
3.1. Methods
- Subjects: A total of 1000 participants with varying degrees of hearing loss, referred for hearing aid treatment, were recruited for this study. The subjects’ ages ranged from 17 to 84 years (mean = 52 years, median = 68 years, SD = 20 years). Each participant underwent a comprehensive audiological evaluation. The reports showed that all participants had bilateral sensorineural hearing loss (SNHL) of varying severities and were candidates for hearing aids. Otolaryngological assessments confirmed that no participants required medical or surgical intervention for their hearing loss.
- Procedure:
- Basic audiological evaluation. All audiological tests were conducted in a double-walled sound-treated room. Pure-tone air conduction thresholds were determined using a calibrated two-channel diagnostic audiometer (Piano, Inventis, Padova, Italy) with TDH-39 headphones (Telephonics, Farmingdale, NY, USA). Bone conduction hearing threshold levels (HTLs) were measured using a B-71 bone vibrator (Radioear, Middelfart, Denmark). HTLs were estimated using the modified Hughson–Westlake procedure. The pure-tone average (PTA) was calculated for each ear separately by averaging the air conduction HTLs at 0.5, 1, 2, and 4 kHz.
- Speech recognition threshold (SRT). The speech recognition threshold (SRT) in quiet was measured in the same session as pure-tone audiometry using paired words developed by the All-India Institute of Speech and Hearing in Malayalam. Recorded stimuli were presented through the audiometer. The SRT for each ear was determined using the method outlined by ASHA [31].
- Maximum speech identification score (PBmax). The PBmax score was obtained for each ear using two phonemically balanced Malayalam wordlists [6], as described in Experiment 1. One list was presented for each ear through the audiometer using recorded materials. The speech identification score (SIS) for each ear was estimated using the method outlined by ASHA [31]. Masking in the non-test ear was used when necessary according to standard masking rules [32].
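As a concrete illustration of the audiometric indices described above, the snippet below computes the four-frequency pure-tone average (0.5, 1, 2, and 4 kHz, as in the procedure) and assigns it to the PTA ranges used to summarize PBmax in the results. The audiogram values and function names are hypothetical.

```python
def pure_tone_average(thresholds):
    """Four-frequency PTA: mean air-conduction threshold (dB HL)
    at 0.5, 1, 2 and 4 kHz; thresholds maps frequency (kHz) -> dB HL."""
    return sum(thresholds[f] for f in (0.5, 1, 2, 4)) / 4.0

def pta_bin(pta):
    """Assign a PTA (dB HL) to the ranges used in the PBmax summary table."""
    bins = [(15, "-10-15"), (25, "16-25"), (40, "26-40"),
            (55, "41-55"), (70, "56-70"), (90, "71-90")]
    for upper, label in bins:
        if pta <= upper:
            return label
    return ">91"

# Illustrative audiogram for one ear (values are hypothetical).
audiogram = {0.25: 40, 0.5: 45, 1: 50, 2: 60, 4: 65, 8: 70}
pta = pure_tone_average(audiogram)
print(pta, pta_bin(pta))  # -> 55.0 41-55
```

The PTA is computed per ear, so a subject with asymmetric loss contributes two different bins; the study's summary table groups ears by these ranges.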
3.2. Results
3.3. Discussion
4. Experiment 3: Perception of Vowel Combinations in Different Degrees of Hearing Loss
4.1. Materials and Methods
- Subjects: Thirty-seven participants with various degrees of SNHL (mild = 12, moderate = 14, and moderately severe = 11) and flat-to-slightly-sloping audiogram configurations were recruited. All participants had symmetrical hearing loss. Their ages ranged from 25 to 60 years (mean = 34 years; SD = 8.8 years). The participants underwent a complete audiological evaluation, which confirmed SNHL of varying degrees; all were candidates for hearing aids. The otolaryngology reports confirmed that none of the participants required medical or surgical treatment for hearing loss. The means and SDs of the audiometric thresholds across 250–8000 Hz are shown in Figure 4.
- Stimuli: Disyllabic (CVCV) words were sourced from the Malayalam dictionary. Malayalam has eleven monophthongs and five diphthongs. From these, the 17 vowel pairs (combinations) occurring most commonly in Malayalam were selected (see Table 2). For each vowel combination, at least ten words with different consonants were collected and recorded. A 25-year-old female native speaker of Malayalam produced the words. The recording was conducted in a quiet environment with a noise level below 35 dB SPL, using a Rode NT-USB+ microphone (RØDE, Sydney, NSW, Australia). The speaker was instructed to speak at a normal conversational speed and loudness. Multiple samples of each word were recorded, and the sample with the minimum perturbation and maximum clarity was selected. Perturbation was quantified by jitter, shimmer, and the harmonics-to-noise and noise-to-harmonics ratios, whereas clarity was rated on a seven-point naturalness, clarity, and intelligibility scale by three professional speech–language pathologists. The recordings with minimum perturbation that were judged clearest were selected.
- Procedure: The procedures of stimulus presentation and recording the responses for SRT and SIS testing were the same as in Experiment 2.
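The stimulus-selection rule above (keep, for each word, the take combining low acoustic perturbation with high perceptual ratings) can be sketched as follows. All field names and values are hypothetical, and collapsing jitter, shimmer, and the noise ratio into a single sum is an assumption made for illustration; the study describes the criteria but not a specific combination formula.

```python
def select_take(takes):
    """Pick the take with the lowest combined perturbation, breaking
    ties by the highest mean clarity rating (seven-point scale)."""
    def key(take):
        # Assumed composite: simple sum of the perturbation measures.
        perturbation = take["jitter"] + take["shimmer"] + take["nhr"]
        mean_rating = sum(take["ratings"]) / len(take["ratings"])
        return (perturbation, -mean_rating)
    return min(takes, key=key)

# Three hypothetical takes of one word, each rated by three judges.
takes = [
    {"id": 1, "jitter": 0.8, "shimmer": 3.1, "nhr": 0.15, "ratings": [6, 6, 5]},
    {"id": 2, "jitter": 0.4, "shimmer": 2.2, "nhr": 0.10, "ratings": [7, 6, 7]},
    {"id": 3, "jitter": 0.5, "shimmer": 2.5, "nhr": 0.12, "ratings": [6, 7, 6]},
]
print(select_take(takes)["id"])  # -> 2
```

In practice the perturbation measures would come from an acoustic analysis tool and the ratings from the three speech–language pathologists described above.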
4.2. Results
4.3. Discussion
5. General Discussion
6. Limitations and Future Directions
7. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hall, J.W. Diagnostic applications of speech audiometry. In Proceedings of Seminars in Hearing; Thieme Medical Publishers, Inc.: New York, NY, USA, 1983; pp. 179–203. [Google Scholar]
- Lawson, G.D.; Peterson, M.E. Speech Audiometry; Plural Publishing, Incorporated: San Diego, CA, USA, 2011. [Google Scholar]
- Thibodeau, L.M. Speech audiometry. In Audiology: Diagnosis, 2nd ed.; Roeser, R.J., Michael, V., Hosford-Dunn, H., Eds.; Thieme: New York, NY, USA, 2007; pp. 288–313. [Google Scholar]
- Talbott, R.E.; Larson, V.D. Research needs in speech audiometry. In Proceedings of Seminars in Hearing; Thieme Medical Publishers, Inc.: New York, NY, USA, 1983; pp. 299–308. [Google Scholar]
- Peter, B.; Sanghvi, S.; Narendran, V. Inclusion of Interstate Migrant Workers in Kerala and Lessons for India. Indian J. Labour Econ. 2020, 63, 1065–1086. [Google Scholar] [CrossRef] [PubMed]
- Kacker, S.K.; Basavaraj, V. Indian Speech, Language and Hearing Tests—The ISHA Battery; AIIMS: New Delhi, India, 1990. [Google Scholar]
- Popescu, I.-I.; Naumann, S.; Kelih, E.; Rovenchak, A.; Sanada, H.; Overbeck, A.; Smith, R.; Cech, R.; Mohanty, P.; Wilson, A. Word length: Aspects and languages. Issues Quant. Linguist. 2013, 3, 224–281. [Google Scholar]
- Bayer, J.; Babu, M.T.H.; Bhattacharya, T. Linguistic Theory and South Asian Languages; John Benjamins Publishing Company: Amsterdam, The Netherlands, 2007. [Google Scholar]
- Jayaram, B.D.; Vidya, M.N. The Relationship between Word Length and Frequency in Indian Languages. Glottotheory 2009, 2, 62–69. [Google Scholar] [CrossRef]
- Carlo, M.A.; Wilson, R.H.; Villanueva-Reyes, A. Psychometric Characteristics of Spanish Monosyllabic, Bisyllabic, and Trisyllabic Words for Use in Word-Recognition Protocols. J. Am. Acad. Audiol. 2020, 31, 531–546. [Google Scholar] [CrossRef]
- Nissen, S.L.; Harris, R.W.; Jennings, L.J.; Eggett, D.L.; Buck, H. Psychometrically equivalent trisyllabic words for speech reception threshold testing in Mandarin. Int. J. Audiol. 2005, 44, 391–399. [Google Scholar] [CrossRef]
- Nissen, S.L.; Harris, R.W.; Dukes, A. Word recognition materials for native speakers of Taiwan Mandarin. Am. J. Audiol. 2008, 17, 68–79. [Google Scholar] [CrossRef] [PubMed]
- Yathiraj, A.; Manjula, P.; Vanaja, C.S.; Ganapathy, H. Prediction of Speech Identification Score Using Speech Intelligibility Index; All India Institute of Speech and Hearing: Mysore, India, 2013. [Google Scholar]
- Soh, K. Validation of Mandarin Speech Audiometry Materials in Singapore. Master’s Thesis, National University of Singapore, Singapore, 2017. [Google Scholar]
- Lee, G.J.C.; Lee, S.L.H. Development of SC-10: A psychometrically equivalent Singapore Mandarin disyllabic word list for clinical speech audiometry use. World J. Otorhinolaryngol. Head Neck Surg. 2021, 7, 247–256. [Google Scholar] [CrossRef]
- Dubno, J.R.; Lee, F.S.; Klein, A.J.; Matthews, L.J.; Lam, C.F. Confidence limits for maximum word-recognition scores. J. Speech Lang. Hear. Res. 1995, 38, 490–502. [Google Scholar] [CrossRef]
- Dirks, D.D.; Kamm, C.; Bower, D.; Betsworth, A. Use of Performance-Intensity Functions for Diagnosis. J. Speech. Lang. Hear. Dis. 1977, 42, 408–415. [Google Scholar] [CrossRef]
- Narne, V.K.; Sreejith, V.; Tiwari, N. Long-term average speech spectra and dynamic ranges of 17 Indian languages. Am. J. Audiol. 2021, 30, 1096–1107. [Google Scholar] [CrossRef]
- Moulin, A.; Richard, C. Lexical Influences on Spoken Spondaic Word Recognition in Hearing-Impaired Patients. Front. Neurol. 2015, 9, 476. [Google Scholar] [CrossRef] [PubMed]
- Miller, G.A.; Heise, G.A.; Lichten, W. The intelligibility of speech as a function of the context of the test materials. J. Exp. Psychol. 1951, 41, 329–335. [Google Scholar] [CrossRef] [PubMed]
- Boothroyd, A.; Nittrouer, S. Mathematical treatment of context effects in phoneme and word recognition. J. Acoust. Soc. Am. 1988, 84, 101–114. [Google Scholar] [CrossRef] [PubMed]
- Olsen, W.O.; Van Tasell, D.J.; Speaks, C.E. Phoneme and word recognition for words in isolation and in sentences. Ear. Hear. 1997, 18, 175–188. [Google Scholar] [CrossRef] [PubMed]
- Felty, R. Confusion patterns and response bias in spoken word recognition of German disyllabic words and nonword. In Proceedings of the 16th International Congress of the Phonetic Sciences, Saarbrücken, Germany, 6–10 August 2007; pp. 1957–1960. [Google Scholar]
- Manjula, P.; Antony, J.; Kumar, K.S.S.; Geetha, C. Development of phonemically balanced word lists for adults in Kannada language. J. Hear. Sci. 2020, 5, 22–30. [Google Scholar] [CrossRef]
- Kumar, S.R.; Mohanty, P. Speech recognition performance of adults: A proposal for a battery for telugu. Theory Pract. Lang. Stud. 2012, 2, 193–204. [Google Scholar] [CrossRef]
- Chinnaraj, G.; Neelamegarajan, D.; Ravirose, U. Development, standardization, and validation of bisyllabic phonemically balanced Tamil word test in quiet and noise. J. Hear. Sci. 2021, 11, 42–47. [Google Scholar] [CrossRef]
- Kumar, S.R.; Mohanty, P.; Ujawane, P.A.; Huzurbazar, Y.R. Conventional speech identification test in marathi for adults. Int. J. Otorhinolaryngol. Head Neck Surg. 2016, 2, 205–215. [Google Scholar] [CrossRef]
- Hassani, H.; Ahadi, M.; Jarollahi, F.; Jalaie, S. Development of Persian Monosyllabic and Disyllabic Words for Auditory Test of Adults and Evaluation of Their Face Validity Using Psychometric Function. Audit. Vestib. Res. 2024, 33, 202–207. [Google Scholar] [CrossRef]
- Turrini, M.; Cutugno, F.; Maturi, P.; Prosser, S.; A Leoni, F.; Arslan, E. Bisyllabic words for speech audiometry: A new italian material. Acta Otorhinolaryngol. 1993, 13, 63–77. [Google Scholar]
- Moulin, A.; Bernard, A.; Tordella, L.; Vergne, J.; Gisbert, A.; Martin, C.; Richard, C. Variability of word discrimination scores in clinical practice and consequences on their sensitivity to hearing loss. Eur. Arch. Otorhinolaryngol. 2017, 274, 2117–2124. [Google Scholar] [CrossRef] [PubMed]
- ASHA. Determining Threshold Level for Speech; No. GL1988-00008; ASHA: Rockville, MD, USA, 1988; Volume 30, pp. 85–89. [Google Scholar] [CrossRef]
- Roeser, R.J.; Clark, J.L. Clinical Masking. In Audiology: Diagnosis, 2nd ed.; Roeser, R.J., Michael, V., Hosford-Dunn, H., Eds.; Thieme: New York, NY, USA, 2007; pp. 602–658. [Google Scholar]
- R Core Team. R: A Language and Environment for Statistical Computing, version 4.2.1 (2022); R Foundation for Statistical Computing: Vienna, Austria, 2019. [Google Scholar]
- RStudio Team. RStudio: Integrated Development for R, version 2022.12.0+353; RStudio, Inc.: Boston, MA, USA, 2022. [Google Scholar]
- Wickham, H. ggplot2: Elegant Graphics for Data Analysis; Springer: New York, NY, USA, 2016. [Google Scholar]
- Smith, M.L.; Winn, M.B.; Fitzgerald, M.B. A Large-Scale Study of the Relationship Between Degree and Type of Hearing Loss and Recognition of Speech in Quiet and Noise. Ear. Hear. 2024, 45, 915–928. [Google Scholar] [CrossRef] [PubMed]
- Fitzgerald, M.B.; Gianakas, S.P.; Qian, Z.J.; Losorelli, S.; Swanson, A.C. Preliminary Guidelines for Replacing Word-Recognition in Quiet With Speech in Noise Assessment in the Routine Audiologic Test Battery. Ear. Hear. 2023, 44, 1548–1561. [Google Scholar] [CrossRef] [PubMed]
- Margolis, R.H.; Wilson, R.H.; Saly, G.L. Clinical Interpretation of Word-Recognition Scores for Listeners with Sensorineural Hearing Loss: Confidence Intervals, Limits, and Levels. Ear. Hear. 2023, 44, 1133–1139. [Google Scholar] [CrossRef]
- Neha, S.; Narne, V.K. Comparison of Presentation Levels to Maximize Word Recognition Scores in Individuals with Sensorineural Hearing Loss; JSS Institute of Speech and Hearing: Mysore, India, 2017. [Google Scholar]
- Dhanya, M. Perceptual Cues of Coarticulation in Malayalam in Normal Hearing and Hearing Impaired Individuals; University of Mysore: Mysore, India, 2022. [Google Scholar]
- Fogerty, D.; Humes, L.E. Perceptual contributions to monosyllabic word intelligibility: Segmental, lexical, and noise replacement factors. J. Acoust. Soc. Am. 2010, 128, 3114–3125. [Google Scholar] [CrossRef]
- Owren, M.J.; Cardillo, G.C. The relative roles of vowels and consonants in discriminating talker identity versus word meaning. J. Acoust. Soc. Am. 2006, 119, 1727–1739. [Google Scholar] [CrossRef]
- Buss, E.; Felder, J.; Miller, M.K.; Leibold, L.J.; Calandruccio, L. Can Closed-Set Word Recognition Differentially Assess Vowel and Consonant Perception for School-Age Children With and Without Hearing Loss? J. Speech. Lang. Hear. Res. 2022, 65, 3934–3950. [Google Scholar] [CrossRef]
- Chen, F.; Wong, M.L.Y.; Zhu, S.; Wong, L.L.N. Relative contributions of vowels and consonants in recognizing isolated Mandarin words. J. Phon. 2015, 52, 26–34. [Google Scholar] [CrossRef]
- Fogerty, D.; Kewley-Port, D.; Humes, L.E. The relative importance of consonant and vowel segments to the recognition of words and sentences: Effects of age and hearing loss. J. Acoust. Soc. Am. 2012, 132, 1667–1678. [Google Scholar] [CrossRef]
- Anderson, S.; Parbery-Clark, A.; White-Schwoch, T.; Drehobl, S.; Kraus, N. Effects of hearing loss on the subcortical representation of speech cues. J. Acoust. Soc. Am. 2013, 133, 3030–3038. [Google Scholar] [CrossRef]
- Hedrick, M.; Charles, L.; Street, N.D. Vowel Perception in Listeners With Normal Hearing and in Listeners With Hearing Loss: A Preliminary Study. Clin. Exp. Otorhinolaryngol. 2015, 8, 26–33. [Google Scholar] [CrossRef] [PubMed]
- Liberman, A.M.; Delattre, P.; Cooper, F.S. The role of selected stimulus-variables in the perception of the unvoiced stop consonants. Am. J. Psychol. 1952, 65, 497–516. [Google Scholar] [CrossRef] [PubMed]
- Dubno, J.R.; Levitt, H. Predicting consonant confusions from acoustic analysis. J. Acoust. Soc. Am. 1981, 69, 249–261. [Google Scholar] [CrossRef] [PubMed]
- Woods, D.L.; Yund, E.W.; Herron, T.J.; Cruadhlaoich, M.A.I.U. Consonant identification in consonant-vowel-consonant syllables in speech-spectrum noise. J. Acoust. Soc. Am. 2010, 127, 1609–1623. [Google Scholar] [CrossRef]
- Redford, M.A.; Diehl, R.L. The relative perceptual distinctiveness of initial and final consonants in CVC syllables. J. Acoust. Soc. Am. 1999, 106, 1555–1565. [Google Scholar] [CrossRef]
- Sagi, E.; Svirsky, M.A. Contribution of formant frequency information to vowel perception in steady-state noise by cochlear implant users. J. Acoust. Soc. Am. 2017, 141, 1027–1038. [Google Scholar] [CrossRef]
PTA Range (dB HL) | Number of Subjects | Mean (%) | SD | 25th Percentile | 75th Percentile | Min | Max
---|---|---|---|---|---|---|---
−10–15 | 12 | 100.0 | 0.0 | 100 | 100 | 100 | 100
16–25 | 95 | 97.0 | 14.0 | 96 | 100 | 20 | 100
26–40 | 198 | 95.0 | 11.0 | 92 | 100 | 35 | 100
41–55 | 223 | 91.0 | 10.0 | 85 | 100 | 45 | 100
56–70 | 241 | 79.0 | 19.0 | 72 | 92 | 24 | 100
71–90 | 225 | 66.0 | 25.0 | 44 | 84 | 20 | 100
≥91 | 6 | 34.0 | 16.0 | 24 | 45 | 24 | 56
Vowel Combination | Frequency of Occurrence (Current Wordlist) | Percentage of Occurrence (Current Wordlist) | Percentage of Occurrence (Malayalam Language)
---|---|---|---
/a-a/ | 12 | 40 | 14
/a-u/ | 6 | 20 | 1.4
/u-u/ | 4 | 13 | 0.3
/i-a/ | 2 | 7 | 0.3
/i:-a/ | 2 | 7 | 5.4
/e:-a/ | 2 | 7 | 0.3
/o-a/ | 1 | 3 | 0.3
/ai-a/ | 1 | 3 | 0.3
Vowel Combination | F | p | η²
---|---|---|---
ə-u | 72.87 | <0.001 | 0.811
ə-ə | 16.74 | <0.001 | 0.496
a:-u | 8.59 | <0.01 | 0.336
a:-ə | 4.87 | 0.23 | 0.223
a:-ə̃ | 10.44 | <0.05 | 0.381
a:-i | 10.84 | <0.05 | 0.389
ai-i | 60.90 | <0.001 | 0.782
ai-ə̃ | 7.43 | <0.05 | 0.304
e-ə | 39.68 | <0.001 | 0.700
i-a | 3.20 | 0.9 | 0.158
i-i | 31.32 | <0.001 | 0.648
e:-u | 5.97 | 0.1 | 0.260
i:-ə | 8.91 | <0.05 | 0.344
i:-ai | 17.78 | <0.001 | 0.511
i-ə̃ | 6.62 | 0.068 | 0.280
i:-i | 15.73 | <0.005 | 0.481
o:-u | 11.25 | <0.01 | 0.398
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Narne, V.K.; Mohan, D.; Badariya, M.; Avileri, S.D.; Jain, S.; Ravi, S.K.; Krishna, Y.; Hussain, R.O.; Almudhi, A. The Influence of Vowels on the Identification of Spoken Disyllabic Words in the Malayalam Language for Individuals with Hearing Loss. Diagnostics 2024, 14, 2707. https://doi.org/10.3390/diagnostics14232707