Article

Examining the Role of Phoneme Frequency in First Language Perceptual Attrition

1 Department of Linguistics, Boston University, Boston, MA 02215, USA
2 Department of Linguistics, University of Manitoba, Winnipeg, MB R3T 2N2, Canada
* Authors to whom correspondence should be addressed.
Languages 2023, 8(1), 53; https://doi.org/10.3390/languages8010053
Submission received: 1 September 2022 / Revised: 25 January 2023 / Accepted: 28 January 2023 / Published: 10 February 2023

Abstract

In this paper, we follow up on previous findings concerning first language (L1) perceptual attrition to examine the role of phoneme frequency in influencing variation across L1 contrasts. We hypothesized that maintenance of L1 Korean contrasts (i.e., resistance to attrition) in L1 Korean-L2 English bilinguals would be correlated with frequency, such that better-maintained contrasts would also be more frequent in the L1. To explore this hypothesis, we collected frequency data on three Korean contrasts (/n/-/l/, /t/-/t*/, /s/-/s*/) and compared these data to perceptual attrition data from a speeded sequence recall task testing the perception and phonological encoding of the target contrasts. Results only partially supported the hypothesis. On the one hand, /n/-/l/, the best-maintained contrast, was the most frequent contrast overall. On the other hand, /n/-/l/ also evinced the greatest frequency asymmetry between the two members of the contrast (meaning that it was the least important to perceive accurately); furthermore, /s/-/s*/, which was less well maintained than /t/-/t*/, was actually more frequent than /t/-/t*/. These results suggest that disparities in perceptual attrition across contrasts cannot be attributed entirely to frequency differences. We discuss the implications of the findings for future research examining frequency effects in L1 perceptual change.

1. Introduction

In contrast to formalist linguistic frameworks (e.g., Chomsky 1986; Baechler and Pröll 2019), research from functionalist and in particular usage-based perspectives (e.g., Bybee 2001, 2006; Croft 2001; Halliday and Matthiessen 2004; Wedel et al. 2013a, 2013b) has pointed out various ways in which context and experience seem to shape how language is processed, acquired and changed. Among the aspects of experience that have been examined in this regard, frequency—that is, how often something occurs in a language and is therefore able to be experienced and/or used by the language user—has played a central role in explaining variation in the production and perception of spoken language (e.g., Jescheniak and Levelt 1994; Baus et al. 2008).
In this paper, we take a close look at context-specific phoneme frequency as a predictor of variation in perception of first language (L1) phonological contrasts by sequential bilinguals living in a second language (L2) environment. More specifically, we are concerned with accounting for variation in L1 perceptual attrition, the changes that occur in perception of the L1 as a consequence of extensive exposure to an L2. Although the term “attrition” may be used in contradistinction to other terms for L1 change, such as “drift” (de Leeuw and Chang 2023), we use the term “perceptual attrition” here in a very general sense, to describe any L1 change resulting in divergence from the perceptual patterns of L1 listeners living in an L1-dominant environment, regardless of the timing of L2 exposure (e.g., before or after a “critical period” for language acquisition) or the predicted persistence of the change over the lifespan.
Our motivation for examining frequency effects on bilinguals’ L1 perception comes from two observations. First, there is often considerable variation in L1 perception by bilinguals, particularly bilinguals who have become dominant in their L2 early in life (i.e., “heritage speakers”; see Chang 2021 for a recent review), which merits further study. Second, despite playing an influential role in psycholinguistic research on word recognition and production and in L2 acquisition research more generally (see, e.g., Ellis 2002, 2013; Gor et al. 2021), frequency has not had a major impact on work examining bilingual speech perception specifically. By bringing the study of frequency into research on bilingual speech perception, we therefore aim to bridge the divide between psycholinguistic research on frequency effects and experimental work on bilingual speech perception and to contribute to an enriched view of variation in perceptual attrition.
In the remainder of the paper, we describe a corpus-based study addressing the following research question: is variation in perceptual attrition across phonological contrasts correlated with differences in their frequency of occurrence? In other words, is it possible to account for variation in perceptual attrition in terms of frequency effects? We begin by reviewing previous work on perceptual attrition and on frequency effects in speech production and perception, which leads to the two predictions tested in this study.

1.1. Perceptual Attrition

A complex process influenced by several factors including both top-down and bottom-up information, speech perception is fundamentally malleable and “there has been a growing appreciation of the need to understand how the perceptual system dynamically changes in order to allow listeners to successfully process the variable input and new words that they constantly encounter” (Samuel 2011, p. 49). One source of change in the L1 perceptual system is L2 exposure, which may result in short- or long-term changes to perception of L1 speech (referred to here under the umbrella term “perceptual attrition”). Indeed, some studies have shown that even relatively little L2 exposure—on the order of hours, days or weeks, as opposed to years—may lead to perceptual attrition (Tice and Woodley 2012; Gong et al. 2016; Kellogg and Chang 2023). For example, Tice and Woodley (2012) showed that novice adult L1 English learners of L2 French shifted the perceptual boundary between English voiced and voiceless stops in terms of voice onset time (VOT) as early as three weeks after the beginning of French instruction.
Evidence of perceptual attrition—and, crucially, of variation in perceptual attrition—is also found in the literature on earlier-onset bilinguals. In one study examining listeners’ ability to discriminate Russian plain and palatalized consonants (Lukyanchenko and Gor 2011), English-dominant heritage speakers of Russian showed a perceptual advantage over both high- and low-proficiency L2 speakers and often, but not always, patterned like L1-dominant native speakers. In particular, whereas they were native-like on both labials and coronals in word-initial position, as well as on coronals in word-final position, they were not native-like on labials in word-final position, where the plain-palatalized contrast is “less acoustically salient” (Lukyanchenko and Gor 2011, p. 424). In a different study examining discrimination and speeded sequence recall of Korean and English consonant contrasts (Lee-Ellis 2012), English-dominant heritage speakers of Korean also outperformed L2 learners and patterned like L1-dominant native speakers on the Korean-specific contrast /s/-/s*/, but not across the board. To be specific, they were native-like on /s/-/s*/ in a single-talker discrimination task (i.e., with low talker variability), but not in a multi-talker discrimination task or in the memory-intensive sequence recall task. Together, these previous findings suggest that although early bilinguals may often appear to be native-like in their perception of the L1, they also tend to diverge from L1-dominant native speakers in some ways, depending on the L1 contrast, context and task demands.
Later research on L1 Korean-L2 English bilinguals living in the US investigated the role of multiple factors in perceptual attrition, including socio-demographic and linguistic variables (Ahn et al. 2017). In this study (described more fully in Section 2.1, Section 2.2 and Section 2.3), three factors were consistently found to be significant predictors of L1 perceptual accuracy: age of reduced contact with the L1, amount of L1 education and L1 contrast type. A positive effect of age of reduced contact provided evidence that there was, indeed, perceptual attrition (or possibly “incomplete acquisition”; see Montrul 2008) among the bilinguals in the US as compared to L1-dominant Korean speakers in Korea: the earlier a bilingual’s age of reduced contact with Korean, the less likely they were to perceive L1 contrasts accurately. A positive effect of L1 education indicated that formal educational experience with the L1 mitigated perceptual attrition, possibly through a kind of phonological reinforcement provided by literacy. And finally, the effect of L1 contrast type suggested that phonological similarity with an L2 contrast mitigated perceptual attrition as well. To be specific, the contrast type closely corresponding to an L2 contrast (/n/-/l/) showed no perceptual attrition (i.e., no effect of age of reduced contact on accuracy), whereas the contrast type less closely resembling an L2 contrast (/t/-/t*/) showed significant perceptual attrition. Crucially, the contrast type resembling no L2 contrast (/s/-/s*/) showed the most perceptual attrition (i.e., the strongest effect of age of reduced contact), further supporting the crosslinguistic similarity account of the disparities in perceptual attrition across contrasts.
Although the variation in attrition across L1 contrasts observed in Lukyanchenko and Gor (2011) and Ahn et al. (2017) was attributed to differences in perceptual salience or crosslinguistic similarity, another potential explanation for this variation, alluded to in Ahn et al. (2017, pp. 726–27), is frequency effects. In the next section, we review prior findings on frequency effects in speech production and perception, in service of motivating the frequency-based predictions for perceptual attrition tested in the current study.

1.2. Frequency Effects

Frequency of occurrence is known to affect linguistic behavior and language change in a variety of ways, including the learning of an L2 (for a recent overview of frequency effects in L2 acquisition, see Ellis 2013). Much of the literature on frequency effects has focused on word frequency in English, finding effects of word frequency on speech production at the lemma level (e.g., Navarrete et al. 2006) and at the specific-word (as opposed to homophone) level (e.g., Caramazza et al. 2001; Cuetos et al. 2010).
A different strand of research related to word frequency has investigated effects of functional load and differences in frequency between the two members of a minimal pair on the diachronic merger of phonemic contrasts. In this work, the functional load of a contrast (measured in terms of minimal pair count; i.e., how frequently the contrast is used to distinguish words) was found to be inversely correlated with the likelihood of merger across diverse languages (including Indo-European languages such as English and Spanish, as well as non-Indo-European languages such as Korean and Cantonese), while phoneme probability, in the case of phonemes distinguishing no minimal pairs, was found to be positively correlated with the likelihood of merger (Wedel et al. 2013a). Furthermore, the predictiveness of minimal pair count was enhanced by certain aspects of minimal pairs: contrasting at the lemma (as opposed to surface form) level, having the same syntactic category (as opposed to different syntactic categories) and having similar (as opposed to different) frequencies (Wedel et al. 2013b). In related research, the production of VOT in English voiceless stops was found to be influenced by the existence and frequency of a minimally contrasting (voiced stop) competitor word (Nelson and Wedel 2017; see also Baese-Berk and Goldrick 2009). Together, these findings suggest that the influence of a phonologically similar word in the lexicon is enhanced by characteristics that make it a stronger competitor to a given target word, including high frequency. Moreover, because this influence increases the distance between target and competitor words, it can be interpreted as designed, at least in part, to maximize the likelihood of accurate perception by listeners.
Other work has focused on the frequencies of units smaller than the word, including syllables, phonemes and phoneme sequences. For example, in one study of L1 Dutch speakers, significant effects of syllable frequency—in particular, of the first syllable—were found in the production of nonce words, supporting the view that speakers have a “mental syllabary” of precompiled articulatory plans for different syllables that is drawn upon in speech production (Cholin et al. 2006). The results of a series of experiments manipulating the delay between phonological and phonetic encoding further suggested that the locus of syllable frequency effects is in phonetic encoding specifically (Laganaro and Alario 2006). Effects of syllable frequency, which were often independent of phoneme frequency, were also found in the production errors and nonce word repetition accuracy of French-, Italian- and Spanish-speaking aphasic subjects (Laganaro 2005). As for phoneme frequency effects, data on speech errors elicited from English speakers suggested that more frequent phonemes are relatively “strong”, more resistant to errors as target sounds and more likely to be erroneously substituted for other target sounds (Levitt and Healy 1985), as well as more likely to be produced in cases of aphasia (Robson et al. 2003). Along similar lines, research examining the role of sequence frequency found that children repeated frequent sequences of phonemes more accurately, more quickly and less variably than infrequent sequences (Munson 2001). Thus, findings on frequencies at levels below the word largely converge with those on word-level frequencies in showing a facilitative effect of high frequency, which seems to benefit both encoding and access.
Research on frequency effects in perception has similarly shown a facilitative effect of high frequency. The literature on auditory word recognition has provided extensive evidence that high-frequency words are processed more efficiently than low-frequency words, albeit with individual differences (for a recent review, see Brysbaert et al. 2018). For example, results on French showed a word frequency effect on lexical access of both open- and closed-class items (Segui et al. 1982), an effect extending to suffixed forms (Meunier and Segui 1999), while data on perception of auditorily ambiguous speech tokens from voicing continua showed a response bias toward high-frequency words (Connine et al. 1993; but cf. Politzer-Ahles et al. 2020). Neuroimaging data further indicated that the word frequency effect is present from early stages of lexical activation (Dufour et al. 2013). In connection with frequency-based asymmetries in perception, recent work has pointed toward a central role for the listener in explaining the observed variability in word frequency effects in sound change (Todd et al. 2019). On the other hand, findings on phoneme frequency effects are mixed: data from Dutch listeners showed only a limited effect of phoneme frequency on perception of diphones (Warner et al. 2005), whereas data from English and Japanese listeners showed a significant frequency-based bias in perception of place of articulation in /t/-to-/k/ speech continua (Yoneyama et al. 2011). In short, what we can take away from these findings is that, as in speech production, frequency plays a significant and often facilitative role in speech perception, which motivates a systematic investigation of frequency effects in perceptual attrition.

1.3. The Present Study

In the present study, we followed up on the findings of Ahn et al. (2017) to consider an alternative explanation of the disparities in perceptual attrition across L1 Korean contrasts observed in L1 Korean-L2 English bilinguals. Recall that these disparities were attributed by Ahn et al. to differences in phonological similarity to L2 English contrasts. Here we tested the hypothesis that these disparities reflect frequency effects instead of differences in crosslinguistic similarity per se. Note that this hypothesis follows naturally from an exemplar-based model of phonology in which linguistic experience with all aspects of the speech signal is central to the representation and access of phonological knowledge (Johnson 1997; Pierrehumbert 2001). In such a model, higher frequency (i.e., more exemplars) of a given linguistic unit should influence speech perception by way of increasing the resting activation level of certain phonological representations over others, thereby making those representations more accessible. Thus, an exemplar model can explain the types of frequency effects in speech perception discussed above.
Under an exemplar model, we made two predictions about the relative frequencies of the three Korean contrasts tested in Ahn et al. (2017). Our first prediction (P1) was that the most well-maintained (i.e., least attrited) contrast (/n/-/l/) would be the most frequent of the three contrasts overall. Our second prediction (P2) was that the least well-maintained (i.e., most attrited) contrast (/s/-/s*/) would be the least frequent contrast overall. That is, we predicted there to be a frequency order corresponding to the order of contrast maintenance (where “>” indicates both “more frequent than” and “more well-maintained than”): /n/-/l/ > /t/-/t*/ > /s/-/s*/. To put this in terms of attrition, we predicted that degree of attrition would be inversely correlated with frequency, under the (exemplar-based) logic that less frequent contrasts, due to mostly decayed exemplars, have lower resting activation levels and, therefore, are more vulnerable to attrition compared to more frequent contrasts.
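To make this exemplar-based logic concrete, the following toy sketch (our own illustration, not a model taken from the works cited above; all exposure histories and parameter values are arbitrary) shows how summing exponentially decaying exemplar traces yields a lower resting activation for a less frequent category after a period of reduced L1 input.

```python
import math

# Toy illustration of the exemplar-based logic behind P1 and P2 (our own sketch,
# not drawn from the cited exemplar models): each exposure to a category lays down
# a trace that decays over time, and resting activation is the sum of the remaining
# trace strengths. A less frequent category accumulates fewer traces, so after a
# period of reduced L1 input its resting activation is lower.

def resting_activation(exposure_times, now, decay_rate=0.001):
    """Sum of exponentially decayed traces for exposures at the given times (in days)."""
    return sum(math.exp(-decay_rate * (now - t)) for t in exposure_times)

# Hypothetical exposure histories: a frequent vs. an infrequent category,
# both with no new L1 exposures after day 2000.
frequent = list(range(0, 2000, 2))      # one exposure every 2 days up to day 2000
infrequent = list(range(0, 2000, 20))   # one exposure every 20 days up to day 2000

print(resting_activation(frequent, now=3000))    # higher residual activation
print(resting_activation(infrequent, now=3000))  # lower residual activation
```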
To be clear, our predictions were related to degree of attrition, which we measured in terms of difference from a baseline (here, accuracy levels of L1 listeners in an L1-dominant environment) and not to accuracy levels themselves. This is because the raw accuracy of perceiving a given contrast may be influenced by other factors apart from frequency, such as perceptual salience or the psychoacoustic distance between the members of the contrast. As a result, contrasts differ in their perceptibility even among L1-dominant native listeners who are not undergoing attrition, as shown in Lukyanchenko and Gor (2011) as well as Ahn et al. (2017). In the case of Ahn et al. (2017), L1 Korean controls living in Korea showed high accuracies overall, but they were the most accurate on /s/-/s*/ (85% accuracy on average), less accurate on /n/-/l/ (73% accuracy) and the least accurate on /t/-/t*/ (70% accuracy). By contrast, bilinguals were the most accurate on /n/-/l/ (81% accuracy), less accurate on /s/-/s*/ (64% accuracy) and the least accurate on /t/-/t*/ (50% accuracy). Therefore, in terms of attrition (i.e., difference from controls), bilinguals showed the least attrition on /n/-/l/ (actually 8% higher accuracy than controls), more attrition on /t/-/t*/ (20% lower accuracy than controls) and the most attrition on /s/-/s*/ (21% lower accuracy than controls). This variation in attrition was reflected in a statistically significant effect of age of reduced contact on perceptual accuracy for /t/-/t*/ and /s/-/s*/ but not /n/-/l/ and, furthermore, in a significantly stronger age effect for /s/-/s*/ than /t/-/t*/. Thus, to reiterate, the main findings of Ahn et al. (2017) concerned a hierarchy of attrition as opposed to raw accuracy, and this was the basis for our predictions P1 and P2.
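For concreteness, the arithmetic behind this hierarchy can be reproduced with a minimal sketch (not the analysis code of Ahn et al. 2017; it simply recomputes degree of attrition from the group-mean accuracies reported above).

```python
# Minimal sketch: degree of attrition as the difference between control and
# bilingual group-mean accuracies (percentages as reported in Ahn et al. 2017).

control_accuracy = {"/n/-/l/": 0.73, "/t/-/t*/": 0.70, "/s/-/s*/": 0.85}
bilingual_accuracy = {"/n/-/l/": 0.81, "/t/-/t*/": 0.50, "/s/-/s*/": 0.64}

# Attrition = control accuracy minus bilingual accuracy (positive = lower bilingual accuracy).
attrition = {c: control_accuracy[c] - bilingual_accuracy[c] for c in control_accuracy}

for contrast, diff in sorted(attrition.items(), key=lambda kv: kv[1]):
    print(f"{contrast}: {diff:+.0%}")
# Expected order of attrition: /n/-/l/ (-8%) < /t/-/t*/ (+20%) < /s/-/s*/ (+21%)
```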
To test these predictions, we collected corpus data on the frequencies of the three target contrasts /n/-/l/, /t/-/t*/ and /s/-/s*/, assuming that the relevant frequencies would be for words whose beginning—that is, first syllable, including consonant-vowel (CV) and consonant-vowel-consonant (CVC) syllable types—phonetically resembles the beginning of the stimulus items where the target contrasts had occurred. This assumption was based on the findings discussed above that suggest a privileged status of frequency effects related to the first syllable of words (Cholin et al. 2006) as well as the “cohort theory” of auditory word recognition (e.g., Marslen-Wilson and Welsh 1978; Slowiaczek et al. 1987), according to which word beginnings have an outsize influence on speech processing. The design of this corpus study is described in further detail below.

2. Materials and Methods

In this section, we first summarize the methodology of the perceptual study in Ahn et al. (2017) and then the methodology of the corpus analyses carried out in the current study to explore the role of frequency in explaining the disparities among contrasts observed in Ahn et al. (2017).

2.1. Participants

Participants in Ahn et al. (2017) comprised a group of L1 Korean-L2 English bilinguals based in the US (N = 21; 16 female; Mage = 26.8 yr, SD = 7.8) and a group of age-matched L1 Korean controls based in Seoul, South Korea (N = 17; 14 female; Mage = 26.8 yr, SD = 7.6). The bilinguals differed from the controls mainly in their exposure to English: while the controls had received primarily educational exposure to English as an L2 (as is compulsory in South Korea), the bilinguals had been immersed in an English-dominant environment (i.e., removed from a Korean-dominant environment) for several years by the time of testing. The bilingual group was constructed to sample a wide range in age of reduced contact with Korean (corresponding to age of arrival in the US), from 3 to 15 years (M = 9.5, SD = 3.4); the majority (12/21) self-reported as dominant in English in a detailed language background questionnaire.
As for the bilinguals’ dialectal background, exposure to non-standard Korean dialects was not documented systematically, but according to post-study debriefings, both the bilinguals and their parents had been living in Seoul—the same place of residence as the Korean control group—before immigrating to the US. Consequently, it is reasonable to assume that the effect of exposure to non-standard dialects in the bilingual group was minimal or, at least, not significantly different from the effect in the Korean control group. Recent research on dialectal variation in Korean suggests further that, for relatively young speakers like those in the bilingual group, dialect leveling has resulted in the loss of certain non-standard features that may impinge upon perception of the target contrasts (e.g., merger of the /s/-/s*/ contrast in North Gyeongsang Korean; see Jang and Shin 2006). In short, we have no reason to believe that the bilinguals in Ahn et al. (2017) were influenced by dialectal exposure encouraging perceptual merger of any of the target contrasts.

2.2. Speech Materials

There were three critical Korean contrasts tested in the focal perceptual task in Ahn et al. (2017): /n/-/l/, a contrast between coronal sonorants that also exists in English; /t/-/t*/, a tenseness contrast between lenis and fortis coronal plosives that bears similarities to the English voicing contrast between /t/ and /d/; and /s/-/s*/, another tenseness contrast between lenis and fortis coronal fricatives that does not resemble any English contrast. For each of these contrasts, a minimal pair of nonce words was created to embed the target sounds in word-initial position before the vowel /a/. Thus, three minimal pairs were created: /nakha/-/lakha/, /takha/-/t*akha/ and /sakha/-/s*akha/. Multiple tokens of each minimal pair were audio-recorded by six L1 Korean talkers (three female) to provide the speech stimuli for the perceptual task.

2.3. Procedure

The focal perceptual task in Ahn et al. (2017) was a speeded sequence recall task. Following an initial familiarization phase showing participants the association between two response buttons and the two members of a target contrast and then a short practice session with feedback, participants completed test trials in which they were played a sequence of four speech stimuli, uttered by four different talkers with an inter-stimulus interval (ISI) of 150 ms, and had to reproduce the sequence from memory using button presses as quickly and accurately as possible. For example, for the contrast /n/-/l/, if /nakha/ was associated with the button ‘1’ and /lakha/ with the button ‘2’, a test trial playing the sequence /nakha/-/lakha/-/lakha/-/nakha/ would need to be responded to with the sequence of button presses 1-2-2-1 in order for the response to be coded as accurate. This task consisted of 84 test trials in all (28 per contrast), which were blocked by contrast.
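As an illustration of this scoring scheme, the following sketch (hypothetical; not the original experiment script) codes a trial as accurate only when the entire recalled sequence matches the stimulus sequence.

```python
# Hypothetical scoring sketch for the speeded sequence recall task described above
# (not the original experiment script). A trial is coded as accurate only when the
# entire response sequence matches the stimulus sequence.

def score_trial(stimulus_sequence, response_sequence):
    """Return 1 if the whole recalled sequence is correct, else 0."""
    return int(stimulus_sequence == response_sequence)

# Example: /nakha/ mapped to button 1, /lakha/ to button 2.
target = [1, 2, 2, 1]           # stimulus sequence /nakha/-/lakha/-/lakha/-/nakha/
response = [1, 2, 2, 1]         # participant's button presses
print(score_trial(target, response))  # 1 (accurate)
```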
In Ahn et al. (2017), the main dependent measure was the likelihood of accuracy in the speeded sequence recall task, while the independent measures comprised various socio-demographic and language background variables, such as age, gender, relative L1 use and amount of formal L1 education. The full dataset, along with the study materials used to collect data on the independent measures, is available open-access on the Open Science Framework (OSF) at https://osf.io/tuhwr/. A detailed description of the approach taken to the statistical analyses relating the independent measures to the dependent measure of accuracy is provided in Ahn et al. (2017, pp. 709–11).

2.4. Corpus Analyses

The corpus analyses in the current study were based on the Korean language corpus published by the National Institute of Korean Language (NIKL 2005). Given that there are several options for Korean corpora (for a recent review, see Cho et al. 2020), we had four criteria for selecting a target corpus: (1) reflecting spoken Korean in at least part of the corpus, (2) having been constructed with clear guidelines, (3) being publicly available and (4) including a large amount of labeled and tabulated data that could be easily queried. Consisting of 58,437 lemmas and 3 million wordforms (어절), the National Institute of Korean Language (NIKL) corpus met all of these criteria. Furthermore, the NIKL corpus has been analyzed in previous phonetic studies of Korean (e.g., Yoon and Kang 2014), facilitating connections between prior findings and our results.
Based on the NIKL corpus, we gathered overall frequencies and spoken frequencies (i.e., frequencies in the spoken part of the corpus only) on all Korean words whose beginning overlapped phonologically with that of a stimulus item in Ahn et al. (2017). For example, in connection with the stimulus item /nakha/, we gathered frequencies on all words with the first syllable /na/ (e.g., /na.o.ta/ ‘to come out’, /na.la/ ‘country’), as well as the first syllable /nak/ (e.g., /nak.in/ ‘stigma’, /nak.ha.san/ ‘parachute’), since the aspirated [kh] in Korean, as in /nakha/, can result from the coalescence of /k/ and an adjacent /h/.1 Our narrow focus on words with this degree of phonological overlap with the stimulus items—in particular, overlap in the vowel following the initial target consonant—was motivated by previous findings indicating that the phonetic realization of the target contrasts, especially /s/-/s*/, is highly influenced by vowel context (Chang 2013). For example, although /s/ and /s*/ freely occur with all of the monophthongal vowels of Korean and /s/ is characterized by longer aspiration duration than /s*/ across vowel environments, the difference in aspiration duration between /s/ and /s*/ is much larger in a low vowel environment (i.e., preceding /a/, an open vowel tract configuration) than preceding the high vowels /i ɯ/; similarly, /s/ is associated with a steeper spectral tilt in the following vowel (for various vowel environments) than /s*/, but the difference is largest in the environment of /a/. Such context effects on the fine-grained phonetic realization of the /s/-/s*/ contrast, which also extend to other properties distinguishing the contrast such as frication duration, intensity onset and F1 onset, argue in favor of considering, in the first instance, frequencies from the contexts corresponding most closely to the ones tested in Ahn et al. (2017). Thus, in line with an exemplar model, we assumed that the frequencies exerting the greatest influence on perception of the /s/-/s*/ contrast in /sakha/ vs. /s*akha/, for example, would be those of contextually similar words—namely, words where an initial /s/ or /s*/ occurs in the same vowel context of a following /a/.
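To make the tabulation procedure concrete, the following sketch (which assumes a hypothetical romanized word-frequency table rather than the actual NIKL data format, with illustrative rather than real counts) sums frequencies over all words whose first syllable matches a target consonant-plus-/a/ sequence, optionally including first syllables closed by coda /k/.

```python
# Minimal sketch of the frequency tabulation (hypothetical data format; the NIKL
# corpus itself is distributed differently). Each entry pairs a romanized wordform
# with its corpus frequency; we sum over all words whose first syllable is the
# target CV (e.g., /na/) or, optionally, that CV closed by coda /k/ (e.g., /nak/).

word_frequencies = {
    "na.o.ta": 512,      # 'to come out' (illustrative count, not a real NIKL value)
    "na.la": 430,        # 'country'
    "nak.ha.san": 3,     # 'parachute'
    "la.tio": 25,        # loanword beginning in /la/ (illustrative)
}

def first_syllable_frequency(words, target_cv, include_coda_k=False):
    """Sum frequencies of words whose first syllable is target_cv (or target_cv + /k/)."""
    total = 0
    for form, freq in words.items():
        first_syllable = form.split(".")[0]
        if first_syllable == target_cv or (include_coda_k and first_syllable == target_cv + "k"):
            total += freq
    return total

print(first_syllable_frequency(word_frequencies, "na"))                       # 942
print(first_syllable_frequency(word_frequencies, "na", include_coda_k=True))  # 945
```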
Given previous evidence of an item’s phonological neighborhood affecting its processing (e.g., Goldinger et al. 1989; Baker and Bradlow 2009; Gahl et al. 2012; Goldrick et al. 2013; Vitevitch and Luce 2016), it is worth explaining why we did not include phonological neighbors to the target nonce items in our frequency counts. In short, this is because, by design (namely, containing the second syllable /kha/, an uncommon second syllable in the Korean lexicon), each of the items was made to have very few phonological neighbors. For example, in connection with /sakha/, phonological forms differing in terms of the first vowel (i.e., /sikha/, /sεkha/, /sukha/, /sokha/, /sʌkha/, /sɯkha/), while phonotactically legal, are not real words; this is also the case for the other target items. In addition, there are virtually no phonological neighbors to the target items differing in terms of the first consonant (e.g., /akha/, /makha/, /kakha/, etc.). As for phonological neighbors differing in terms of the second consonant, there are more, but still few overall. Excluding inflected forms such as /sa-ta/ ‘buy-declarative’, phonological neighbors of this type comprise six lexical items total: /naka/ ‘go out’, /nala/ ‘country’, /nasa/ ‘bolt’, /taka/ ‘approach’, /saŋa/ ‘ivory’ and /satɕa/ ‘lion’. Crucially, because all of the target items come from sparse neighborhoods, there are no marked asymmetries within or between the pairs of target items in terms of possible competition or facilitation from phonological neighbors. Consequently, we considered the influence of phonological neighborhoods negligible for the purposes of the current study and focused our frequency counts on initially-overlapping lexical items as described above.

3. Results

Recall that we predicted, given the findings of Ahn et al. (2017), that the /n/-/l/ contrast would have the highest frequency, the /t/-/t*/ contrast the next highest frequency and the /s/-/s*/ contrast the lowest frequency. Table 1 summarizes the data gathered from the NIKL corpus on these contrasts, which provided only partial support for these predictions. Note that the frequencies in Table 1 were summed over all words for the given category; for example, the overall frequency for /na/ represents the sum of the frequencies of all words beginning with the syllable /na/. The underlying (by-word) frequency data are included in the Supplementary Materials.
As shown in Table 1, neither the overall frequencies nor the spoken frequencies of the three target contrasts completely followed the predicted order (i.e., /n/-/l/ > /t/-/t*/ > /s/-/s*/). With respect to overall frequency (summed over the two members of the contrast, not counting words with a first syllable containing coda /k/), the most frequent contrast was /s/-/s*/ (26,480), followed by /n/-/l/ (23,013) and then /t/-/t*/ (16,094). This order remained the same when words with a first syllable containing coda /k/ were included in the frequency counts: /s/-/s*/ (26,566) > /n/-/l/ (23,197) > /t/-/t*/ (16,465). On the other hand, with respect to spoken frequency (not counting words with a first syllable containing coda /k/), the most frequent contrast was /n/-/l/ (1129), followed by /s/-/s*/ (723) and then /t/-/t*/ (536), matching the first part of the predicted order (/n/-/l/ > /t/-/t*/) but not the second part (/t/-/t*/ > /s/-/s*/). This order again remained the same when words with a first syllable containing coda /k/ were included: /n/-/l/ (1130) > /s/-/s*/ (729) > /t/-/t*/ (624). In short, although data on spoken frequencies aligned with bilinguals’ overall accuracy levels on the target contrasts, neither set of frequencies supported both P1 and P2.
Because previous findings on frequency effects have pointed to the difference in frequency between minimally contrasting units as potentially relevant (Wedel et al. 2013b), we also examined the difference in frequency (asymmetry) between the two members of each target contrast. This examination revealed that the contrast with the highest spoken frequency, /n/-/l/, showed the largest frequency asymmetry between the two sounds (1102, translating to a frequency split of 98.8% for /n/ vs. 1.2% for /l/ including words with coda /k/), while the contrast with the lowest spoken frequency, /t/-/t*/, showed the smallest frequency asymmetry (366; frequency split of 79.3% for /t/ vs. 20.7% for /t*/ including words with coda /k/), with /s/-/s*/ showing an intermediate frequency asymmetry (673; frequency split of 96.2% for /s/ vs. 3.8% for /s*/ including words with coda /k/). These data are summarized in Table 2.
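As a worked illustration of how the asymmetry and split measures relate to the member frequencies (the counts for /n/ and /l/ below are inferred from the reported spoken total of 1130 and asymmetry of 1102, not read directly from the corpus):

```python
# Worked illustration of the asymmetry and split measures used above. The member
# frequencies for /n/ and /l/ (spoken NIKL counts, including first syllables with
# coda /k/) are inferred from the reported total (1130) and asymmetry (1102):
# /n/ = (1130 + 1102) / 2 = 1116, /l/ = (1130 - 1102) / 2 = 14.

freq_n, freq_l = 1116, 14

asymmetry = abs(freq_n - freq_l)        # 1102
split_n = freq_n / (freq_n + freq_l)    # ~0.988 -> 98.8%
split_l = freq_l / (freq_n + freq_l)    # ~0.012 -> 1.2%

print(asymmetry, f"{split_n:.1%}", f"{split_l:.1%}")  # 1102 98.8% 1.2%
```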
Crucially, these results on frequency asymmetries qualify the finding that /n/-/l/ has the highest spoken frequency, because they indicate a low level of uncertainty associated with this contrast. Probabilistically, according to the data in Table 2, a word-initial sound that is ambiguous between /n/ and /l/ in the context of /a/ is almost always (98.8% of the time) going to be /n/, meaning that it is comparatively unimportant to be able to accurately perceive this contrast, at least in this context; just by guessing /n/, the listener will usually be right. As such, these results further complicate the picture of how frequency might be related to perceptual attrition. In the case of /n/-/l/, the one contrast whose relative frequency was as predicted, the high frequency of this contrast may inhibit perceptual attrition, but the low uncertainty associated with it does not. Indeed, the observed order of uncertainty (i.e., /t/-/t*/ > /s/-/s*/ > /n/-/l/) does not, on its own, predict the observed order of perceptual maintenance (/n/-/l/ > /t/-/t*/ > /s/-/s*/).
As a final confirmatory step in our analysis, we checked the frequency patterns in the NIKL corpus using a preliminary version of a second corpus called SUBTLEX-KR (Tang and de Chene 2014), a large corpus (90 million “orthographic words”, meaning items separated by spaces) of spoken Korean compiled on the basis of film subtitles. The SUBTLEX-KR corpus is complementary to the NIKL corpus in that its substance is all linguistic material that was constructed to be spoken (i.e., movie scripts); this contrasts with the NIKL corpus, where part of the linguistic material was not necessarily constructed to be spoken. Therefore, compared to the NIKL corpus, the SUBTLEX-KR corpus may better reflect usage frequencies of target phonemes in conversational Korean, although its frequency counts for words are likely to be somewhat conservative due to the counting of multi-word phrases contained between spaces as one word rather than separate words. Results from the SUBTLEX-KR corpus are shown in Table 3 and Table 4 below.
While differing slightly from the results found in the NIKL corpus, results from the SUBTLEX-KR corpus were similar in not clearly supporting both P1 and P2. With respect to spoken frequency excluding orthographic words with a first syllable containing coda /k/, the order of contrasts was: /s/-/s*/ (1,302,154) > /n/-/l/ (1,274,963) > /t/-/t*/ (972,992); the order remained the same when orthographic words with a first syllable containing coda /k/ were included: /s/-/s*/ (1,307,613) > /n/-/l/ (1,285,747) > /t/-/t*/ (1,033,079). As for frequency asymmetries, /n/-/l/ and /s/-/s*/ were reversed vis-à-vis the order observed in the NIKL corpus: instead of /n/-/l/, /s/-/s*/ showed the largest frequency asymmetry between the two sounds (1,158,242, translating to a frequency split of 94.3% for /s/ vs. 5.7% for /s*/ including words with coda /k/), while /n/-/l/ showed a slightly smaller frequency asymmetry (1,059,249; frequency split of 91.5% for /n/ vs. 8.5% for /l/ including words with coda /k/). The /t/-/t*/ contrast again showed the smallest frequency asymmetry (740,008; frequency split of 86.6% for /t/ vs. 13.4% for /t*/ including words with coda /k/). Taken together, these results provide additional support for P1 but not for P2. In addition, frequency asymmetries and the order of uncertainty observed in the SUBTLEX-KR corpus (i.e., /t/-/t*/ > /n/-/l/ > /s/-/s*/), while different from those in the NIKL corpus, again contrast with the observed order of perceptual maintenance in Ahn et al. (2017).

4. Discussion

This study explored the hypothesis that L1 perceptual attrition is predicted by low frequency (i.e., that less frequent phonological contrasts undergo more perceptual attrition). In a corpus-based study of Korean, we tested two predictions concerning the relative frequencies of the three Korean contrasts that showed varying degrees of perceptual attrition in Ahn et al. (2017). Our results provided evidence that /n/-/l/, the contrast that showed the least attrition—in fact, no significant attrition—was the most frequent contrast according to the spoken part of the NIKL corpus (although not according to the NIKL corpus overall including both written and spoken data), supporting P1. However, they also indicated that /s/-/s*/, the contrast that showed the most attrition, was not the least frequent contrast, contradicting P2. Furthermore, we observed that the uncertainties associated with the target contrasts, which could also play a role in perceptual attrition, did not match their place in the observed order of attrition, either. Taken together, these findings lead us to the conclusion that, while frequency may, nevertheless, play a role in predicting attrition, frequency effects in attrition are not as straightforward as articulated in our original hypothesis.
The present findings have implications both for research on language attrition and for research on speech perception. First, they do not clearly support a frequency-based account of the variation in perceptual attrition observed in Ahn et al. (2017), leaving the original explanation based on crosslinguistic phonological similarity as the best account of those results. In this respect, the findings converge with the broader body of research demonstrating weak or inconsistent effects of variables which are intuitive, usage-based predictors of L1 attrition, such as length of residence in an L2 environment (de Bot and Clyne 1994; Beganović 2006), amount of L1 use (Jaspaert and Kroon 1989) and amount of receptive L1 input (Schmid 2007, 2011). That is, the present study contributes further to the view that attrition is a complex phenomenon, which is not predicted straightforwardly by individual use variables, pointing toward the need for a multifactorial approach to the study of attrition phenomena. Second, by bolstering the similarity-based account of the attrition disparities in Ahn et al. (2017), the present findings highlight the relevance of considering potential cross-language relationships in selecting target sounds and contrasts for perceptual research on L1 listeners, given that L1 listeners often have recent or ongoing L2 experience that can lead to rapid L1 perceptual changes (Tice and Woodley 2012; Gong et al. 2016; Kellogg and Chang 2023).
Although this study did not find a clear relationship between frequency and perceptual attrition, it represents only the first step in examining frequency effects in this area and there are some important limitations of the present findings that should be addressed in future research. First, the NIKL corpus, the principal basis for the present findings, reflects standard Korean, which may differ in its frequencies to some degree from the varieties to which the bilinguals in Ahn et al. (2017) were exposed. This type of potential disparity between corpus data and the diversity of study participants underscores the need to develop corpora that also reflect non-standard language varieties. Second, frequencies in adult speech, generally directed to other adults or to a general audience in both the NIKL corpus and the SUBTLEX-KR corpus, may differ from those in infant/child-directed speech, a speech register that could be particularly influential for bilinguals who become dominant in their L2 early in life. Because many of the bilinguals in Ahn et al. (2017) were, in fact, such bilinguals, it would therefore be useful to replicate the present findings using a corpus of child-directed Korean, such as the recently developed Ko corpus (Ko et al. 2020). Third, the spoken frequencies in the NIKL corpus represent frequencies in broadcast speech, which may also differ from those in the input to which bilinguals are exposed. Consequently, examining a corpus of spontaneous Korean speech (see, e.g., Yun et al. 2015) would provide valuable additional data on the predictiveness of frequency-based variables for perceptual attrition. Finally, beyond the choice of corpus, we tabulated frequencies in an onset-specific way (i.e., counting only lexical items overlapping with the target items in terms of their beginning). This method reflected our assumption that the most influential frequencies for target phonemes in onset position would be those of words in which those phonemes occur precisely in onset position (see also Section 1.2); however, it remains an open question which method of tabulating frequencies best reflects psycholinguistic frequency effects, and it is possible that our results would look different under another method (e.g., including all words containing a target phoneme anywhere in the word).
Looking forward, we would like to point out two challenges for future research on the role of frequency in perceptual attrition. One is to understand how other frequency-based metrics may or may not interact with frequency itself in potentially influencing attrition. For example, might it be the case that frequency plays a primary role when frequency is high, but recedes into the background when frequency is low, leaving other factors to take over? This possibility could help salvage the unclear picture of frequency effects observed in the current study. In the discussion above, we alluded to the relevance of uncertainty, based on relative frequencies, for evaluating the importance of maintaining a given phonological contrast in the L1, but this is only one aspect of the probabilistic knowledge that bilinguals may have about their L1. A second challenge is to understand how higher-level covariates may shape the influence of frequency in perceptual attrition. For example, it has been suggested that differences in frequency between phonemes reflect differences in their perceptual robustness—that is, perceptual distinctiveness, or (low) confusability with other phonemes (see Wedel and Winter 2016)—which raises interesting questions for the interpretation of frequency effects in perceptual attrition and speech perception more generally. For instance, if frequency effects at the level of phonemes reflect, at least in part, facts about their language-general physical and auditory characteristics, then to what extent might frequency effects in perception be simply epiphenomenal of these language-general characteristics? The current study cannot address this question, but future work on bilingual speech perception, crossing a range of languages as L1 and L2, has the potential to shed further light on language-general perceptual biases and the ways in which frequency effects may overlap with and depart from these biases.
Crossing a range of languages as L1 and L2 may also provide a useful approach for further testing potential interactions between L1 frequency and crosslinguistic (L1–L2) phonological similarity in perceptual attrition. For example, given the same set of L1 contrasts, examining bilingual speakers of that L1 with different L2s—in particular, L2s with different phoneme inventories—would set up a comparison among different patterns of alignment with the target L1 contrasts that could provide evidence for or against the role of frequency in perceptual attrition. In particular, we would expect the general pattern of perceptual attrition to differ across L1 listeners with different L2s if frequency is subordinate to crosslinguistic similarity, but to look similar across L1 listeners with different L2s—and inversely correlated with relative frequencies—if frequency plays the primary role. In short, we consider the current study only the beginning of research on frequency effects in perceptual attrition and see many avenues for future research in this area.

Supplementary Materials

Materials for Ahn et al. (2017) are available on the OSF at https://osf.io/g4c7z/: the language background questionnaire and the Korean listening proficiency test. The dataset for Ahn et al. (2017) is available at https://osf.io/b2478/. The by-word frequency data gathered in the current study are available at https://osf.io/akm3h/.

Author Contributions

Conceptualization, C.B.C. and S.A.; methodology, C.B.C. and S.A.; software, C.B.C. and S.A.; validation, C.B.C.; formal analysis, C.B.C.; investigation, C.B.C. and S.A.; resources, S.A.; data curation, C.B.C.; writing—original draft preparation, C.B.C.; writing—review and editing, C.B.C. and S.A.; visualization, C.B.C.; supervision, S.A.; project administration, S.A.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. Internal funding for Ahn et al. (2017) was received from the University of Maryland.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and the research in Ahn et al. (2017) was approved by the Institutional Review Board of the University of Maryland (protocol code 10-0284, approval date 6 April 2011).

Informed Consent Statement

Informed consent was obtained from all subjects involved in Ahn et al. (2017). Data were not collected from human subjects in the current study.

Data Availability Statement

The data presented in this study are openly available on the OSF at https://osf.io/akm3h/.

Acknowledgments

The authors gratefully acknowledge Weonhee Yun’s advice on Korean corpora and helpful comments and feedback from two anonymous reviewers and the special issue editors.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Note

1. An aspirated [kh] can, of course, reflect a phonemic /kh/ as well, but the phoneme /kh/ virtually always occurs as a syllable onset in Korean, meaning that there are no words with a first syllable containing coda /kh/ (e.g., /nakh/) to factor into the frequency counts.

References

  1. Ahn, Sunyoung, Charles B. Chang, Robert DeKeyser, and Sunyoung Lee-Ellis. 2017. Age effects in first language attrition: Speech perception by Korean-English bilinguals. Language Learning 67: 694–733. [Google Scholar] [CrossRef]
  2. Baechler, Raffaela, and Simon Pröll. 2019. Analyzing language change through a formalist framework. In Morphological Variation: Theoretical and Empirical Perspectives. Edited by Antje Dammel and Oliver Schallert. Amsterdam: John Benjamins, pp. 63–94. [Google Scholar]
  3. Baese-Berk, Melissa, and Matthew Goldrick. 2009. Mechanisms of interaction in speech production. Language and Cognitive Processes 24: 527–54. [Google Scholar] [CrossRef] [PubMed]
  4. Baker, Rachel E., and Ann R. Bradlow. 2009. Variability in word duration as a function of probability, speech style, and prosody. Language and Speech 52: 391–413. [Google Scholar] [CrossRef] [PubMed]
  5. Baus, Cristina, Albert Costa, and Manuel Carreiras. 2008. Neighbourhood density and frequency effects in speech production: A case for interactivity. Language and Cognitive Processes 23: 866–88. [Google Scholar] [CrossRef]
  6. Beganović, Jasminka. 2006. First Language Attrition and Syntactic Subjects: A Study of Serbian, Croatian, and Bosnian Intermediate and Advanced Speakers of Dutch. Master’s thesis, University of Edinburgh, Edinburgh, UK. [Google Scholar]
  7. Brysbaert, Marc, Paweł Mandera, and Emmanuel Keuleers. 2018. The word frequency effect in word processing: An updated review. Current Directions in Psychological Science 27: 45–50. [Google Scholar] [CrossRef]
  8. Bybee, Joan L. 2001. Phonology and Language Use. Cambridge: Cambridge University Press. [Google Scholar]
  9. Bybee, Joan L. 2006. Frequency of Use and the Organization of Language. New York: Oxford University Press. [Google Scholar]
  10. Caramazza, Alfonso, Albert Costa, Michele Miozzo, and Yanchao Bi. 2001. The specific-word frequency effect: Implications for the representation of homophones in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition 27: 1430–50. [Google Scholar] [CrossRef]
  11. Chang, Charles B. 2013. The production and perception of coronal fricatives in Seoul Korean: The case for a fourth laryngeal category. Korean Linguistics 15: 7–49. [Google Scholar] [CrossRef]
  12. Chang, Charles B. 2021. Phonetics and phonology of heritage languages. In The Cambridge Handbook of Heritage Languages and Linguistics. Edited by Silvina Montrul and Maria Polinsky. Cambridge: Cambridge University Press, pp. 581–612. [Google Scholar]
  13. Cho, Won Ik, Sangwhan Moon, and Youngsook Song. 2020. Open Korean corpora: A practical report. In Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS). Association for Computational Linguistics: pp. 85–93. Available online: https://aclanthology.org/2020.nlposs-1.12/ (accessed on 27 January 2023).
  14. Cholin, Joana, Willem J. M. Levelt, and Niels O. Schiller. 2006. Effects of syllable frequency in speech production. Cognition 99: 205–35. [Google Scholar] [CrossRef]
  15. Chomsky, Noam. 1986. Knowledge of Language: Its Origins, Nature, and Use. New York: Praeger. [Google Scholar]
  16. Connine, Cynthia M., Debra Titone, and Jian Wang. 1993. Auditory word recognition: Extrinsic and intrinsic effects of word frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition 19: 81–94. [Google Scholar] [CrossRef]
  17. Croft, William. 2001. Radical Construction Grammar: Syntactic Theory in Typological Perspective. Oxford: Oxford University Press. [Google Scholar]
  18. Cuetos, Fernando, Patrick Bonin, José Ramón Alameda, and Alfonso Caramazza. 2010. The specific-word frequency effect in speech production: Evidence from Spanish and French. The Quarterly Journal of Experimental Psychology 63: 750–71. [Google Scholar] [CrossRef]
  19. de Bot, Kees, and Michael Clyne. 1994. A 16-year longitudinal study of language attrition in Dutch immigrants in Australia. Journal of Multilingual and Multicultural Development 15: 17–28. [Google Scholar] [CrossRef]
  20. de Leeuw, Esther, and Charles B. Chang. 2023. Phonetic and phonological L1 attrition and drift in bilingual speech. In The Cambridge Handbook of Bilingual Phonetics and Phonology. Edited by Mark Amengual. Cambridge: Cambridge University Press, under review. [Google Scholar]
  21. Dufour, Sophie, Angèle Brunellière, and Ulrich H. Frauenfelder. 2013. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials. Cognitive Science 34: 489–507. [Google Scholar] [CrossRef]
  22. Ellis, Nick C. 2002. Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition 24: 143–88. [Google Scholar] [CrossRef]
  23. Ellis, Nick C. 2013. Frequency effects. In The Routledge Encyclopedia of Second Language Acquisition. Edited by Peter Robinson. New York: Taylor & Francis, pp. 260–65. [Google Scholar]
  24. Gahl, Susanne, Yao Yao, and Keith Johnson. 2012. Why reduce? Phonological neighborhood density and phonetic reduction in spontaneous speech. Journal of Memory and Language 66: 789–806. [Google Scholar] [CrossRef]
  25. Goldinger, Stephen D., Paul A. Luce, and David B. Pisoni. 1989. Priming lexical neighbors of spoken words: Effects of competition and inhibition. Journal of Memory and Language 28: 501–18. [Google Scholar] [CrossRef]
  26. Goldrick, Matthew, Charlotte Vaughn, and Amanda Murphy. 2013. The effects of lexical neighbors on stop consonant articulation. Journal of the Acoustical Society of America 134: EL172–EL177. [Google Scholar] [CrossRef]
  27. Gong, Jian, María Luisa García Lecumberri, and Martin Cooke. 2016. Can intensive exposure to foreign language sounds affect the perception of native sounds? In Proceedings of Interspeech 2016. Edited by Nelson Morgan, Panayiotis Georgiou, Shrikanth S. Narayanan and Florian Metze. Adelaide: Casual Productions Pty Ltd., pp. 883–87. [Google Scholar]
  28. Gor, Kira, Svetlana Cook, Denisa Bordag, Anna Chrabaszcz, and Andreas Opitz. 2021. Fuzzy lexical representations in adult second language speakers. Frontiers in Psychology 12: 732030. [Google Scholar] [CrossRef]
  29. Halliday, Michael A. K., and Christian M. I. M. Matthiessen. 2004. Halliday’s Introduction to Functional Grammar, 4th ed. London: Routledge. [Google Scholar]
30. Jang, Hyejin, and Jiyoung Shin. 2006. An acoustic study on the generational difference of the monophthongs in the Daegu dialect. Journal of the Phonetic Society of Korea 57: 15–30.
31. Jaspaert, Koen, and Sjaak Kroon. 1989. Social determinants of language loss. International Journal of Applied Linguistics 83–84: 75–98.
32. Jescheniak, Jörg D., and Willem J. M. Levelt. 1994. Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition 20: 824–43.
33. Johnson, Keith. 1997. Speech perception without speaker normalization: An exemplar model. In Talker Variability in Speech Processing. Edited by Keith Johnson and John W. Mullennix. San Diego: Academic Press, pp. 145–65.
34. Kellogg, Jackson, and Charles B. Chang. 2023. Exploring the onset of phonetic drift in voice onset time perception. Languages, under review.
35. Ko, Eon-Suk, Jinyoung Jo, Kyung-Woon On, and Byoung-Tak Zhang. 2020. Introducing the Ko corpus of Korean mother–child interaction. Frontiers in Psychology 11: 602623.
36. Laganaro, Marina. 2005. Syllable frequency effect in speech production: Evidence from aphasia. Journal of Neurolinguistics 18: 221–35.
37. Laganaro, Marina, and F.-Xavier Alario. 2006. On the locus of the syllable frequency effect in speech production. Journal of Memory and Language 55: 178–96.
38. Lee-Ellis, Sunyoung. 2012. Looking into Bilingualism through the Heritage Speaker's Mind. Doctoral thesis, University of Maryland, College Park, MD, USA.
39. Levitt, Andrea G., and Alice F. Healy. 1985. The roles of phoneme frequency, similarity, and availability in the experimental elicitation of speech errors. Journal of Memory and Language 24: 717–33.
40. Lukyanchenko, Anna, and Kira Gor. 2011. Perceptual correlates of phonological representations in heritage speakers and L2 learners. In Proceedings of the 35th Annual Boston University Conference on Language Development. Edited by Nick Danis, Kate Mesh and Hyunsuk Sung. Somerville: Cascadilla Press, pp. 414–26.
41. Marslen-Wilson, William D., and Alan Welsh. 1978. Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology 10: 29–63.
42. Meunier, Fanny, and Juan Segui. 1999. Frequency effects in auditory word recognition: The case of suffixed words. Journal of Memory and Language 41: 327–44.
43. Montrul, Silvina A. 2008. Incomplete Acquisition in Bilingualism: Re-examining the Age Factor. Amsterdam: John Benjamins.
44. Munson, Benjamin. 2001. Phonological pattern frequency and speech production in adults and children. Journal of Speech, Language, and Hearing Research 44: 778–92.
45. National Institute of Korean Language. 2005. A Speech Corpus of Reading-Style Standard Korean. Seoul: National Institute of Korean Language.
46. Navarrete, Eduardo, Benedetta Basagni, F.-Xavier Alario, and Albert Costa. 2006. Does word frequency affect lexical selection in speech production? The Quarterly Journal of Experimental Psychology 59: 1681–90.
47. Nelson, Noah Richard, and Andrew Wedel. 2017. The phonetic specificity of competition: Contrastive hyperarticulation of voice onset time in conversational English. Journal of Phonetics 64: 51–70.
48. Pierrehumbert, Janet. 2001. Exemplar dynamics: Word frequency, lenition and contrast. In Frequency Effects and the Emergence of Lexical Structure. Edited by Joan Bybee and Paul Hopper. Amsterdam: John Benjamins, pp. 137–57.
49. Politzer-Ahles, Stephen, Ka Keung Lee, and Lue Shen. 2020. Ganong effects for frequency may not be robust. The Journal of the Acoustical Society of America 147: EL37–EL42.
50. Robson, Jo, Tim Pring, Jane Marshall, and Shula Chiat. 2003. Phoneme frequency effects in jargon aphasia: A phonological investigation of nonword errors. Brain and Language 85: 109–24.
51. Samuel, Arthur G. 2011. Speech perception. Annual Review of Psychology 62: 49–72.
52. Schmid, Monika S. 2007. The role of L1 use for L1 attrition. In Language Attrition: Theoretical Perspectives. Edited by Barbara Köpke, Monika S. Schmid, Merel Keijzer and Susan Dostert. Amsterdam: John Benjamins, pp. 135–53.
53. Schmid, Monika S. 2011. Language Attrition. Cambridge: Cambridge University Press.
54. Segui, Juan, Jacques Mehler, Uli Frauenfelder, and John Morton. 1982. The word frequency effect and lexical access. Neuropsychologia 20: 615–27.
55. Slowiaczek, Louisa M., Howard C. Nusbaum, and David B. Pisoni. 1987. Phonological priming in auditory word recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition 13: 64–75.
56. Tang, Kevin, and Brent de Chene. 2014. A new corpus of colloquial Korean and its applications. Paper presented at the 14th Conference on Laboratory Phonology (LabPhon 14), Tokyo, Japan, July 25–27.
57. Tice, Marisa, and Melinda Woodley. 2012. Paguettes and bastries: Novice French learners show shifts in native phoneme boundaries. UC Berkeley Phonology Lab Annual Report 8: 72–75.
58. Todd, Simon, Janet B. Pierrehumbert, and Jennifer Hay. 2019. Word frequency effects in sound change as a consequence of perceptual asymmetries: An exemplar-based model. Cognition 185: 1–20.
59. Vitevitch, Michael S., and Paul A. Luce. 2016. Phonological neighborhood effects in spoken word perception and production. Annual Review of Linguistics 2: 75–94.
60. Warner, Natasha, Roel Smits, James M. McQueen, and Anne Cutler. 2005. Phonological and statistical effects on timing of speech perception: Insights from a database of Dutch diphone perception. Speech Communication 46: 53–72.
61. Wedel, Andrew, Abby Kaplan, and Scott Jackson. 2013. High functional load inhibits phonological contrast loss: A corpus study. Cognition 128: 179–86.
62. Wedel, Andrew, and Bodo Winter. 2016. Languages prefer robust phonemes. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11). Edited by Seán G. Roberts, Christine Cuskley, Luke McCrohon, Lluís Barceló-Coblijn, Olga Fehér and Tessa Verhoef. Available online: http://evolang.org/neworleans/papers/28.html (accessed on 27 January 2023).
63. Wedel, Andrew, Scott Jackson, and Abby Kaplan. 2013. Functional load and the lexicon: Evidence that syntactic category and frequency relationships in minimal lemma pairs predict the loss of phoneme contrasts in language change. Language and Speech 56: 395–417.
64. Yoneyama, Kiyoko, Keith Johnson, and Reiko Kataoka. 2011. An effect of phoneme frequency on stop place perception by English-speaking and Japanese-speaking listeners. The Journal of the Acoustical Society of America 129: 2418.
65. Yoon, Tae-Jin, and Yoonjung Kang. 2014. Monophthong analysis on a large-scale speech corpus of read-style Korean. Phonetics and Speech Sciences 6: 139–45.
66. Yun, Weonhee, Kyuchul Yoon, Sunwoo Park, Juhee Lee, Sungmoon Cho, Ducksoo Kang, Koonhyuk Byun, Hyeseung Hahn, and Jungsun Kim. 2015. The Korean corpus of spontaneous speech. Journal of the Korean Society of Speech Sciences 7: 103–9.
Table 1. Data from the NIKL corpus on words with the same first syllable as the stimulus items in Ahn et al. (2017). Data on words with a similar first syllable that includes a coda /k/ are included in parentheses.

Contrast    Syllable   Number of Words   Overall Frequency   Spoken Frequency
/n/-/l/     /na/       52 (9)            22,626 (184)        1115 (1)
            /la/       10 (0)            387 (0)             14 (0)
/t/-/t*/    /ta/       61 (3)            11,731 (102)        495 (0)
            /t*a/      20 (6)            4363 (269)          41 (88)
/s/-/s*/    /sa/       193 (1)           25,812 (10)         701 (0)
            /s*a/      8 (2)             668 (76)            22 (6)
Table 2. Data from the NIKL corpus on spoken frequency asymmetries between words with the same first syllable as stimulus items in Ahn et al. (2017). Data including words with a similar first syllable containing a coda /k/ are included in parentheses. The frequency share of the first (C1) and second (C2) member of the contrast indicates the percentage of the total frequency of the contrast represented by that member.

Contrast    Frequency Asymmetry   Frequency Share, C1 (%)   Frequency Share, C2 (%)
/n/-/l/     1101 (1102)           98.8 (98.8)               1.2 (1.2)
/t/-/t*/    454 (366)             92.4 (79.3)               7.6 (20.7)
/s/-/s*/    679 (673)             97.0 (96.2)               3.0 (3.8)
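To make the arithmetic behind Table 2 explicit, the short Python sketch below (illustrative only, not part of the original analysis; the helper name asymmetry_and_share is introduced here purely for illustration) computes each contrast's frequency asymmetry as the difference between the spoken frequencies of its two members in Table 1, and each member's frequency share as its percentage of the contrast's combined spoken frequency. The parenthesized values in Table 2 follow from the same arithmetic once the coda-/k/ counts given in parentheses in Table 1 are added to each member's spoken frequency.

```python
# Minimal sketch (not from the original article): derive the Table 2 values
# from the spoken-frequency counts in Table 1 (values outside parentheses).

def asymmetry_and_share(freq_c1, freq_c2):
    """Return (frequency asymmetry, C1 share %, C2 share %) for one contrast."""
    total = freq_c1 + freq_c2
    asymmetry = freq_c1 - freq_c2               # difference in spoken frequency
    share_c1 = round(100 * freq_c1 / total, 1)  # percentage of combined frequency
    share_c2 = round(100 * freq_c2 / total, 1)
    return asymmetry, share_c1, share_c2

# Spoken-frequency counts from Table 1: (C1, C2) for each contrast.
nikl_spoken = {
    "/n/-/l/":  (1115, 14),
    "/t/-/t*/": (495, 41),
    "/s/-/s*/": (701, 22),
}

for contrast, (c1, c2) in nikl_spoken.items():
    print(contrast, asymmetry_and_share(c1, c2))
# Output matches Table 2 (values outside parentheses):
# /n/-/l/ (1101, 98.8, 1.2)
# /t/-/t*/ (454, 92.4, 7.6)
# /s/-/s*/ (679, 97.0, 3.0)
```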
Table 3. Data from the SUBTLEX-KR corpus on orthographic words with the same first syllable as the stimulus items in Ahn et al. (2017). Data on orthographic words with a similar first syllable that includes a coda /k/ are included in parentheses.

Contrast    Syllable   Number of Orthographic Words   Spoken Frequency
/n/-/l/     /na/       8441 (583)                     1,167,106 (9648)
            /la/       2970 (125)                     107,857 (1136)
/t/-/t*/    /ta/       6424 (236)                     856,500 (37,665)
            /t*a/      2156 (247)                     116,492 (22,422)
/s/-/s*/    /sa/       14,799 (181)                   1,230,198 (2986)
            /s*a/      1428 (56)                      71,956 (2473)
Table 4. Data from the SUBTLEX-KR corpus on frequency asymmetries between orthographic words with the same first syllable as stimulus items in Ahn et al. (2017). Data including orthographic words with a similar first syllable containing a coda /k/ are included in parentheses. The frequency share of the first (C1) and second (C2) member of the contrast indicates the percentage of the total frequency of the contrast represented by that member.

Contrast    Frequency Asymmetry     Frequency Share, C1 (%)   Frequency Share, C2 (%)
/n/-/l/     1,059,249 (1,067,761)   91.5 (91.5)               8.5 (8.5)
/t/-/t*/    740,008 (755,251)       88.0 (86.6)               12.0 (13.4)
/s/-/s*/    1,158,242 (1,158,755)   94.5 (94.3)               5.5 (5.7)
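The same arithmetic, applied to the SUBTLEX-KR spoken-frequency counts in Table 3, reproduces the values in Table 4. A self-contained sketch (again illustrative, not part of the original analysis) is given below.

```python
# Illustrative check: Table 4 values derived from the Table 3 spoken-frequency
# counts (values outside parentheses), using the same asymmetry/share arithmetic.
subtlex_spoken = {
    "/n/-/l/":  (1_167_106, 107_857),
    "/t/-/t*/": (856_500, 116_492),
    "/s/-/s*/": (1_230_198, 71_956),
}

for contrast, (c1, c2) in subtlex_spoken.items():
    total = c1 + c2
    print(contrast, c1 - c2,
          round(100 * c1 / total, 1), round(100 * c2 / total, 1))
# /n/-/l/ 1059249 91.5 8.5
# /t/-/t*/ 740008 88.0 12.0
# /s/-/s*/ 1158242 94.5 5.5
```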