Instructed second language acquisition (ISLA) research is typically conducted in Western societies and focuses largely on educated, highly literate, middle-class language learners (Ortega 2019
). In fact, very few ISLA studies have been carried out with students with limited or interrupted formal education (SLIFE), even though Bigelow and Tarone have long called for a more systematic integration of this population so as to better understand the impact of first language (L1) literacy skills on the formal acquisition of additional languages (Lx) (see also Young-Scholten 2013), and even though the amount and types of literacy experiences occupy an important place in Hulstijn’s individual differences framework of Lx learning. Of the limited classroom studies conducted with SLIFE (e.g., Strube et al. 2013
), none, to our knowledge, focus on the teaching and learning of listening skills; yet listening skills are essential for successful social interactions (Wolvin 2018
) as well as for the development of other language skills (Vandergrift 2008
), such as literacy development (Strube et al. 2013
This lack of attention to the development of listening skills is not entirely surprising, since classroom-based research on Lx listening instruction is, to date, relatively scarce (Vandergrift and Cross 2017).
Learning to listen in an Lx is difficult due to the complex orchestration of the many processes involved (Graham 2017
). When engaged in the listening process, listeners must skilfully and simultaneously process the linguistic, pragmatic, semantic, contextual and background information communicated, explicitly or implicitly, in the message (Rost 2011
). Listening in an Lx is all the more challenging because of the ephemeral nature of spoken language. Indeed, the listener does not have the option of reviewing all the information present in the input and has little control over the rate of speech and comprehensibility of their interlocutor (Vandergrift 2006
When the processing of all these elements proceeds unimpeded, a coherent representation of the message is formed and comprehension can occur (Wolvin 2018).
According to Graham and Macaro
(2008, p. 750
), the following set of listening strategies (i.e., conscious plans to manage incoming speech (Rost 2011
)) are generally associated with successful listening outcomes: (1) making predictions about the likely content of a passage; (2) selectively attending to certain aspects of the passage (i.e., deciding to “listen out for” particular words or phrases or idea units); (3) monitoring and evaluating comprehension (i.e., checking that one is in fact understanding or has made the correct interpretation); and (4) using a variety of clues (linguistic, contextual and background knowledge) to infer the meaning of unknown words. There is evidence to suggest that more successful Lx listeners naturally deploy these listening strategies when faced with a listening task, while less skilled Lx listeners do not, unless they are explicitly (Cross 2010
; Diaz 2015
; Graham et al. 2008
; Seo 2005
) or implicitly trained to do so (Tafaghodtari and Vandergrift 2008
; Vandergrift and Tafaghodtari 2010
). Although these classroom interventions differed in terms of the number and types of listening strategies (explicitly or implicitly) taught to Lx listeners, they all used elements of metacognition: development of the ability to recognise the mental processes involved in listening comprehension, the ability to examine the (social, cognitive and affective) factors that impede, slow down or facilitate listening, the ability to identify what contributes to or hinders sound or word perception, or the ability to choose strategies that foster overall listening development and comprehension (see Vandergrift and Goh 2012
). In other words, these interventions all aimed at enabling learners to take ownership of the listening process.
Despite the growing popularity of teaching listening strategies and metacognition to facilitate and maximise listening comprehension, Lx learners generally face challenges when it comes to regulating their listening behaviours (Vandergrift and Goh 2012
). Even when they have access to a significant number of strategies, they still struggle to use them effectively, to alternate between them when needed, to review their interpretations when problems occur, and to use their strategies in naturalistic settings (Vandergrift and Goh 2012
). Although these findings emerged from studies carried out with literate Lx learners, we can assume that they may well apply to SLIFE, as these learners are not used to reflecting on their actions and thoughts regarding how they learn (DeCapua and Marshall 2015).
In addition, because of the effects of schooling and/or literacy on cognitive function and organisation (Huettig and Mishra 2014
), SLIFE, who are taking their first steps into literacy education, often cannot reach an abstraction level allowing them to think about language as an object of analysis (Bigelow and Vinogradov 2011
). Moreover, since they struggle to engage in learning activities that do not reflect their life experiences or needs (Huettig and Mishra 2014
), the use of explicit strategy training (e.g., explicitly teaching learners how to monitor and review their interpretations of the message, or naming abstract concepts of cognition and linguistic forms) appeared unsuitable for this population. Hence, we decided to carry out a partial replication of the implicit intervention designed by Vandergrift and Tafaghodtari, which focuses on awareness of the listening process (i.e., the intervention encouraged learners to formulate and validate their hypotheses, to identify and solve the listening problems they encountered, and to formulate goals for future listening tasks, without explicitly teaching them to systematically follow these steps). Indeed, the possibility of learning implicitly while listening, as opposed to being explicitly told how and when to deploy their attentional and processing resources, seemed better suited to SLIFE, not only because these learners thrive in experiential, contextually embedded learning rather than in formal, decontextualised learning (Keller 2017
; Moore 1999
), but also because this type of intervention yielded significant listening gains for less-skilled, but not for more-skilled, listeners in Vandergrift and Tafaghodtari’s original study.
The present study thus sought to fill an important gap in the literature by examining the effects of an implicit listening strategies training, adapted from Vandergrift and Tafaghodtari, on the listening comprehension skills of SLIFE. In particular, this small-scale study addressed the following research question: Does implicit teaching of listening strategies and metacognition help foster oral comprehension of adults with low levels of education who are learning French Lx?
3.1. Listening Performance on the Quantitative Measure: Pre, Post and Delayed Tests Scores
To answer our research question, we awarded one point for each correct answer on the test (maximum possible score = 8). We then calculated the mean scores (and standard deviations) on the pretest, posttest and delayed posttest for the two experimental groups and the control group, which we present as success rates (in %) in Table 2. The results show that the three groups of participants were not comparable at the beginning of the intervention and that their listening performance evolved differently over time. The trajectories of the three groups are very different, regardless of whether they had participated in the implicit listening strategies training intervention or not. Data obtained from the listening tests thus show that participants in the experimental groups did not make long-term gains in listening performance as a result of the intervention.
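The score-to-success-rate conversion described above is simple arithmetic. As a minimal sketch (the scores below are invented for illustration; they are not the study’s data, and the function name is ours), it can be expressed as:

```python
from statistics import mean, stdev

MAX_SCORE = 8  # maximum possible score on the listening comprehension test

def describe_group(scores):
    """Return mean, standard deviation and success rate (%) for one group's raw scores."""
    m = mean(scores)
    return {
        "mean": m,
        "sd": stdev(scores),
        "success_rate_pct": round(100 * m / MAX_SCORE, 1),
    }

# Hypothetical pretest scores for one group (invented, not the study's data)
print(describe_group([4, 6, 5, 3, 4, 2]))
```

Reporting success rates rather than raw means simply rescales every group to the same 0–100 range, which makes the pretest, posttest and delayed-posttest trajectories directly comparable.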
After ensuring that the assumptions of homoscedasticity and normality were met, we conducted a four-factor repeated-measures ANOVA (group of participants, sequence, time and test). The results indicate an overall effect of group (F (2, 23.2) = 14.55, p < 0.0001); however, no effects of sequence, time or test were found. This indicates that the three groups’ performance on the tests differed significantly regardless of sequence, time or test.
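The logic behind the F statistic reported above can be illustrated with a sketch of the much simpler one-way (between-groups) case; note that this is a deliberate simplification of the study’s four-factor repeated-measures model, and the scores below are invented:

```python
from statistics import mean

def one_way_anova_F(groups):
    """Compute the F statistic for a one-way ANOVA.

    F = (between-group mean square) / (within-group mean square).
    `groups` is a list of lists of scores, one inner list per group.
    """
    all_scores = [s for g in groups for s in g]
    grand_mean = mean(all_scores)
    # Between-group sum of squares: group size times squared deviation of each group mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations of scores from their own group mean
    ss_within = sum((s - mean(g)) ** 2 for g in groups for s in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Three hypothetical groups of listening scores (out of 8)
f_stat = one_way_anova_F([[2, 3, 4], [4, 5, 6], [6, 7, 8]])
print(f_stat)  # 12.0 for these invented scores
```

A large F means the variability between group means is large relative to the variability within groups, which is what the significant group effect above reflects; a repeated-measures design additionally partitions out within-participant variance across testing times.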
Further descriptive analyses were conducted to determine whether participants’ listening performance differed between items that were explicitly (easier) or implicitly (more difficult) stated in the vlog excerpt. Table 3
reveals once again that the groups were not comparable at the beginning of the intervention and that their listening performance followed different trajectories over time.
In addition, Table 3
shows that, at all three testing times, all groups were more successful at correctly answering questions addressing information explicitly stated in the vlogs. A repeated-measures ANOVA, conducted after verifying that the data adhered to the assumptions of homoscedasticity and normality, confirmed a main effect of question type (F (1, 46.5) = 123.78, p
3.2. Listening Performance on the Qualitative Measure: Scores on First and Last Implicit Listening Strategies Training
Each implicit strategies training session with pairs of participants from the two experimental groups was recorded. Verbalisations of the 17 participants who took part in all five intervention sessions were analysed in terms of participants’ ability to identify, retell and make connections between key elements from the vlog excerpt. Table 4
shows participants’ scores for the first (T1) and last (T5) training sessions and offers a different perspective on participants’ listening performance.
Overall, the changes observed from T1 to T5 fall into three categories: (1) a better ability to verbalise key content of the vlog (from − to +, as for A04, A06, A12 and B01); (2) a better ability to establish connections between different elements of the vlog (from a lower to a higher score on 4, as for B11, B09, A14 and B05); or (3) a combination of the two (as for A01). Overall, participants remained at the same level or made gains in either the ability to discuss key concepts from the vlog or the ability to make more elaborate connections between different elements of the vlog. These results stand in sharp contrast with those obtained on the listening comprehension tests discussed above.
This discrepancy between listening performance measured quantitatively and qualitatively is evident with participant B09. While his listening performance on the three listening comprehension tests remained stable over time (score of 4/8 at T1, T2 and T3), qualitative observations of his listening performance showed great changes from T1 (score = 2+) to T5 (score = 4+). At T1, B09’s verbalisations (see Example 1) showed repetition of units that included keywords (partement, “apartment”; beaucoup cher, “very expensive”) with no connections, and were therefore associated with a score of 2+.
|1.||B09||Elle dit: le: partement (.) trop (.)↘grand↘|
| || ||“She says: the: apartment (.) too (.) big↘”|
| ||B09||Beaucoup cher (.)|
| || ||“Very expensive (.)”|
A closer look at B09’s verbalisations in terms of explicit and implicit information shows that the only relevant information he identified (about the apartment being expensive) was stated explicitly by the vlogger.
At T5, B09’s verbalisations were generally characteristic of level 4+. He first mentioned the majority of the explicitly stated key elements from the excerpt and then, when prompted to summarise the vlog, was able to connect these elements in a narrative-like sequence (see Example 2).
|2.||Hum (.) De: Prome(.) promener dans quartier: je pense que (.) elle a cherché↗ l’appartement: Chien (.) Chien (.) [euh] Son chien↗ accepté: ou non↘|
| ||“Hum (.) to: wa(.) walk in the neighbourhoo:d I think that (.) she looked for↗ the apartment: dog(.) dog (.) [uh] her dog↗ accepte:d or not↘”|
B09 continued to add to his summary, and it is here that we see evidence of implicit information processing, as he mentioned where he thought the vlogger lived (see Example 3). The information that the vlogger lives in Montreal was not explicitly given in the vlog (i.e., at the beginning of the excerpt the vlogger mentions she is looking for a new apartment but wants to stay in the same neighbourhood; at the end of the excerpt, she asks for help finding an apartment in Montreal; she thus never explicitly says she lives in Montreal).
|3.||[Euh] Je pense que elle déjà: ↗ (.) habite Montréal↘|
| ||“[uh] I think that she already: ↗ (.) lives in Montreal↘”|
It thus seems that the qualitative data capture the participants’ listening comprehension competence differently than the quantitative measure. These results are discussed in the next section.
4. Discussion and Conclusions
The present small-scale study sought to answer the following research question: Does implicit teaching of listening strategies and metacognition help foster oral comprehension of adults with low levels of education who are learning French Lx? Two types of data were gathered: quantitative data were collected through listening comprehension tests used before and after the intervention and qualitative data were obtained by analysing participants’ verbalisations during the first and last implicit strategies training sessions.
Results on the listening comprehension tests showed that the three groups were not comparable throughout the study and that no effect of the experimental treatment on the participants’ listening performance could be observed. The variation observed over time is, however, difficult to interpret, as the experimental groups did not perform similarly, and the control group’s performance was similar to that of Group A. In addition, other linguistic variables (i.e., lexical and grammatical knowledge) that were not controlled for might have influenced participants’ performance. Another factor that needs to be acknowledged is a possible teacher effect: since the three groups had different teachers, and since we did not monitor what was taught in class, teachers might have influenced students’ performance. At this point, although the less-skilled listeners in our study (Group A) did seem to benefit more from the intervention, it is difficult to draw conclusions about positive effects of the experimental treatment, unlike Vandergrift and Tafaghodtari’s original study, in which literate adolescent learners of French made clear progress in their listening journals: the positive effect we measured was not significant and did not apply to all participants, who were generally poor listeners. This weaker effect is not surprising, considering the extent to which the original procedure was adapted to create learning conditions better suited for SLIFE. Our participants thus had far fewer opportunities to explain and reflect on the decisions they made while listening, which limited the degree to which they could actually act on their listening processes.
This finding must, however, be interpreted with caution, as we must keep in mind the characteristics of our participants. It is indeed possible that some factors hindered the true manifestation of the participants’ competence on the listening comprehension tests. First, the format of the test (paper and pencil), of the questions (multiple choice) and of their layout (2D black-and-white pictograms) might have played a role. According to Huettig and Mishra, SLIFE underperform when 2D black-and-white images are used, compared to when they see real colour pictures: participants might have misinterpreted these abstract representations. The language and cultural expectations surrounding an assessment situation might also have been fairly unfamiliar to our participants (Loring 2017), who might have underperformed in this context. The poor performance of SLIFE on our detail-oriented questionnaires is also consistent with Henrich et al.’s findings suggesting that most cultures around the world favour holistic over analytic reasoning (i.e., participants might have understood the key, global ideas in the excerpts but paid little to no attention to the details targeted in the questionnaires). All of these elements underscore, for SLIFE, the poor ecological validity of a decontextualised task such as answering a questionnaire. Thus, the groups’ performance on the tests might be attributable to a combination of these factors, minimising the weight of comprehension per se in the measure.
Nevertheless, before reaching hasty conclusions about the ineffectiveness of the treatment, we must turn to the qualitative data obtained from the 17 participants in Groups A and B, which offer a more nuanced perspective. Indeed, while eight participants remained at the same level throughout the study, the remaining ones made gains in either the ability to discuss concepts actually related to the content of the vlog (n = 4), the ability to make more elaborate connections between different elements of the vlog excerpt (n = 4), or both (n = 1). Even though we do not have access to the same data for the participants in the control group and therefore cannot attribute the observed changes to the experimental treatment, we believe that these qualitative data captured more finely how participants’ listening performance unfolded, although these results might also not truly reflect the participants’ actual comprehension. In this regard, several limitations can be identified.
First, building on Young-Scholten and Strom’s and Tarone’s findings that adult non-readers are able to repeat content words, participants might have been able to reproduce a string of phonemes forming a word or phrase associated with a key element in the vlog without actually having understood its meaning or importance. Second, the content of the participants’ verbalisations may have been affected by their linguistic abilities (i.e., their grammatical and lexical knowledge, which were not controlled for), their retrieval capacities, or the degree to which they were detail-oriented or talkative: some may have understood more than they actually said, and others may have established more sophisticated connections between different segments of the vlog than they expressed. In addition, to our knowledge, our study is the first to have used a free recall protocol with SLIFE. Our adaptation of Kim’s coding scheme to assess participants’ ability to successfully retell the content of a vlog is still at an exploratory stage. Because of our participants’ limited production skills, we relied exclusively on the phonological encoding of their verbalisations to assess their ability to make more or less sophisticated connections between the words and/or phrases they understood. The first author’s field notes indicate that other cues, such as gestures or gaze, might have provided further evidence of ongoing language processing that could have been useful in establishing participants’ scores. Thus, further research assessing the validity of this adaptation, and SLIFE’s language processing capacities in general, is urgently needed.
Faced with these seemingly inconsistent results from our two data sources, one must bear in mind that our two measures targeted very different listening processes and placed different cognitive and linguistic demands on the participants. On the one hand, the pen-and-paper oral comprehension tests required the participants to attend to both main and secondary ideas in the vlog, to keep them in working memory, to decode the answer sheets and the questions asked, and finally to select an answer. This testing format was thus very cognitively demanding, requiring participants to continuously shift their memory and attentional resources between the vlog content and the answer sheet. On the other hand, the free recall protocol used during the intervention required the participants to attend primarily to the words and phrases they decoded, memorise them, and use their linguistic repertoire to verbalise the elements they could retrieve from memory. Similar to the pen-and-paper test, this task imposed important memory and retrieval demands, but only on the content decoded by the participant. In addition, the free recall protocol was more linguistically demanding than the pen-and-paper test: the participants had to draw on their linguistic resources to convey what they had gathered from the vlogger. Thus, the two data sources do not shed light on the same processes and constraints mediating listening performance; they should be seen as complementary rather than inconsistent.
If both measures are adapted so as to lower their memory and linguistic demands and to target key elements from the vlog, they could be used in combination to prepare learners to meet the school’s home program oral comprehension objective at this level (i.e., “Comprendre des informations, les repérer et faire des déductions simples dans une courte vidéo ou un court texte audio et répondre à des questions s’y rapportant”; “Understand, identify and make simple inferences about content of short videos or short audio texts and answer related questions about them”).
Despite the limitations of our instruments and our small sample size, the current study has nevertheless responded to Tarone’s call to replicate “SLA studies in populations of illiterate and low-literate learners” (p. 77), being the first, to our knowledge, to have addressed implicit listening strategies training for SLIFE, partially replicating Vandergrift and Tafaghodtari. The specific focus on this understudied population yields important new insights that can help the field of ISLA better integrate this population into its research agenda. For example, in light of our findings, it appears that a process-product research design might be better suited to capturing SLIFE’s actual Lx skills than a before-and-after design. Indeed, pretest–posttest designs, widely used in ISLA research, rely on the cultural assumptions that: (1) individual performance prevails over group performance; and (2) performance on one-time, decontextualised tasks displays the depth of one’s individual knowledge or skills. However, SLIFE tend to underperform under these conditions; these cultural practices are at odds with their own ways of learning and the reasons motivating learning in the first place (DeCapua 2016
). In addition, our findings have exposed the fact that, even though we know much about SLIFE’s characteristics, very little is known about how they deploy their cognitive and attentional resources when faced with the task of processing authentic oral input. Further research is thus needed to validate measures meant to accurately describe SLIFE’s depth of language processing and awareness, as these are key cognitive factors mediating Lx learning (Hulstijn 2019
). We hope that other ISLA researchers will be inspired to carry out classroom research with SLIFE. More attention to this population will ultimately allow for a more comprehensive assessment of the extent to which general Lx learning theories and hypotheses, based on highly educated and literate Lx learners, are valid and reliable.