Article

L2 Reading Assessment from a Sociocultural Theory Perspective: The Contributions of Dynamic Assessment

Ali Kushki and Hossein Nassaji
1 College of Education, Purdue University, West Lafayette, IN 47907, USA
2 Department of Linguistics, University of Victoria, Victoria, BC V8P 5C2, Canada
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(4), 342; https://doi.org/10.3390/educsci14040342
Submission received: 9 February 2024 / Revised: 18 March 2024 / Accepted: 19 March 2024 / Published: 24 March 2024

Abstract

Our understanding of assessing L2 reading has significantly expanded in recent years, including both theoretical and practical aspects. There is a growing consensus that reading comprehension involves multiple skills and subskills. Classroom-based assessment practices reflecting such conceptualizations have also become widely utilized. This article explores Vygotskyan sociocultural theory (SCT) and its implications for L2 reading assessment, with a specific focus on dynamic assessment (DA) as an effective classroom-based approach for L2 reading and literacy instruction. We review the research that has applied DA principles to the assessment and teaching of L2 reading and conclude by outlining potential avenues for future DA research and L2 reading instruction.

1. Introduction

The question of how best to teach students to read and how to assess their reading abilities has long been debated among researchers, teachers, and major stakeholders in first (L1) and second (L2) language learning and teaching. In L1 research, these discussions have been termed the reading wars, characterized by the controversies surrounding reading instruction and assessment over the past 100 years [1]. A recurrent theme in scholarly publications and professional circles is “the science of reading”, a phrase used to represent the growing body of knowledge about reading, how it develops, and the best practices for teaching the skill based on scientific methods [1,2]. As scholars continue to challenge some of the foundational assertions held by proponents of the science of reading, they bring fresh perspectives to advance the ongoing discussion (see [3]).
The word Science in the title of this Special Issue in the Education Sciences journal—The Science of Second Language Reading—evokes a parallel association with L1 reading instruction for the reader. Despite the numerous challenges and controversies encountered along the way, there are emerging insights into the development of L2 reading and instructional practices [4,5]. The combined conclusions of reading research in L1 and L2 contexts acknowledge that a multitude of factors, such as language proficiency, cultural background, metacognitive skills, and motivation, as well as strategy awareness and use, influence learners’ reading abilities, and that these factors can have diverse impacts on individual learners (L1 [1,3] and L2 [5,6,7,8,9]).
The growth in research in recent years has given rise to more contemporary conceptualizations of the reading construct. This has not only broadened our understanding of the intricacies involved in reading but has also led to new perspectives on assessment, with a particular emphasis on classroom-based assessment. As observed by Grabe and Jiang [7], traditional reading assessment practices among teachers primarily involved a limited set of tasks and tools, such as multiple-choice and true/false items. The primary concern of these practices was to measure achievement for summative purposes [10]. Recently, an important aspect of classroom-based assessment that has gained momentum is assessment for learning (AfL), where the focus is on using assessment to promote learning, rather than just documenting and measuring it [11,12,13]. AfL encourages maximum feedback on learner performance and highlights student–teacher interactions. This shift also emphasizes the importance of aligning assessment methods with the dynamic nature of reading, taking into account diverse factors such as sociocultural influences, reader engagement, and individual differences in learning styles and preferences.
Building upon the aforementioned context, this paper explores dynamic assessment (DA) as a potent classroom-based approach for assessing the reading abilities of L2 learners. DA involves assessing learning within the interactive moments between teachers and learners. In these interactions, teachers continuously evaluate their students’ understanding of specific content and offer feedback accordingly. As such, DA adjusts feedback according to the individual learners’ knowledge and utilizes their responses to the feedback, painting a detailed picture of their reading abilities and developing appropriate instructional interventions. By its very nature, not only does DA provide feedback, but it also feeds forward into the whole process of teaching and assessing. The insights gained from DA are particularly crucial given the current operational constraints of standardized tests in accurately capturing learners’ proficiency across the diverse skills and subskills that constitute reading [7].
In the subsequent sections of this article, we present current perspectives on the reading construct and discuss their implications for assessing the skill. Following this, we articulate the theoretical principles of DA and delineate how L2 reading research and practice stand to benefit from this approach. Additionally, we present an overview of the scant existing literature on DA applications to L2 reading, suggesting areas for future research.

2. L2 Reading Assessment

Assessing reading is closely linked to how the construct is understood. Traditionally, the dominant view has been that reading is a single unitary construct that is measurable through general tests [7,14,15,16]. There is now consensus that reading comprehension is a multicomponent construct (cf., [17,18]) involving many skills and subskills, such as morphological knowledge, text structure awareness, and various lower-level and higher-level linguistic processes [19,20]. Koda [15], for example, views the multicomponent nature of L2 reading as a “basic fact” that researchers must incorporate in both theory formation and empirical validations. The componential perspective has instigated numerous empirical investigations into the variables that affect L2 learners’ reading comprehension abilities [21,22,23]. In their meta-analysis study, Jeon and Yamashita [22], for example, found that grammar knowledge, vocabulary knowledge, and decoding abilities were the three strongest correlates of L2 reading comprehension. A recurring finding in the literature is that reading comprehension results from the cumulative interaction of the diverse skills and subskills that readers employ during the reading process.
The growing acceptance of the componential view can be attributed to the limitations of traditional top-down perspectives [19] and of the single unitary construct of reading comprehension. Top-down perspectives emphasize “higher level contextual and background knowledge sources, downplaying to a large degree the contribution to reading of basic lower-level visual word recognition processes” [20] (p. 262). The unitary construct falls short of “identify[ing] variations in reading problems or the sources of those problems” [15] (p. 3), even though individual idiosyncrasies make learners prone to different reading comprehension challenges. One reason for this major pitfall is that reading test scores are typically reported globally, making it difficult to adequately deal with the variation therein [15,17]. Such unitary test scores mask the relative contributions of various skills and subskills to reading comprehension, hence limiting their diagnostic value.
The above-mentioned advances in re-conceptualizing reading ability have introduced new challenges for assessing the ability. Calls for developing assessment tools to capture the complexity of the multicomponent ability are now stronger than ever. Grabe and Jiang [7], for example, advocate expanding reading measures to reflect the multicomponent nature of reading ability more accurately. One particularly useful development, the authors note, is to “develop diagnostic assessment measures for L2 reading because much more detailed information could be collected in this way” [7] (p. 13). The need for diagnostic measures is even more warranted for L2 readers in order to diagnose not only language-related issues but also challenges specific to reading [24]. Global reading scores are unlikely to reveal such information because they are too general to identify the weaknesses or strengths of L2 readers [17].
The call for developing diagnostic assessments for L2 reading aligns with the major interest in Diagnostic Language Assessment (DLA) in the field of applied linguistics. The three major components of DLA are diagnosis, feedback, and remedial learning [25,26,27]. According to Lee [26], the recent surge of interest in DLA within the language assessment and learning field is closely tied to the growing demand for tailored assessments. These assessments aim to precisely identify the root causes of learners’ challenges in language learning, facilitating effective strategies and providing means for learners and teachers to address these issues. These new learning-focused approaches, Lee adds, highlight “the learning-inducing nature of language assessment in the sense that assessment should be geared toward identifying learning potentials and promoting further learning beyond what the test-takers currently know or are able to do” [26] (p. 300). The aim is to gather detailed diagnostic information from current assessments or develop new tools and procedures tailored for diagnostic purposes.

3. Sociocultural Theory (SCT) and DA

To comprehend DA and its underlying principles, it is necessary to understand sociocultural theory (SCT). SCT has its roots in the works of Vygotsky and his colleagues, with its core tenet asserting that sociocultural and mental activities are intricately interconnected in a mutually dependent, symbolically mediated relationship [28,29]. The interconnection signifies that social interaction is the source of learning and development, rather than being exclusively confined to an individual’s mental faculties and biological endowment [30,31]. The assertion that mental functioning evolves through social interaction is best established in Vygotsky’s general genetic law of cultural development [29,32,33]. According to the principle, “Every function in the child’s cultural development appears twice: first, on the social level, and later, on the individual level; first, between people (interpsychological), and then inside the child (intrapsychological)” [34] (p. 57). Expounding on the principle, Wertsch [35] contends that initial learning unfolds in social interactions and negotiations between novices and experts and/or between novices themselves. This statement is a key concept in the sociocultural theory of development. In essence, Vygotsky emphasizes the crucial role of social interactions and cultural influences in shaping individuals’ cognitive development, highlighting the progression from external social interactions to internalized cognitive processes.
The metaphorical space where interaction between the mediator and learner transpires with the ultimate goal of bringing about self-regulation is the Zone of Proximal Development (ZPD), defined by Vygotsky [34] (p. 86) as “The distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers”. Within this zone, the mediator—typically a teacher—assists the learner when the learner needs help to handle a situation beyond his/her current individual capabilities. The assistance, called mediation in the SCT literature, can take different forms, such as leading questions and hints. Newman, Griffin, and Cole [36] aptly use the metaphor of a construction zone to describe the collaboration that takes place during ZPD-sensitive activities. It is within this zone that the learner attains maximum autonomy, transitioning from collaborative performance, characterized by reliance on external mediation, to self-regulation [34]. According to Lantolf and Poehner [37], Vygotsky perceived the ZPD as a framework for diagnosing learner abilities and as the foundation for interventions aimed at fostering their development. The ZPD, as emphasized by the authors, functions as a means to both assess and foster the developmental process, moving away from the product-oriented approach seen in conventional assessments.
Vygotsky’s SCT and the ZPD construct have noteworthy implications for assessment. In contrast to conventional assessment methods that confine assessment to individual learner performance without external assistance, SCT-informed assessment practices embrace the use of supplementary tools, such as teacher support [38]. This shift transforms the dynamics of the teacher–student relationship from a traditional hierarchical structure to a communicative one, wherein the previously neutral and impartial teacher now assumes a teaching–assisting role, and the student becomes more actively engaged in the learning process [38,39,40]. In the subsequent paragraphs, we will explain that DA embodies the SCT perspective of assessment, as just described.
With the above as context, DA is an approach designed to understand individual differences and their instructional implications [39]. The strongest advocates for DA believe in the incorporation of actual learning within the process, according to Lidz and Elliott [41]. Foundational assumptions guiding this approach include (a) the modifiability of mental processes, (b) viewing assessment as an interactive process with an embedded learning phase, and (c) recognizing the primary goal of assessment as assisting learners in realizing their latent abilities [39]. These assumptions set DA apart from non-dynamic assessment practices, which focus on assessing learners’ independent performance. Vygotsky [34] argued that assessing independent performance only reveals abilities that have already fully developed, leaving those in the process of emerging undisclosed. In contrast, DA not only sheds light on what learners can achieve individually but also on their potential accomplishments with appropriate mediation. Lidz [42] sums up the key characteristics of DA as embedding interventions within the assessment process and focusing on learner responses to the interventions.
In L2 research, effective mediation during ZPD activities is understood as a function of how it evolves and is negotiated in the interactions that typically transpire between a learner and a teacher. According to Aljaafreh and Lantolf [43], such mediation is graduated, contingent, and dialogic. Mediation adapted to a learner’s ZPD becomes progressively more detailed and explicit (i.e., graduated) depending on the learner’s responsiveness. Mediation is also contingent, meaning that it is provided only when needed and removed as soon as the learner demonstrates evidence of self-control and independent functioning. Graduated and contingent mediation is provided as the mediator negotiates it with the learner (i.e., dialogic). These notions have been utilized to formalize teacher–learner interactions in interactionist and interventionist DA [44]. A fundamental distinction between the two is how they approach mediation or “the relative freedom mediators have to respond to learners’ difficulties and to pursue concerns as they emerge during the interaction” [37] (p. 15). In interventionist DA, mediation is highly scripted across a prompting scale of increasing specificity in relation to the problem (such as a linguistic one) at hand. The interventionist practitioner moves through the scale prompt by prompt depending on the learner’s responsiveness, as sketched below. Interactionist DA is, on the other hand, more open-ended and conversational [45,46]. The mediator has the latitude to take whatever measure seems suitable to push the learner further in their ZPD, stopping short of providing the answer. However, giving the answer may foster development if executed at an opportune moment during the interaction [37].
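To make the interventionist procedure concrete, the brief Python sketch below illustrates how a scripted prompting scale might be operationalized: the learner first attempts an item unassisted, and pre-scripted prompts of increasing explicitness are offered only as long as the response remains incorrect, with the final prompt revealing the answer. This is an illustrative reading of the procedure described above, not a reproduction of any published instrument; the item content, prompt wording, and function names are hypothetical.

```python
def mediate(item, prompts, respond):
    """Run one interventionist DA episode for a single item.

    `respond(item, prompt)` returns the learner's answer after hearing `prompt`
    (prompt is None for the initial, unmediated attempt). `prompts` is ordered
    from least to most explicit; by convention the last one reveals the answer.
    Returns the number of prompts used (0 = unmediated success).
    """
    if respond(item, None) == item["answer"]:
        return 0
    for used, prompt in enumerate(prompts, start=1):
        if respond(item, prompt) == item["answer"]:
            return used
    return len(prompts)


# Toy simulation: the learner succeeds once the hint narrows the search space.
item = {"question": "What is the main idea of paragraph 2?", "answer": "b"}
prompts = [
    "That is not correct. Try again.",
    "Re-read paragraph 2 and pay attention to the topic sentence.",
    "Focus on the sentence beginning 'In contrast ...'.",
    "The answer is (b), because ...",
]
attempts = iter(["a", "c", "b"])
print(mediate(item, prompts, lambda i, p: next(attempts)))  # -> 2
```

The number of prompts a learner needs before self-correcting is the raw material for the diagnostic profiles and scoring schemes discussed in the studies reviewed below.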
In the subsequent paragraphs, we discuss some of the implications of DA for L2 reading assessment. Then, we report a select number of studies applying DA to L2 reading.

4. DA and L2 Reading Assessment

DA holds substantial implications for the assessment of L2 reading in classroom settings. It introduces a dynamic and interactive approach to assessment, emphasizing the ongoing development of language skills rather than a static measure of achievement. In the context of L2 reading, DA allows educators and teachers to not only assess a student’s current proficiency but also to understand their learning potential and the factors influencing their language development. This perspective encourages a more personalized and targeted approach to instruction by identifying specific areas of strength and areas that may benefit from additional support. By recognizing the dynamic nature of language acquisition, teachers can tailor their teaching and assessment strategies to better suit individual learner needs, fostering a more effective and inclusive learning environment for L2 readers in the classroom.

5. Empirical Research on DA and L2 Reading

DA in the field of L2 reading has recently emerged as an important area of empirical research. As educators and researchers explore the dynamic interplay between assessment practices and language learning, the focus on L2 reading adds a critical dimension to our understanding of how learners engage with and develop proficiency in a second language. This body of empirical research aims to uncover the intricacies of how DA methods influence L2 reading assessments, shedding light on their effectiveness, adaptability, and potential implications for language learning in diverse educational settings. For instance, Dörfler and colleagues [47] advocate the adoption of DA procedures as an effective diagnostic approach, emphasizing its capacity to provide valuable insights into learners’ reading competence and their broader learning abilities. Such a diagnostic approach is considered to greatly benefit teachers and other users, as standardized assessments, focused primarily on validity and reliability, often lack the depth provided by diagnostic reports. Also, standardized assessments face constraints in incorporating a variety of assessment tasks, further limiting the information that they can offer [7]. The latter assertion is further supported by Laing and Kamhi [48], who provided evidence that a dynamic measure of phonological awareness was a better predictor of reading than static measures of phonological awareness.
Generating diagnostic reports is another key aspect of DA, making it particularly well suited for assessing L2 reading in classroom settings. Such reports involve all of the key procedures of a solid diagnosis, as listed in Alderson, Brunfaut, and Harding [18]: listening and observing; initiating an assessment of the problem or forming a hypothesis; employing relevant tools, databases, intuition, or specialized assistance; making informed decisions; and, ultimately, providing recommendations for intervention. Aligned with these procedures, DA yields a diagnostic classification of learners and their reading abilities. The resulting reports not only describe the learner’s improved reading skills but also identify obstacles hindering optimal reading performance. They specify the learner’s particular needs, offer recommendations for further instruction and intervention, and establish a baseline for the learner’s future performance [42]. Such diagnostic profiles are crucial, especially for documenting and addressing individual variations among L2 readers, such as differences in background knowledge and cultural experiences when engaging with texts [7,49]. Relatedly, Kozulin and Garb [50] recommend DA procedures as a powerful tool in the hands of L2 teachers for designing individual learning plans for learners with different reading needs.
While DA offers notable advantages, the exploration of reading-related questions through a DA lens is in its nascent stages in both L1 and L2 settings. In the L1 domain, Dixon and colleagues [49] conducted a systematic review exploring the role of DA in predicting reading development over time. Their analysis encompassed 18 studies, revealing an absence of research explicitly examining the impact of DA on multilingual learners. The authors argue that traditional reading assessments pose challenges in predicting future reading performance, especially for multilingual learners. In contrast, they propose that DA, by focusing on learning potential in response to teaching, may offer a more accurate form of assessment. The findings endorse the use of DA for a holistic and dynamic evaluation of student reading performance.
DA has been less utilized in L2 reading. To the best of our knowledge, only a few L2 reading studies have employed DA. Nevertheless, there is a discernible emergence of interest and a gradual increase in studies delving into this area, a select number of which we report subsequently. This trend reflects the growing recognition of the potential benefits and insights that DA can bring to the understanding of L2 reading processes and reading proficiency development.
Kozulin and Garb [50] were pioneers in the application of the DA approach to assess English language learners’ ability to learn, activate, and employ effective reading comprehension strategies. In their study, twenty-three learners underwent a standard reading test used for placement purposes, which necessitated the application of cognitive strategies. Following this individual assessment, teachers engaged in interactions with the learners, employing guidelines to mediate strategies employed by the learners in extracting meaning from texts, including the comprehension of text structures. After the mediation on strategy use during reading, the researchers re-evaluated the learners to investigate the mediation’s impact on their performance. The results revealed significant variations in the post-test performance of learners who initially scored similarly on the pre-tests. Notably, two participants, initially achieving 29% of correct answers on the pre-test, exhibited marked differences in their post-test scores, achieving 59% and 38% after the mediation phase. These findings served as the basis for the authors’ claim that DA provided comprehensive insights into the diverse learning needs of students who displayed the same standard performance scores. Notably, Kozulin and Garb [50] distinguished between students with high learning potential and those with low learning potential by developing a scoring system called the learning potential score (LPS):
$$\mathrm{LPS} = \frac{S_{\mathrm{post}} - S_{\mathrm{pre}}}{S_{\mathrm{max}}} + \frac{S_{\mathrm{post}}}{S_{\mathrm{max}}} = \frac{2S_{\mathrm{post}} - S_{\mathrm{pre}}}{S_{\mathrm{max}}}$$
where $S_{\mathrm{pre}}$ is the pre-test score, $S_{\mathrm{post}}$ is the post-test score, and $S_{\mathrm{max}}$ is the maximum obtainable score.
Using this formula, Kozulin and Garb categorized the learners into a high learning potential subgroup (LPS > 1.0), a low learning potential subgroup (LPS < 0.71), and a large middle subgroup (LPS between 0.79 and 0.88). The higher the LPS, the more a learner is expected to gain from future instruction.
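To make the formula concrete, the minimal Python sketch below (illustrative only, treating the reported percentage scores as points out of a maximum of 100) applies it to the two learners described above.

```python
def learning_potential_score(s_pre, s_post, s_max):
    """Kozulin and Garb's LPS: (2 * S_post - S_pre) / S_max."""
    return (2 * s_post - s_pre) / s_max


# The two learners described above: identical pre-test scores (29% of the
# maximum) but markedly different post-test scores after mediation.
for s_post in (59, 38):
    lps = learning_potential_score(s_pre=29, s_post=s_post, s_max=100)
    print(f"pre=29, post={s_post}: LPS = {lps:.2f}")

# Against the reported cut-offs (high > 1.0, low < 0.71, middle 0.79-0.88),
# the first learner (LPS = 0.89) sits near the top of the distribution,
# while the second (LPS = 0.47) falls in the low learning potential subgroup.
```

On these figures, the two learners who looked identical on the pre-test receive LPS values of 0.89 and 0.47, quantifying the difference in learning potential that the static scores had concealed.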
Other researchers have adopted Kozulin and Garb’s scoring system. Poehner and Lantolf [51], for example, undertook an innovative approach by incorporating computerized DA (C-DA) principles into reading comprehension tests designed for multiple languages. The authors were particularly interested in documenting learner development as it unfolded in the moment-to-moment—called microgenesis—interactions between learners and computerized prompts throughout their participation in reading comprehension tests. These tests, similar to Kozulin and Garb’s [50] research, sought to distinguish between learners’ independent and mediated performance, predict their learning potential, and unveil learners’ ability to handle more challenging comprehension texts beyond familiar ones. The test items included mediating prompts that directed learners’ attention to specific sections where the correct response could be found. These prompts progressively narrowed the search space with each iteration and, when applicable, suggested focusing on particular information or relationships between propositions in the text during re-attempts. In cases of initial incorrect responses, learners received written prompts indicating the inaccuracy, followed by color-coded or orally presented sections of the text that required attention. Subsequently, learners were prompted to choose from the remaining response options. Unsuccessful second responses triggered prompts further refining the textual search for the correct answer. This process continued for four attempts, with the final prompt disclosing the correct answer along with an explanatory note. To assess learner performance, the authors computed two weighted numerical scores and a learning potential score (LPS). The scoring scale ranged from 4 for initial correct responses to 0 when the answer was revealed and explained to learners. Summing the points earned for initial correct responses represented each learner’s actual (unmediated) performance, while the total points earned for mediated responses were summed and reported as the mediated score. Following Kozulin and Garb’s methodology [50], Poehner and Lantolf calculated the LPS by considering the difference between the actual and mediated scores for each learner. The authors found statistically significant differences between learners’ actual and mediated performance. Furthermore, the authors computed a score assessing learners’ capacity to apply their learning beyond familiar testing scenarios, aiming to gauge their proficiency in comprehending more challenging texts. The findings revealed significant low-to-moderate correlation coefficients between LPSs and transfer scores. Poehner and Lantolf [51] drew upon these findings to conclude that DA provides valuable information about learners’ existing abilities, their ZPD, and how learners are likely to benefit from future instruction.
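The scoring scheme just described can be illustrated with the following Python sketch. The mapping of attempt numbers to points (4 down to 1, with 0 when the answer is revealed) follows the description above; applying Kozulin and Garb’s formula to the actual and mediated totals is our reading of the procedure, and the function names and example data are our own.

```python
# Illustrative scoring for a C-DA reading test in which each item allows up to
# four attempts. Points per item: 4 if correct on the first (unmediated)
# attempt, 3/2/1 for later attempts, 0 if the answer had to be revealed.
POINTS_BY_ATTEMPT = {1: 4, 2: 3, 3: 2, 4: 1}


def score_cda_test(attempts_per_item):
    """`attempts_per_item` gives, for each item, the attempt on which the
    learner answered correctly (None if the answer was ultimately revealed)."""
    actual = sum(4 for a in attempts_per_item if a == 1)                    # unmediated credit only
    mediated = sum(POINTS_BY_ATTEMPT.get(a, 0) for a in attempts_per_item)  # partial credit with prompts
    max_score = 4 * len(attempts_per_item)
    lps = (2 * mediated - actual) / max_score  # Kozulin and Garb's formula applied to the two totals
    return {"actual": actual, "mediated": mediated, "LPS": round(lps, 2)}


# A learner who solves item 1 unaided, items 2 and 4 after prompting, and
# never solves item 3:
print(score_cda_test([1, 3, None, 2]))  # {'actual': 4, 'mediated': 9, 'LPS': 0.88}
```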
Davin, Troyan, and Hellmann [52] synthesized components from the aforementioned studies to devise a paper-and-pencil dynamic reading comprehension task (D-RCT). Each D-RCT comprised three sections encompassing a total of 19 items, namely, Key Words, Main Idea, and Supporting Details. Accompanying each section were three prompts arranged in a hierarchical order, with the first prompt being the least explicit and the third the most explicit. These pre-scripted prompts were orally administered by a Spanish instructor to 15 students in a combined fourth- and fifth-grade Spanish as a foreign language classroom at a primary school affiliated with a prominent public university in the Northeastern United States.
A significant contribution of Davin et al.’s study [52] was the adaptation of Kozulin and Garb’s [50] formula to formulate a new equation aimed not only at assessing aspects addressed by Kozulin and Garb [50] and Poehner and Lantolf [51] but also at gauging responsiveness to mediation, termed the mediation value score (MVS). To calculate the MVS, Davin and colleagues [52] divided the additional points gained by each student as a result of mediation by the maximum potential score attainable. Consequently, the MVS quantified the degree to which students benefited from the mediation prompts provided, regardless of their absolute score. Employing this novel formula, Davin and colleagues [52] could ascertain whether the mediation prompts were appropriately calibrated to students’ ZPD across the three sections of each D-RCT. Drawing from their findings, the authors advocated the concurrent utilization of the MVS and Kozulin and Garb’s LPS to evaluate the suitability of mediation prompts tailored to individual learners.
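In the notation used earlier, the prose description of the MVS corresponds to the following (our rendering of that description, not a quotation of Davin et al.’s equation):

$$\mathrm{MVS} = \frac{S_{\mathrm{mediated}} - S_{\mathrm{actual}}}{S_{\mathrm{max}}}$$

where $S_{\mathrm{mediated}} - S_{\mathrm{actual}}$ is the additional credit earned through the mediation prompts and $S_{\mathrm{max}}$ is the maximum potential score. As a purely hypothetical illustration, a learner who gains 5 extra points through mediation on a task with a maximum of 20 points would receive an MVS of 0.25, regardless of his or her absolute score.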
Yang and Qian [53] employed C-DA to identify reading difficulties among a group of Chinese learners of English. In their study, the authors instructed a pilot cohort to articulate their thought processes after completing a reading comprehension test with multiple-choice questions. Utilizing these verbal reports, the authors identified challenging areas and provided appropriate mediation to the pilot cohort. The generated mediation prompts and the scoring scale developed for assessing the pilot cohort, borrowed from Kozulin and Garb [50] and Poehner and Lantolf [51], were incorporated into a C-DA program. A second group of participants engaged in reading a distinct text using C-DA, and individual profiles for each learner were generated based on their performance. The analysis of the data produced by the C-DA program revealed its superiority over traditional static assessment, showcasing stronger diagnostic capabilities and deeper insights for subsequent teaching.
Other researchers have utilized creative computerized DA procedures. Shabani [54], for example, conducted a study to explore the impact of computerized dynamic assessment on the reading comprehension processes of 100 intermediate-level Persian-speaking students majoring in English in Iran. In this investigation, participants’ incorrect responses to reading comprehension items prompted the initiation of mediation. The mediation process involved computerized support, beginning with textual cues and progressing to more elaborate visual cues. The author argued that the visual cues provided more information compared to textual ones. To assess this, a software program was designed to prompt participants four times. These prompts included an invitation to read the text again, view a visual cue while re-reading the text, view a second visual cue and repeat the reading, and finally, view a third visual cue and re-read the text. Shabani created individualized learner profiles by assessing the number of prompts needed to reach the correct answer, with a deduction of 10% from the maximum achievable score each time a prompt was provided. Profiles derived from DA resulted in a classification of learners and their abilities that was described as “strikingly different” compared to classifications based on traditional solo reading comprehension test results. Among the 79 students who scored zero on the standard test, 67 (84%) demonstrated the ability to successfully address comprehension items with suitable mediation during the computerized DA process. Shabani attributed the heightened performance to increased awareness of reading processes among participants, a result achieved through prompts.
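A minimal sketch of the per-item scoring logic described for this study appears below (in Python; the function name and the handling of items left unsolved after the final prompt are our own illustrative assumptions).

```python
# Each prompt used deducts 10% of an item's maximum achievable score, and up to
# four prompts are available (re-read the text, then three visual cues).
def item_score(prompts_used, max_score=1.0, solved=True):
    """Score for one comprehension item in the computerized DA procedure."""
    if not solved:
        return 0.0                       # assumption: no credit if never answered correctly
    prompts_used = min(prompts_used, 4)  # at most four prompts were offered
    return max_score - 0.10 * max_score * prompts_used


# A learner who answers after two prompts retains 80% of the item's value;
# an unmediated correct answer retains the full value.
print(item_score(2))  # -> 0.8
print(item_score(0))  # -> 1.0
```

Aggregating such item scores across a test is what allowed the DA-based profiles to separate learners who all scored zero under unassisted conditions.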
The limited use of DA in L2 reading, as reviewed above, indicates a gap that needs further attention and exploration. The pioneering work of Kozulin and Garb [50] and subsequent studies by Poehner and Lantolf [51] and Davin et al. [52] showcased the effectiveness of DA in assessing learners’ abilities and predicting their learning potential. Nonetheless, the need for more comprehensive and diverse studies in L2 reading remains evident. The emerging interest in and gradual increase in studies exploring DA in L2 reading signal a growing recognition of its potential benefits, necessitating further research to harness its full capabilities in understanding L2 reading processes and proficiency development.

6. Conclusions and Implications

Recent developments in L2 research have conceptualized the reading construct as a multicomponent ability. Parallel to these theoretical advances, scholars have called for developing commensurate assessment practices to capture the complexity underlying L2 reading. Notably, classroom-based assessments of reading have garnered significant attention due to their diagnostic value [7]. With its theoretical roots in the Vygotskyan SCT, DA fits into the fresh multicomponent landscape of L2 reading, owing to the approach’s intrinsically diagnostic and remedial nature.
It is important to recognize that while DA could serve as a valuable tool for identifying certain challenges faced by L2 learners in reading, DA should not be taken as a universal solution for all reading difficulties. This is because reading is a complex task and the challenges students face can stem from diverse sources such as their language proficiency, cultural differences, educational backgrounds, and individual learning styles, among others. No single approach can address every aspect of such complexities, underscoring the importance of a multifaceted approach to support L2 learners in overcoming their reading obstacles. Furthermore, like any educational approach, the effectiveness of DA hinges on a multitude of factors, including the expertise of the mediator, the timeliness of the feedback, and the effective collaboration between the mediator and the learner. These factors all play integral roles in shaping the outcomes of DA.
It is also important to acknowledge that the existing body of DA studies on L2 reading is not yet extensive enough to provide definitive conclusions regarding its efficacy across all facets of L2 reading. The current state of research suggests that while DA shows promise, particularly in identifying specific challenges faced by L2 learners, a comprehensive understanding of its effectiveness is still evolving. Therefore, we should interpret the insights presented in this paper with due consideration of the various factors mentioned above, while also recognizing the evolving nature of research in this field, which underscores the need for a cautious and open-minded interpretation of the findings discussed so far.
With the above in mind, future directions in exploring L2 reading through DA promise an exciting avenue for understanding the diverse skills and subskills associated with reading proficiency. For example, researchers can use DA to uncover learners’ strengths and areas in need of improvement, such as grammar and comprehension strategies, by dissecting reading into its constituent elements. This focused examination can shed light on whether certain skills are more amenable to evaluation through DA than others. Researchers can also explore the dynamic interplay among these various skills. The interdependence of skills like decoding, vocabulary knowledge, and comprehension strategies, for example, could be investigated through DA, offering insights into how these components interact and influence overall reading proficiency. As noted by Kozulin and Garb [50], the instructional value of DA is that it can help develop individual learning plans for students with different learning needs. Unraveling the interconnections of skills through the lens of DA holds the potential to inform tailored instructional strategies that address the multifaceted nature of L2 reading development.
Research in L2 reading can also explore the role of technological tools in DA, such as artificial intelligence (AI), to provide more accurate and insightful assessments of L2 reading skills. DA appears to be on the cusp of a pivotal phase in its development, transitioning from traditional human-to-human interactions to computerized DA, and now entering the era of AI. While computerized DA is still evolving, the introduction of AI could revolutionize the field, as AI models have the potential to significantly enhance the precision and depth of DA aimed at evaluating L2 reading skills thanks to their natural language processing capabilities. AI-driven approaches can not only expand the range of assessment tools, including DA, but also enable a more comprehensive examination of the diverse needs of learners, including their varied linguistic backgrounds, learning styles, and potential learning or reading disabilities. However, the extent of AI’s contributions to L2 DA remains to be seen, given the current capabilities of AI. AI models, such as ChatGPT, need further development before they can practically and effectively deliver DA-informed mediation, given their limited understanding of the learning context, including nuances in student behavior, individual learning styles, and real-time adjustments required in instructional delivery.
Finally, future research directions could focus on refining the adaptability and inclusivity of DA in diverse learning environments. Scholars may investigate the effectiveness of DA in accommodating learners with varying linguistic backgrounds, learning styles, and cognitive profiles. By considering the cultural and contextual dimensions of L2 reading, researchers can explore how DA can be culturally responsive, ensuring that assessments accurately capture the diverse experiences and challenges faced by learners. Dixon and colleagues [49], for example, have noted the potential of DA in tapping into the variance in the reading proficiency of culturally and linguistically diverse students. This inclusive approach to future research aims to advance the field of L2 reading assessment by promoting equitable evaluation practices that account for the rich tapestry of learners’ linguistic and cultural backgrounds.

Author Contributions

A.K. and H.N. contributed equally to the conceptualization and writing of the article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Petscher, Y.; Cabell, S.Q.; Catts, H.W.; Compton, D.L.; Foorman, B.R.; Hart, S.A.; Lonigan, J.C.; Phillips, B.M.; Schatschneider, C.; Steacy, L.M.; et al. How the science of teaching informs 21st-century education. Read. Res. Quar. 2020, 55, S267–S282. [Google Scholar] [CrossRef]
  2. Duke, N.K.; Cartwright, K.B. The science of reading progresses: Communicating advances beyond the simple view of reading. Read. Res. Quar. 2021, 56, S25–S44. [Google Scholar] [CrossRef]
  3. Castles, A.; Rastle, K.; Nation, K. Ending the reading wars: Reading acquisition from novice to expert. Psychol. Sci. Public Interest 2018, 19, 5–51. [Google Scholar] [CrossRef] [PubMed]
  4. Grabe, W. Research on teaching reading. Annu. Rev. Appl. Linguist. 2004, 24, 44–49. [Google Scholar] [CrossRef]
  5. Grabe, W.; Stoller, F. Teaching and Researching Reading; Routledge: New York, NY, USA, 2020. [Google Scholar]
  6. Adlof, S.; Perfetti, C.; Catts, H. Developmental changes in reading comprehension: Implications for assessment and instruction. In What Research Has to Say About Reading Instruction, 4th ed.; Samuels, S., Farstrup, A., Eds.; International Reading Association: Newark, DE, USA, 2011; pp. 186–214. ISBN 978-087-207-177-3. [Google Scholar]
  7. Grabe, W.; Jiang, X. Assessing reading. In The Companion to Language Assessment; Kunnan, A.J., Ed.; John Wiley & Sons: New Jersey, NJ, USA, 2014; pp. 1–16. ISBN 978-047-065-533-7. [Google Scholar]
  8. Kim, Y.S.G. Developmental, component-based model of reading fluency: An investigation of predictors of word-reading fluency, text-reading fluency, and reading comprehension. Read. Res. Quar. 2015, 50, 459–481. [Google Scholar] [CrossRef] [PubMed]
  9. Khalifa, H.; Weir, C.J. Examining Reading: Research and Practice in Assessing Second Language Reading; Cambridge University Press: Cambridge, UK, 2009; ISBN 978-052-173-671-8. [Google Scholar]
  10. Purpura, J.E. Second and foreign language assessment. Mod. Lang. J. 2016, 100, 190–208. [Google Scholar] [CrossRef]
  11. Berry, R. Assessment for Learning; Hong Kong University Press: Hong Kong, 2008; ISBN 978-962-209-957-9. [Google Scholar]
  12. Black, P.; Harrison, C.; Lee, C.; Marshall, B.; Wiliam, D. The nature and value of formative assessment for learning. Improv. Sch. 2003, 6, 7–22. [Google Scholar] [CrossRef]
  13. Black, P.; Wiliam, D. Assessment for learning in the classroom. In Assessment and Learning; Gardner, J., Ed.; Sage Publications: Newcastle upon Tyne, UK, 2006; pp. 9–25. ISBN 978-144-629-024-8. [Google Scholar]
  14. Jang, E.E. Demystifying a Q-Matrix for making diagnostic inferences about L2 reading skills. Lang. Assess. Quar. 2009, 6, 210–238. [Google Scholar] [CrossRef]
  15. Koda, K. Reading and language learning: Crosslinguistic constraints on second language reading development. Lang. Learn. 2007, 57, 1–44. [Google Scholar] [CrossRef]
  16. Stanovich, K. Progress in Understanding Reading: Scientific Foundations and New Frontiers; Guilford Press: New York, NY, USA, 2000; ISBN 157-230-564-9. [Google Scholar]
  17. Alderson, J.C. Assessing Reading; Cambridge University Press: Cambridge, UK, 2000; ISBN 978-051-173-293-5. [Google Scholar]
  18. Alderson, J.C.; Brunfaut, T.; Harding, L. Towards a theory of diagnosis in second and foreign language assessment: Insights from professional practice across diverse fields. Appl. Linguist. 2015, 36, 236–260. [Google Scholar] [CrossRef]
  19. Nassaji, H. State-of-the-art article: The role and importance of lower-level processes in second language reading. Lang. Teach. 2014, 47, 1–37. [Google Scholar] [CrossRef]
  20. Nassaji, H. Higher-level and lower-level text processing skills in advanced ESL reading comprehension. Mod. Lang. J. 2003, 87, 261–276. [Google Scholar] [CrossRef]
  21. Lim, H. Test format effects: A componential approach to second language reading. Lang. Test. Asia 2019, 9, 6. [Google Scholar] [CrossRef]
  22. Jeon, E.H.; Yamashita, J. L2 reading comprehension and its correlates: A meta-analysis. Lang. Learn. 2014, 64, 160–212. [Google Scholar] [CrossRef]
  23. Shiotsu, T.; Weir, C.J. The relative significance of syntactic knowledge and vocabulary breadth in the prediction of reading comprehension test performance. Lang. Test. 2007, 24, 99–128. [Google Scholar] [CrossRef]
  24. Harding, L.; Alderson, J.C.; Brunfaut, T. Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles. Lang. Test. 2015, 32, 317–336. [Google Scholar] [CrossRef]
  25. Alderson, J.C.; Haapakangas, E.L.; Huhta, A.; Nieminen, L.; Ullakonoja, R. The Diagnosis of Reading in a Second or Foreign Language; Routledge: Abingdon, UK, 2015; ISBN 978-041-566-290-1. [Google Scholar]
  26. Lee, Y.W. Diagnosing diagnostic language assessment. Lang. Test. 2015, 32, 299–316. [Google Scholar] [CrossRef]
  27. Lee, Y.W.; Sawaki, Y. Application of three cognitive diagnosis models to ESL reading and listening assessments. Lang. Assess. Quar. 2009, 6, 239–263. [Google Scholar] [CrossRef]
  28. Lantolf, J.P. Sociocultural Theory and Second Language Learning; Oxford University Press: Oxford, UK, 2000; ISBN 978-019-442-160-7. [Google Scholar]
  29. Lantolf, J.P.; Thorne, S.L. Sociocultural theory and second language learning. In Theories in Second Language Acquisition, van Patten, B.J., Williams, J., Eds.; Routledge: New York, NY, USA, 2007; pp. 201–224. ISBN 978-020-362-894-2. [Google Scholar]
  30. Swain, M.; Deters, P. New mainstream SLA theory: Expanded and enriched. Mod. Lang. J. 2007, 91, 820–836. [Google Scholar] [CrossRef]
  31. Swain, M.; Kinnear, P.; Steinman, L. Sociocultural Theory in Second Language Education: An Introduction through Narratives; Multilingual Matters: Bristol, UK, 2015; ISBN 978-178-309-318-2. [Google Scholar]
  32. John-Steiner, V.; Mahn, H. Sociocultural approaches to learning and development: A Vygotskian framework. Educ. Psychologist. 1996, 31, 191–206. [Google Scholar] [CrossRef]
  33. Wertsch, J.V. Voices of the Mind: A Sociocultural Approach to Mediated Action; Harvard University Press: Boston, MA, USA, 1991; ISBN 978-067-404-510-1. [Google Scholar]
  34. Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes; Cole, M., John-Steiner, V., Scribner, S., Souberman, E., Eds.; Harvard University Press: Boston, MA, USA, 1978. [Google Scholar]
  35. Wertsch, J.V. Mediation. In The Cambridge Companion to Vygotsky; Daniels, H., Cole, M., Wertsch, J.V., Eds.; Cambridge University Press: Cambridge, UK, 2007; pp. 178–192. ISBN 978-113-900-149-6. [Google Scholar]
  36. Newman, D.; Griffin, P.; Cole, M. The Construction Zone: Working for Cognitive Change in School; Cambridge University Press: Cambridge, UK, 1989; ISBN 978-052-138-942-6. [Google Scholar]
  37. Lantolf, J.P.; Poehner, M.E. Dynamic assessment in the classroom: Vygotskian praxis for second language development. Lang. Teach. Res. 2010, 15, 11–33. [Google Scholar] [CrossRef]
  38. Gipps, C. Sociocultural perspectives on assessment. In Learning for Life in the 21st Century; Wells, G.W., Claxton, G., Eds.; Blackwell Publishing: Bloomsbury, UK, 2002; pp. 73–83. ISBN 978-063-122-330-6. [Google Scholar]
  39. Lidz, C.S.; Gindis, B. Dynamic assessment of the evolving cognitive functions. In Vygotsky’s Educational Theory in Cultural Context; Kozulin, A., Gindis, B., Ageyev, V.S., Miller, S.M., Eds.; Cambridge University Press: Cambridge, UK, 2003; pp. 99–119. ISBN 978-051-184-097-5. [Google Scholar]
  40. Taylor, P.C.; Fraser, B.J.; Fisher, D.L. Monitoring constructivist classroom learning environments. Int. J. Educ. Res. 1997, 27, 293–301. [Google Scholar] [CrossRef]
  41. Lidz, C.S.; Elliott, J.G. Dynamic Assessment: Prevailing Models and Applications; Elsevier: Amsterdam, The Netherlands, 2000; ISBN 076-230-424-3. [Google Scholar]
  42. Lidz, C.S. Leaning toward a consensus about dynamic assessment: Can we? Do we want to? J. Cogn. Edu. Psychol. 2014, 13, 292–307. [Google Scholar] [CrossRef]
  43. Aljaafreh, A.; Lantolf, J.P. Negative feedback as regulation and second language learning in the zone of proximal development. Mod. Lang. J. 1994, 78, 465–483. [Google Scholar] [CrossRef]
  44. Lantolf, J.P.; Poehner, M.E. Dynamic assessment of L2 development: Bringing the past into the future. J. Appl. Linguist. 2004, 1, 49–72. [Google Scholar] [CrossRef]
  45. Kushki, A.; Rahimi, M.; Davin, K.J. Dynamic assessment of argumentative writing: Mediating task response. Assess. Writ. 2022, 52, 100606. [Google Scholar] [CrossRef]
  46. Kushki, A.; Nassaji, H.; Rahimi, M. Interventionist and interactionist dynamic assessment of argumentative writing in an EFL program. System 2022, 107, 102800. [Google Scholar] [CrossRef]
  47. Dörfler, T.; Golke, S.; Artelt, C. Dynamic assessment and its potential for the assessment of reading competence. Stud. Educ. Eval. 2009, 35, 77–82. [Google Scholar] [CrossRef]
  48. Laing, S.P.; Kamhi, A. Alternative assessment of language and literacy in culturally and linguistically diverse populations. Lang. Speech Hear. Serv. Sch. 2003, 34, 44–55. [Google Scholar] [CrossRef] [PubMed]
  49. Dixon, C.; Oxley, E.; Gellert, A.S.; Nash, H. Dynamic assessment as a predictor of reading development: A systematic review. Read. Writ. 2023, 36, 673–698. [Google Scholar] [CrossRef]
  50. Kozulin, A.; Garb, E. Dynamic assessment of EFL text comprehension. Sch. Psychol. Int. 2002, 23, 112–127. [Google Scholar] [CrossRef]
  51. Poehner, M.E.; Lantolf, J.P. Bringing the ZPD into the equation: Capturing L2 development during computerized dynamic assessment (C-DA). Lang. Teach. Res. 2013, 17, 323–342. [Google Scholar] [CrossRef]
  52. Davin, K.J.; Troyan, F.J.; Hellmann, A.L. Classroom dynamic assessment of reading comprehension with second language learners. Lang. Sociocult. Theory 2014, 1, 1–23. [Google Scholar] [CrossRef]
  53. Yang, Y.; Qian, D.D. Assessing English reading comprehension by Chinese EFL learners in computerized dynamic assessment. Lang. Test. Asia 2017, 7, 11. [Google Scholar] [CrossRef]
  54. Shabani, K. Dynamic assessment of L2 learners’ reading comprehension processes: A Vygotskian perspective. Procedia Soc. Behav. Sci. 2012, 32, 321–328. [Google Scholar] [CrossRef]
