Article

The Fallacy of Using the National Assessment Program–Literacy and Numeracy (NAPLAN) Data to Identify Australian High-Potential Gifted Students

by
Michelle Ronksley-Pavia
School of Education and Professional Studies and Griffith Institute for Educational Research, Griffith University, Southport, QLD 4222, Australia
Educ. Sci. 2023, 13(4), 421; https://doi.org/10.3390/educsci13040421
Submission received: 5 April 2023 / Revised: 13 April 2023 / Accepted: 14 April 2023 / Published: 20 April 2023
(This article belongs to the Special Issue Identifying and Supporting Giftedness and Talent in Schools)

Abstract
In Australia, gifted or talented students are defined according to the widely accepted model proposed by Gagné, where giftedness is understood as potential and talent is shown through competencies (or achievements); this definition clearly differentiates between the two constructs. Most Australian education jurisdictions espouse Gagné’s definitions and use a variety of mechanisms for identifying gifted and talented students—a commonly used practice is drawing on results from the Australian National Assessment Program–Literacy and Numeracy (NAPLAN) tests. This article explores the fallacy of using NAPLAN results to identify giftedness in high-potential (gifted) students in Australia, outlining key reasons why NAPLAN is unsuitable as an identification instrument for giftedness, and examines the erroneous use of NAPLAN as an identification tool for giftedness when it was never designed, validated, or intended as such an instrument.

1. Introduction

In Australia, gifted or talented students are defined according to the widely accepted model proposed by Gagné [1], whereby giftedness is conceptualized as potential and talent is evidenced through competencies (or achievement), thus providing a distinct separation between the constructs of giftedness and talent. Gifted or talented students, like all diverse students, require differentiated instruction to meet their learning needs. One challenge in providing differentiated programming for these learners is the identification of giftedness and talent. Australian schools use an array of mechanisms for identifying gifted and talented students—a common one is the results of the annual National Assessment Program–Literacy and Numeracy (NAPLAN) testing. In the Australian context, talent, particularly academic talent, can be seen as relatively straightforward to identify through a student’s achievements on tests such as NAPLAN. However, what is far more difficult to identify through school assessments and standardised tests is giftedness, or potential. The use of NAPLAN results by schools for identifying giftedness in high-potential (gifted) students is particularly problematic.

2. Defining Giftedness and Talent in the Australian Context

There are multiple definitions of giftedness and talent in use across the globe, yet there is no consensus on shared definitions [2]. However, for nigh on two decades Australian education systems have been captivated by evolving forms of Gagné’s [1] Differentiating Model of Giftedness and Talent (DMGT, formerly the Differentiated Model). This model provides a clear distinction between the conceptualizations of giftedness and talent. The precise wording from Gagné’s DMGT [3] defining giftedness and talent is as follows:
Giftedness [emphasis in original] designates the possession and use of biologically anchored and informally developed outstanding natural abilities or aptitudes (called gifts), in at least one ability domain, to a degree that places an individual at least among the top 10% of age peers.
Talent [emphasis in original] designates the outstanding mastery of systematically developed competencies (knowledge and skills) in at least one field of human activity to a degree that places an individual at least among the top 10% of ‘learning peers’, namely, those having accumulated a similar amount of learning time from either current or past training.
(p. 10)
According to Gagné’s model, the label of giftedness is associated with potential, where giftedness is said to be an outstanding level of aptitude in a particular domain [3]. For giftedness, this constitutes the top 10 percent of age peers in any one of six Aptitude domains: Intellectual (e.g., g factor–general intelligence, fluid reasoning, and crystallized reasoning); Creative (e.g., problem-solving, imagination); Social (e.g., perceptiveness, leadership); Perceptual (e.g., vision, proprioception); Muscular (e.g., power, strength); or Motor Control (e.g., agility, coordination) [1].
Conversely, the term talent is associated with achievements (or competencies) and conceptualized as outstanding mastery of competencies in a particular field [1]. Talent is reserved for individuals who are among the top 10 percent of peers in any of nine Fields of Competencies: Academic (e.g., languages, mathematics); Technical (e.g., construction, manufacturing); Science and Technology (e.g., engineering, medical); Arts (e.g., performing, applied); People Services (e.g., health, community); Management/Sales (e.g., management, marketing); Business Systems (e.g., financial, distribution); Sports and Athletics (e.g., sporting talents); or Games (e.g., video, puzzles).
Of course, for gifts to be transformed into talents (according to Gagné’s model), there needs to be a process of talent development. This talent development process involves “a progressive transformation through a long-term [emphasis added] learning process” [3] (p. 11), whereby environmental catalysts (e.g., milieu, significant individuals, and provisions such as curricula) and intrapersonal catalysts (e.g., motivation, volition, awareness) impact whether or not gifts are developed into talents. Gagné refers to this development of gifted potential into talent actualization, evident through the competencies, as the developmental process. This developmental process, in conjunction with the required catalysts (subject, of course, to Chance factors), is vital for talent (competency) development.
Gagné’s definition of giftedness thus emphasizes potential among age peers, whereas talent emphasizes ‘time’ spent on learning/training/talent development (but also the quality of time spent on these) in comparison to “learning peers”, ref. [3] (p. 3)—not necessarily age peers (for reader interest, see the work of Ericsson [4] on “world class performers”). This is an important distinction, meaning that talent may never be developed or actualized during the schooling years; rather, talent actualization is likely to be a life-long process (or at least one longer than the school years) (F. Gagné, personal communication, 11 February 2021).
Conceptualizations of giftedness in North America incorporate the concept of talent development as a life-long process [5]. This conceptualization has similarities with Gagné’s definition of talent, which involves a “long development process that has its foundations in remarkable aptitudes [gifts/high potential]”, ref. [6] (para. 1). The giftedness definition from the National Association for Gifted Children (NAGC) in the USA states that in young children, giftedness can be evidenced in domain-specific high achievement, high general ability, or a rapid rate of learning compared to age-peers [7]. As children grow into adolescence, high motivation and achievement in a domain (e.g., mathematics, music, language) are seen as part of the conceptualization of giftedness [7]. The NAGC [7] definition denotes giftedness as outstanding levels of aptitude (exceptional ability to reason and learn) or competence in one or more domains; in contrast to Gagné’s [1] definitions in the DMGT, it does not explicitly differentiate between giftedness and talent. This is a major difference in conceptualizations of giftedness and talent between Australia and North America.

3. The Australian Context and Identification of Giftedness

In Australia, there are six states and two territories, with different state and territory education departments, and regional departments in boundary-specific regions within these states and territories. Each state and territory has some form of policy (or advice) around inclusive education practices (some of which may mention gifted and talented students), and/or a gifted education policy of some sort (although some are makeshift at best). The more extensive state and territory policy documents outline suitable identification practices for schools. Where policies exist, they more often than not cite Gagné’s DMGT in some form (e.g., the superseded 2009 version) as the educational jurisdiction’s conceptualization of giftedness and talent. Accordingly, the identification practices espoused by education jurisdictions should follow Gagné’s [1] differentiation between giftedness and talent—the conceptualization of giftedness as potential across the six Aptitude Domains, and of talent as achievement in the nine Competency Fields.
Australia is purportedly an egalitarian society where the expectation is that everyone receives a ‘fair go’. Yet, there exists what is known as the ‘tall poppy syndrome’, a cultural practice where those who flourish before their peers are ‘cut down’ and everyone is held back so they can flourish at the same time [8,9]. Therefore, it is important to ensure that identification practices are equitable.
An overview of identification assessments used in the Australian context for identifying giftedness as an “outstanding level of aptitude in any domain”, ref. [3] (p. 10), can be seen in Table 1. For the purposes of this article, we will concentrate on the Domain of Intellectual giftedness from Gagné’s [3] model. Recall that according to Gagné, intellectual giftedness is the precursor for academic talent development [3]. The DMGT shows that giftedness has many dimensions; nevertheless, Gagné suggests that intellectual giftedness can be understood as “unidimensional”, ref. [3] (p. 14), and its most relevant measure is the Intelligence Quotient (IQ) score, which is seen as the “best measure for that unitary core, commonly called ‘the g factor’ [or general intelligence factor]”, ref. [3] (p. 14). The g factor encompasses general intelligence, fluid reasoning, and crystallized reasoning. Therefore, a relevant assessment for intellectual giftedness would be an IQ score derived from an appropriate psychometric assessment (e.g., Screening Assessment for Gifted Elementary and Middle School Students-3–SAGES-3; Wechsler Intelligence Scales–WISC-V, WPPSI-IV; Stanford-Binet Intelligence Scales–SB-5; Raven’s Progressive Matrices–RPM; Woodcock-Johnson IV–WJ IV) [3,10]. However, the practicality of using IQ instruments may be beyond the resources of schools, in terms of cost and time required. Improving systemic validity for identifying gifted learners is also challenging due to the limits of psychoeducational assessments [11].
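To make Gagné’s top 10% criterion concrete for the Intellectual domain, the minimal sketch below (an illustration, not part of Gagné’s model or the cited policies) assumes IQ scores are normed to a mean of 100 and a standard deviation of 15; under that assumption, the 90th percentile falls at roughly IQ 119, which is why a cut-score of around 120 is commonly associated with the DMGT’s minimum threshold for intellectual giftedness.

```python
# A minimal sketch, assuming IQ ~ Normal(mean=100, sd=15):
# translating "top 10% of age peers" into an IQ cut-score.
from statistics import NormalDist

iq_norm = NormalDist(mu=100, sigma=15)

# Score at the 90th percentile, i.e., the threshold for the top 10%.
cutoff = iq_norm.inv_cdf(0.90)
print(f"Top-10% IQ threshold: {cutoff:.1f}")  # ~119.2, often rounded to 120
```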
Relying solely, or over-relying, on any kind of psychometric assessment for identifying giftedness (as potential) has a significant number of well-recognized limitations, which may in some instances render it less useful (e.g., it does not assess creativity or divergent thinking skills). It is worth briefly noting here that psychometric assessment results, such as the Full Scale IQ score (FSIQ), can be impacted by a number of factors; for example, twice-exceptionality (giftedness and co-occurring disability), culture, educational opportunities, socio-economic factors, and other issues (see for example, Flynn [12], Gould [13], Murdoch [14]).
In some instances (e.g., twice-exceptionality), and for some IQ assessments (e.g., the WISC), the General Abilities Index (GAI) can be a more useful description of an individual’s intellectual ability than the FSIQ (see Weiss et al. [15] for specific details). The GAI may be preferred as an alternative way of summarizing overall ability. Thus, the GAI can provide different impressions of a student’s overall ability when there is variability across index scores on these tests [15]. Because the GAI does not incorporate Working Memory (WM) or Processing Speed (PS) subtest scores, it may provide clarity for some individuals who score lower on these areas but who show superior intelligence in problem-solving and conceptual thinking [15]. Variability in WM and PS subtest scores for twice-exceptional individuals occurs due to weaknesses in working memory and processing speed, which are characteristic of some disabilities, such as attentional disorders [15]. In these individuals, the GAI may be higher than the FSIQ and thus capture the “maximum potential of the child being assessed”, ref. [15] (p. 402).
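The toy sketch below makes the GAI/FSIQ distinction concrete. The index scores are hypothetical, and the simple averages shown are not the actual WISC-V norming procedure (real composites are derived from subtest scaled scores via proprietary conversion tables); the sketch only illustrates why a composite that drops Working Memory and Processing Speed can sit higher for a twice-exceptional-style profile.

```python
# A toy sketch (hypothetical scores; not the WISC-V norming procedure):
# why excluding Working Memory and Processing Speed can raise the summary
# of ability for a twice-exceptional-style profile.
indices = {
    "Verbal Comprehension": 135,
    "Visual Spatial": 130,
    "Fluid Reasoning": 132,
    "Working Memory": 95,    # depressed, e.g., by an attentional disorder
    "Processing Speed": 88,  # depressed
}

fsiq_like = sum(indices.values()) / len(indices)  # all five indices
gai_like = sum(v for k, v in indices.items()
               if k not in ("Working Memory", "Processing Speed")) / 3

print(f"FSIQ-like composite: {fsiq_like:.0f}")  # ~116
print(f"GAI-like composite:  {gai_like:.0f}")   # ~132
```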
However, IQ testing is imperfect [12,14], and extensive cautions need to be observed over the appropriateness, use, and application of IQ assessment instruments. Current, expanded understandings of human intelligence have moved away from fixed notions of intelligence (predetermined by genetics) as measured by IQ tests (e.g., tests of knowledge base, abstract thinking, and mental processing speed) (see also Dai and Sternberg [16], Renzulli [17]). Additionally, there is much more to giftedness than just intelligence; it is well-recognized that intelligence tests measure a very narrow set of psychometric skills and should not be used as the only, or even the main, way of assessing giftedness [18]. The Flynn effect [12] (or secular rise in IQ scores) refers to the increase in IQ scores over time—approximately 3 points every 10 years. The Flynn effect has shown that intelligence, as measured by IQ tests, is changeable. The causes of this change are unknown; however, speculation points to elements such as schooling, test familiarity, complex and stimulating modern environments, and improved nutrition (at least in developed nations) [12].
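A small worked example (illustrative numbers, not from the cited sources) shows why the Flynn effect matters for cut-score-based identification: at roughly 3 points per decade, norms that are 15 years old inflate scores by about 4–5 points, which is enough to move a hypothetical student across a nominal cut-off of 120.

```python
# A minimal sketch, assuming the ~3-points-per-decade rate cited in the text.
FLYNN_RATE = 3 / 10  # points per year

def inflation(years_since_norming: float) -> float:
    """Approximate score inflation on out-of-date norms."""
    return FLYNN_RATE * years_since_norming

measured = 120  # hypothetical score on 15-year-old norms
adjusted = measured - inflation(15)
print(f"Adjusted for Flynn effect: ~{adjusted:.0f}")  # ~116, below a 120 cut-off
```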
IQ testing can be culturally biased with respect to individuals from different cultural backgrounds, students with disabilities, students with English as an additional language and/or dialect, and students from low socio-economic backgrounds [3,19,20]. Furthermore, as Sternberg [18] observed, “the heritability of intelligence varies by social class” (p. 7). Despite these limitations, psychometric assessment remains well-recognized and highly validated for identifying and assessing giftedness as potential [21,22].
It is considerably easier for Australian schools to identify academic talent than intellectual giftedness [23], due to the tangibility of achievement evidenced in school assessment results (e.g., exams, assignments) and standardised assessments (e.g., NAPLAN), in contrast to the much more intangible nature of giftedness as potential. However, if educational jurisdictions—and subsequently schools—state they have processes for identifying giftedness that only identify talent (i.e., achievement), then there is a considerable disparity between understandings of Gagné’s model, the conceptualizations of giftedness and talent, and the practices associated with, and purportedly based on, this model. Identification methods and conceptual definitions of giftedness need to have adequate specificity and internal consistency connecting them with operational definitions [24]. However, as McBee and Makel [24] argue, it is not that straightforward; “quantitative or psychometric analysis [emphasis in original] must accompany quantitative or psychometric arguments [emphasis in original] when conceptual or theoretical ideas about giftedness are being considered” (pp. 1–2). Though this discussion is beyond the scope of the current article, it is worthy of deliberation.

4. The Australian National Assessment Program–Literacy and Numeracy (NAPLAN)

In order to make the case against using NAPLAN as an identification measure for giftedness, it is necessary to first provide an outline of what NAPLAN is. This section briefly explains the four tests that comprise the annual NAPLAN assessments: (1) writing test; (2) reading test; (3) conventions of language test; and (4) numeracy test.
The Australian National Assessment Program–Literacy and Numeracy (NAPLAN) tests are administered annually in March for students in Grades 3, 5, 7, and 9 (prior to 2023, NAPLAN was held in May). Australia is in the Southern Hemisphere, so the school year begins towards the end of January (after the annual summer break) and ends in early December (prior to the annual summer break); the NAPLAN tests therefore take place approximately two months into the new school year. The assessments test students’ writing, reading, conventions of language, and numeracy skills in timed tests conducted over three days [25]. The tests were first implemented in 2008 under the responsibility of the Australian Curriculum, Assessment and Reporting Authority (ACARA), which was established in the same year to develop the Australian National Curriculum. Each of these tests is outlined below.
The NAPLAN writing test examines students’ knowledge and skills in either imaginative writing, informative writing, or persuasive writing, with all students receiving the same genre (text type) for the test irrespective of schooling year level. Students are given a writing stimulus or prompt, and write a response in the required genre. There is no choice of text type, and students and teachers are not aware of what the genre will be until the test [26].
The NAPLAN reading test measures each student’s literacy proficiency in reading and comprehending written English texts, and their knowledge and interpretation of language conventions [26]. The test consists of a range of texts with different writing styles; students must read the texts and answer related multiple-choice questions.
The NAPLAN conventions of language test assesses students’ spelling, grammar, and punctuation. The focus of this test is on students’ use and knowledge of written standard Australian English, with multiple-choice, text-entry, and drag-and-drop-type responses in the online version of the test [26].
The NAPLAN numeracy test measures students’ achievement in numeracy, including their mathematical knowledge, skills and understanding, fluency, problem-solving, and reasoning across number and algebra, measurement and geometry, and statistics and probability [26]. In Grade 7 and Grade 9, there are two sections in the NAPLAN numeracy test: a short non-calculator section for students to demonstrate arithmetical calculation skills, and a second section where calculators are allowed [26].
Standardization of the annual NAPLAN test is said to enable comparisons of students in a given year level with other years [27]. As a standardised achievement test, NAPLAN provides an annual point-in-time measure of Australian school students’ achievement in the aforementioned areas of literacy and numeracy. This snapshot view can only “provide vignettes of student achievement rather than a detailed portfolio of learning progress over time”, ref. [28] (p. 10), which means results provide limited information about student learning and achievement in those specific areas at that one point in time.
The NAPLAN assesses acquired knowledge and skills—literacy proficiency in specific areas of reading and writing, knowledge and interpretation of language conventions (spelling, grammar and punctuation), and numeracy achievement in specific areas. Achievement in NAPLAN testing is based on what learning students have been able to access to date, and what they have understood and can convey during the testing.
Annual reporting of NAPLAN results is aimed at ensuring that there is a national understanding of student achievement in literacy and numeracy, and how each state’s and territory’s schools are performing [29]. Results from NAPLAN testing show what level students are at in comparison with other students and schools, and nationally across state and territory schools. Without nationally comparable data on how students are performing, there would be limited information about student achievement in the areas of literacy and numeracy that are assessed by NAPLAN [29].
The NAPLAN results were originally intended to provide data to support teaching and learning in Australian schools, where students and parents were to “discuss progress and compare performance against national peers”, ref. [30] (p. 1). The intention was also that individual schools could map their students’ progress, identify strengths and weaknesses in teaching programs, and set goals in these areas for their school. A core aim of NAPLAN was to “help teachers to challenge high performers and identify students needing support”, for the benefit of “school systems and governments” where valuable data would be used “to support good teaching and learning, and school improvement”, ref. [30] (p. 1). The original premise for implementing NAPLAN was based on the idea of supporting “all children to gain ‘a world class’ education”, ref. [31] (p. 392). The subsequent use of NAPLAN results fell very short of these commendable intentions, and the tests came under immense public scrutiny and criticism.
Indubitably, like any standardised test, NAPLAN has its limitations, which have been extensively explored and, indeed, criticized by educators and researchers since its inception (see for example, Johnston [29], Rose et al. [32]). Early criticisms of NAPLAN suggested it was disconnected from the curriculum. This was addressed in 2016 when NAPLAN assessments were mapped against the Australian Curriculum in English and Mathematics to “align the test questions and constructs to the Australian Curriculum…and to reflect the dual delivery mode of NAPLAN, online and paper”, ref. [33] (para. 2).
As Lingard et al. [34] noted, the widespread criticisms of the tests included the many unintended consequences of NAPLAN testing, which in some respects may actually reduce students’ achievement in both literacy and numeracy due to the narrow knowledge and skill foci of the tests. One of the main criticisms is that many important aspects of learning are not measured by NAPLAN, meaning that “what counts the most cannot be counted”, ref. [29] (p. 26). These criticisms are often played out annually in the media at NAPLAN testing and reporting times, and include critiques of the ways the data are used (e.g., school comparison league tables), that the tests narrow the curriculum focus to specific knowledge and skills that will be assessed, teaching to the test (e.g., teaching only the requisite skills and knowledge assessed by the tests), declines in students’ intrinsic motivation, inability to adequately use the data to address student needs, and increased stress for both students and teachers [34]. There is also some evidence that more attention is provided in class to students who are thought to be able to achieve better results (when compared with their previous NAPLAN results), and consequently high and low achieving students may miss out on additional support from teachers [35]. Evidence also suggests that the results from the testing are not readily available in a timely fashion, so the data are not as useable as they could be in terms of aiming to improve teaching and learning (as results are released towards the end of the school year) [36]. However, this is changing from 2023, with results expected to be available by July each year.
Criticisms have also arisen over the inappropriate use of NAPLAN results (see for example, Wu and Hornsby [37]), which are regularly aired in the media—in particular, the use of controversial so-called league tables on the federal government’s MySchool website. The league tables compare NAPLAN results of diverse state and territory government schools, private schools, Catholic schools, and independent schools against each other. This practice has made NAPLAN a particularly high-stakes test for many teachers, schools, and some students and parents [32,38]. League tables still exist; however, schools are now compared with supposedly more ‘like schools’ in terms of similar socio-economic profile; whether this is any better or not, only time and data use will tell.

5. Australian School Processes for Identifying Giftedness

A review of Australian education jurisdiction websites suggests an array of assessment practices used by schools to identify giftedness and talent (Table 1), such as parent nominations, psychometric assessments, teacher checklists, schoolwork, school reports, and standardised achievement tests, such as NAPLAN. For this review, data were collected from the eight state and territory jurisdiction websites based on their gifted education policy and practices for identifying gifted and talented students. The data collection process consisted of a web search for each education jurisdiction, based on search terms like “Australian Capital Territory education gifted and talented”, and then locating each respective state’s or territory’s education department policy, and/or advice to schools about suitable instruments and methods for the identification of these students. The sources of these data and results are presented in Table 1 under the Source/s column.
Four out of the eight states and territories specifically mention NAPLAN as an identification tool, while others imply NAPLAN could be used as an achievement (talent) assessment (e.g., achievement tests).
Interestingly, two of the four states and territories specifically distinguish NAPLAN as an achievement test, and/or list NAPLAN under talent (high performance) assessments, recognising the distinction between giftedness and talent evident in Gagné’s model. It is heartening to see from the findings presented in Table 1 that most states and territories suggest using data from multiple sources in identifying giftedness, including both objective and subjective measures (i.e., comprehensive identification). However, whether these comprehensive identification practices filter down from policy to school practices is a question for another day.
Comprehensive identification practices refer to the use of multiple measures to identify giftedness and/or talent, with the expectation that appropriate educational support will follow identification. These practices should be accessible, equitable, and comprehensive, to make sure identification mechanisms are as broad as possible and “triangulate information from multiple sources”, ref. [52] (p. 113). Comprehensive assessment includes “norm-based, psychometrically sound, comprehensive intelligence and [individual] achievement tests and measures in all areas of suspected strengths” [53] (p. 113), and is particularly useful for identifying twice-exceptional students (gifted or talented students with disabilities). A comprehensive assessment usually includes a psychometric assessment (e.g., the WISC-V) and a range of other individually administered assessments of achievement (e.g., the Wide Range Assessment of Memory and Learning–WRAML, and the Wechsler Individual Achievement Test–WIAT) [54].
Foley Nicpon et al. [55] state that comprehensive individualized identification practices should employ “an intra-individual, rather than inter-individual approach towards ability and achievement” (p. 7) (i.e., drawing on an individual’s own results), especially for twice-exceptional students. The important point here lies with the intra-individual approach to identification, unlike NAPLAN, which predominantly focuses on inter-individual approaches (i.e., comparison of results between different students and different educational contexts).

6. Discussion

The use of NAPLAN as an identification tool for giftedness is commonly evident (or implied) across Australian educational jurisdictions. In the gifted education context, the main problem is in using NAPLAN results to identify giftedness: NAPLAN is an achievement test—at best identifying some narrow aspects of academic talent—rather than an assessment of potential (i.e., giftedness). The fallacy of using NAPLAN data for identifying giftedness will be delineated in this section, and the key points are summarized in Figure 1.

6.1. The Fallacy of Using NAPLAN Data to Identify Giftedness

There is evidence that Australian educational jurisdictions are advocating for the use of NAPLAN results for identifying gifted students as well as talented students, although there is some evidence at this system level of a distinction between giftedness as potential and talent as achievement (see Table 1). Nevertheless, there is evidence to suggest that NAPLAN results at the individual student level are being used for identifying giftedness to drive selection of students for gifted extension programs and enrichment programs, and also for entry into selective schools and private schools (see Table 1). NAPLAN predominantly focuses on inter-individual assessment approaches—school, state, and national comparisons—unless achievement across an individual student’s NAPLAN results over successive year levels is accessible (i.e., comparison of an individual’s results to prior NAPLAN achievement across Grades 3, 5, 7, and 9).
It is evident from school websites that some schools are using NAPLAN data as part of ‘general’ entry requirements (which seems particularly prevalent in private and independent schools), and for entry into selective schools (government schools that accept students based on academic achievement) [56,57]. Some Australian schools explicitly state on their websites that entry into gifted programs and enrichment classes requires NAPLAN results, often along with some other measures of achievement, such as results from an entry exam [56,57]. Furthermore, ACARA recognizes this in their advice to parents, stating that “Some schools may ask for NAPLAN reports…as part of their admissions process. NAPLAN assessments are not designed to be a school admission test”, ref. [58] (p. 2).
As a standardised achievement test, NAPLAN relies heavily on taught and acquired knowledge and skills, meaning it is also unlikely to identify underachieving talented students [59]. Indeed, the majority of gifted student participants (5 out of 6) in Haines’s [60] study showed below-average school results in NAPLAN across literacy and numeracy, results that were potentially impacted by disabilities (e.g., learning disabilities). These findings present further evidence of the problems of relying on NAPLAN data to identify giftedness or talent. Furthermore, it is well-recognized that Australian students underachieve in both NAPLAN and the OECD Programme for International Student Assessment (PISA) testing [61]. One of the problems with underachievement is that these students will not reach talent-level competencies [3], so if NAPLAN and other achievement measures are being used for identification, these students will inevitably be missed for talent development programs. There is, therefore, a real concern about using NAPLAN for the identification of students who are underachieving or at risk of underachieving, and for identifying students from traditionally underserved populations (e.g., low socio-economic backgrounds), as either gifted or talented. Indeed, Goss and Sonnemann [61] found that “bright students from poor backgrounds make less progress in total (5 years 10 months) than low achievers with highly educated parents (6 years 6 months) between Year 3 [Grade 3] and Year 9 [Grade 9]” (p. 28); although they did not define what was meant by ‘bright’ students, the inference is about potential, or giftedness.
Moreover, the national minimum standards (NMS) for NAPLAN are set very low. For example, a student in Grade 9 “can meet the NMS even if they are performing below the typical Year 5 [Grade 5] student. They can be a stunning four years behind their peers”, ref. [61] (p. 2), yet appear to be meeting the NMS. This has immense implications for using NAPLAN as a gifted or talented identification instrument when comparing students and student achievement on the tests (inter-individual, school-wide, and national comparisons). ‘Bright’ students in disadvantaged schools show the biggest learning gap, with “high achievers in disadvantaged schools make[ing] less [emphasis in original] progress than low achievers in high advantage schools over the six years”, ref. [61] (p. 2). Using NAPLAN for identification thus may even further disadvantage already disadvantaged ‘bright’ students from low socio-economic backgrounds.
Likewise, the restricted curriculum assessed in NAPLAN (e.g., writing persuasive or narrative text types) presents a potentially serious risk in that the curriculum, and subsequent teaching (i.e., teaching to the test), is restricted to topics and concepts that are liable to be assessed in NAPLAN tests [62]. The implication is that gifted and talented students are not being extended by school curricula, as teaching is unlikely to focus on higher-order concepts (e.g., mathematical goals and outcomes). The NAPLAN writing test tends to rely on the narrowness of formulaic writing to address the test structure [63], stifling creativity in both the writing process and the teaching of writing, which has “subsumed the development of [students’] imaginative capacity”, ref. [64] (p. 33). This observation adds further weight to the fallacy of using NAPLAN in identifying giftedness, because identification practices should be aligned with the characteristics and domains of giftedness (i.e., Gagné’s aptitudes), and with the characteristics and fields of talents (i.e., Gagné’s competencies in specific fields of human endeavor). If identification practices are not thus aligned, then it is unlikely that giftedness and/or talent can be identified (according to Gagné’s definitions).
Moreover, NAPLAN tests are reported to have a large margin of error; that is, a large variability in a student’s test results compared to that individual taking similar tests [65]. Reportedly, results could be around 12% higher or lower at the individual student level; with the standard error of measurement (an estimate of how repeated measures of an individual’s skills on the same test tend to be distributed around the person’s ‘true’ score) reported as 2.6, individual results may vary by as much as ±5.2, or two standard errors [66]. Additionally, the true value has been reported within only a 90% confidence interval [67], meaning that more caution is needed when using the results. These confidence intervals and margins of error are important reminders of some further limitations of NAPLAN data.
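As a brief illustration of how these figures relate (a sketch assuming normally distributed measurement error; the observed score of 520 is hypothetical, not from the cited sources), a standard error of measurement of 2.6 translates into confidence bands around a reported score as follows:

```python
# A minimal sketch, assuming normally distributed measurement error:
# how an SEM of 2.6 (the figure cited in the text) becomes a confidence
# interval around a reported score. The observed score is hypothetical.
from statistics import NormalDist

sem = 2.6          # standard error of measurement cited in the text
observed = 520.0   # hypothetical NAPLAN scale score

z90 = NormalDist().inv_cdf(0.95)  # ~1.645 for a two-sided 90% interval

low, high = observed - z90 * sem, observed + z90 * sem
print(f"90% CI for the 'true' score: {low:.1f} to {high:.1f}")  # ~515.7 to ~524.3
# Two standard errors (2 x 2.6 = 5.2) give the +/-5.2 band mentioned in
# the text, roughly a 95% confidence interval.
```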

6.2. Evidence of NAPLAN Use in Identification of Giftedness

Most importantly, when identifying giftedness and talent, the definition of giftedness and talent being used (and the operationalization of these definitions) needs to align with identification practices, assessment instruments, and the programming that schools provide (e.g., differentiated instruction) [53]. Thus, if educational jurisdictions and schools are using Gagné’s definitions, then NAPLAN is most unsuitable for identifying giftedness because it only assesses achievement (i.e., talent) in narrow areas of knowledge and skills. NAPLAN cannot identify aptitudes or talents, nor was it designed to do so. It may, however, identify narrow academic skills related to English (e.g., writing, reading, language conventions) and narrow academic skills related to the numeracy presented in the tests (e.g., specific areas of mathematical knowledge, algebraic reasoning, measurement).
Indeed, the Parliament of Victoria Education and Training Committee Inquiry (henceforth the Inquiry) into the education of gifted and talented students [68] found that using NAPLAN was a common practice among schools for identifying gifted students, with schools increasingly relying on data from NAPLAN results to identify “student potential” (p. 79). The Inquiry also found that there were “no systematic practices in place to identify gifted students in Victorian schools” (p. 79), a finding that likely has parallels in other states and territories.
The then Victorian Association for Gifted and Talented Children (VAGTC) Vice President, Mr. Michael Bond, commented to the Inquiry [68] that “up to 60 per cent of students will answer some of the more difficult questions on NAPLAN, so clearly this assessment has not been set up as an identification tool, nor was it designed to be that type of tool” (p. 85). The VAGTC also identified with “great concern that some schools exclusively use NAPLAN results to ‘identify’ students for extension programs” (p. 85), which was becoming an “increasingly significant problem” (p. 85), and arguably remains a significant issue. There is some evidence from the review of Australian school websites that what are often touted as school giftedness programs are in actuality programs for high achieving students, rather than programs for developing the talents of gifted students. This further problematizes conceptions of giftedness and talent at the school level.
Overall, the Victorian Inquiry [68] found that there was immense concern from many participants that schools placed a “heavy reliance” (p. 85) on NAPLAN results (as well as other achievement tests) to identify gifted students. This is particularly problematic because these tests provide little information about the characteristics of gifted and talented students, and they identify achievement rather than potential [68].
Preliminary results from a recent pilot study of a random sample of schools across three educational jurisdictions (two states and one territory) showed that most of the schools that detailed identification practices used NAPLAN results [66]. Less than half of the schools sampled mentioned any identification practices at all, with little to no information about the actual gifted identification practices being used; some school website content used nebulous terms, such as “objective measures” and/or “standardised assessments”, to describe how gifted students are identified, suggesting that NAPLAN may still be used in these schools for identification purposes [69]. While these results are not conclusive of the widespread use of NAPLAN results in gifted identification, they are suggestive of three main issues: (1) There is limited transparent and publicly accessible information about the identification practices that schools are using. (2) Where identification practices were specified on school websites and in documentation on those sites, there was evidence of the widespread use of NAPLAN results for the identification of giftedness. (3) A significant proportion of schools did not specify any identification practices on their websites, or within annual reports or other documentation available on their websites. There is a need for clarity and transparency about the decisions being made with regard to identifying and supporting the educational needs of these students. Identification is not an end in and of itself; it is undertaken to provide students with more targeted learning experiences through differentiation and personalization [70].

6.3. Comprehensive Identification Practices and the Potential Role of NAPLAN

NAPLAN may have some use in identifying intellectual (academic) talent when used as part of a comprehensive identification approach. Indeed, the Australian Capital Territory (ACT) was one educational jurisdiction that clearly distinguished between assessments of giftedness (as potential) and assessments of talent (as achievement); at least in the ACT, there is evidence to suggest a clear understanding of Gagné’s differentiation between giftedness and talent. Achievement assessments that the ACT suggested for identifying talent were the Test of Reading Comprehension (TORCH) [71] and the Progressive Achievement Tests (PAT) [72]. The TORCH can be used to identify a student’s level of reading comprehension, to measure their progress in reading, and to identify any skills needing further instruction; it is suitable for students in Grade 3 to Grade 10 [72]. This test can also be used to track a student’s progress over time, and is a useful intra-individual test. PAT assessments consist of a suite of tests covering mathematics achievement (PAT-M), reading comprehension and word knowledge (PAT-R), and writing, spelling, punctuation, and grammar (PAT-SPG) [72]. These tests can be used collectively or separately to assess individual students’ knowledge and achievement in order to monitor intra-individual progress over time [72].
The proposed transition to NAPLAN online testing may hold some promise for its use as part of comprehensive assessments for identifying talent. For example, tailored online testing could allow students to be tested on a range of texts, from short and simple to longer and more complex texts [26]. The more adaptive nature of these tests, which are reportedly tailored to an individual student’s responses [26], may have the capacity to increase the test ceiling. Perhaps the transition to NAPLAN online testing will offer some avenue for the use of NAPLAN as one tool (from a suite of many) for identifying academic talent (as exemplified by achievement). However, a potential issue with adaptive test types for gifted and talented students is that these students can answer easy test questions incorrectly, and harder, more challenging ones correctly (if given the opportunity to access harder questions). The adaptive test may not necessarily adapt upward: if the system perceives a student is answering easy questions incorrectly, it will likely present easier questions rather than harder ones. This will likely not give an accurate picture of where the student’s actual achievement levels lie, because they were never presented with harder questions during the testing to demonstrate their ability.
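A toy sketch of this failure mode follows (an illustration of a generic adaptive branching rule; it is not the NAPLAN online algorithm, whose details are not described here, and the student profile is hypothetical):

```python
# A toy adaptive rule (illustrative, not ACARA's algorithm): difficulty
# steps up after a correct answer and down after an incorrect one.
def run_adaptive_test(answers_by_difficulty, start=3, n_items=6):
    """answers_by_difficulty maps difficulty level (1-5) -> bool (correct?)."""
    difficulty, path = start, []
    for _ in range(n_items):
        correct = answers_by_difficulty[difficulty]
        path.append((difficulty, correct))
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
    return path

# Hypothetical gifted profile: careless on easy items, strong on hard ones.
profile = {1: False, 2: False, 3: False, 4: True, 5: True}
print(run_adaptive_test(profile))
# [(3, False), (2, False), (1, False), (1, False), (1, False), (1, False)]
# The test never reaches the level-4 and level-5 items this student would
# have answered correctly, so their achievement ceiling is never probed.
```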
Overall, the aforementioned issues mean that NAPLAN should not be used to identify giftedness, since giftedness is about potential, not achievement (using Gagné’s definitions). So why, then, are some schools using NAPLAN results in this way? It is conceivable that schools are increasingly relying on NAPLAN data to identify gifted students because they do not have timely and appropriate access to much-needed testing instruments, or to suitably qualified personnel to administer comprehensive assessments. Perhaps Australian schools do not have personnel with the time and capacity to undertake comprehensive identification practices. The answer may also lie in schools not being fully aware of the differences between Gagné’s conceptualizations of giftedness and talent in terms of how these apply to gifted programs, talent programs, and programs aimed at intellectually high achieving students; this is potentially a problem related to initial teacher education, educator in-service training, and ongoing teacher professional development.

7. Limitations

There are a number of limitations to findings discussed in this article. These will be outlined in this section.
The first limitation is that evidence of the use of NAPLAN in Australian government schools has been collected from outward-facing public websites; as such, there are limits to the available data in terms of actual in-school practices, and whether these follow the processes detailed on the jurisdictional websites.
The second limitation relates to the evidence being gathered from disparate contexts (i.e., the Inquiry and school websites), meaning that the apparently widespread use of NAPLAN for identifying gifted students across Australian educational jurisdictions is implied rather than directly documented. Furthermore, there is a lack of detailed, readily available data about identification practices, despite continued reports of school-level use of NAPLAN results to identify gifted students.
Nevertheless, there are some preliminary data suggesting that NAPLAN is being used as an instrument to identify giftedness, at least in some schools. This supports the finding of the Victorian Inquiry into the education of gifted and talented students [68] that NAPLAN results may be customarily used in schools for identifying gifted students. What is not yet known is the specific number of schools engaging in this practice; future research is needed to gauge this.

8. Recommendations for Future Research

For future research, it is recommended that in-depth data be gathered about the actual school use of NAPLAN data in identifying talented students. To this end, there are several avenues for further research in terms of the role that NAPLAN may or may not play as part of a comprehensive identification approach in the Australian context.
First, there is a need to interrogate the potential of NAPLAN to inform an intra-individual factor as part of comprehensive practices for identifying talent; that is, to understand better how the results for individual students could be tracked from Grade 3 to Grade 9 testing, and then how these may be applied to inform identification practices for these students. This could then inform talent development programming, specifically aimed at tailoring learning to individual student needs.
Second, NAPLAN online tailored testing may conceivably offer some prospects for seeking out talented students. Future research may focus on this potentially higher-ceiling test, and how useful it could be for identifying academic talent. This would assist in addressing the learning needs of some of Australia’s academically talented students so they are in a better position to fulfil their potential (whatever that may be).
Third, future research could review a sample of schools in different education jurisdictions across Australia to understand the extent to which, and how, schools are using NAPLAN results for supporting talent development. This could inform an action agenda to provide a more specific evidence base for any future application of NAPLAN results to talent development.

9. Conclusions

The use of NAPLAN results by some Australian schools for identifying giftedness is particularly problematic, and concern has been expressed about the substantial dependence schools currently place on achievement test results, such as NAPLAN, for identifying gifted students. Moreover, the focus on acquired knowledge in NAPLAN testing is likely to miss some gifted students, underachieving (talented) students, twice-exceptional students, and potentially students from diverse cultural and socio-economic backgrounds.
As suggested by Gagné’s [3] conceptualization of intellectual giftedness, evidence of giftedness gathered through achievement tests will be limited (or may be non-existent), because giftedness is not evidenced through achievement but rather through potential [3]. It is apparent that gifted and talented identification practices need to be aligned with individual education jurisdiction and school definitions, conceptualizations, and practices of gifted and talented education, rather than conflating giftedness and talent with achievement. There is nothing inherently wrong with the intentions of the NAPLAN test, or standardised testing per se; it is definitely needed. Indeed, NAPLAN may be appropriate as part of a holistic, comprehensive talent identification process, but emphasis should not be placed on the test results to identify giftedness (or even talent, for that matter). The main problems lie in the way the data are being used, and misused, especially for identifying giftedness. Ultimately, as an achievement test, NAPLAN can only identify achievement in the restricted areas it assesses, rather than giftedness as potential.
In summary, NAPLAN assessments are not designed to be a gifted or talented identification tool, nor are they designed to be an admission test for schools, gifted programs, or extension programs. When used in isolation, or in ways not intended (e.g., as an identification tool for giftedness), NAPLAN results cannot provide a comprehensive view of a student’s learning or potential. NAPLAN should definitely not be used as a primary gifted identification instrument; it clearly is not a tool for finding gifted students. What NAPLAN results can potentially contribute is another piece of the jigsaw puzzle in relation to a student’s academic achievements and competencies as talent.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The author would like to acknowledge the generous support of the Arts, Education and Law group at Griffith University, in the awarding of an Academic Equity Development Program in support of this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gagné, F. Differentiating Giftedness from Talent: The DMGT Perspective on Talent Development; Routledge: London, UK, 2021. [Google Scholar]
  2. Ronksley-Pavia, M. A model of twice-exceptionality: Explaining and defining the apparent paradoxical combination of disability and giftedness in childhood. J. Educ. Gift. 2015, 38, 318–340. [Google Scholar] [CrossRef]
  3. Gagné, F. Implementing the DMGT’s constructs of giftedness and talent: What, why and how? In Handbook of Giftedness and Talent Development in the Asia-Pacific; Smith, S.R., Ed.; Springer: Singapore, 2019; pp. 71–99. [Google Scholar]
  4. Ericsson, K.A. Training history, deliberate practice and elite sports performance: An analysis in response to Tucker and Collins review—What makes champions? Br. J. Sports Med. 2013, 47, 533–535. [Google Scholar] [CrossRef] [PubMed]
  5. Callahan, C.M. The characteristics of gifted and talented students. In Fundamentals of Gifted Education: Considering Multiple Perspectives, 2nd ed.; Callahan, C.M., Hertberg-Davis, H.L., Eds.; Routledge: New York, NY, USA, 2018; pp. 153–166. [Google Scholar]
  6. Gagné, F. The DMGT: The Core of My Professional Career in Talent Development. 2020. Available online: https://gagnefrancoys.wixsite.com/dmgt-mddt/the-dmgt-in-english (accessed on 7 August 2022).
  7. NAGC. Position Statement—Redefining Giftedness for a New Century: Shifting the Paradigm. 2010. Available online: http://www.nagc.org.442elmp01.blackmesh.com/sites/default/files/Position%20Statement/Redefining%20Giftedness%20for%20a%20New%20Century.pdf (accessed on 2 March 2022).
  8. Peeters, B. Thou shalt not be a tall poppy: Describing an Australian communicative (and behavioral) norm. Intercult. Pragmat. 2004, 1, 71–92. [Google Scholar] [CrossRef]
  9. Ronksley-Pavia, M.; Grootenboer, P.; Pendergast, D. Privileging the voices of twice-exceptional children: An Exploration of lived experiences and stigma narratives. J. Educ. Gift. 2019, 42, 4–34. [Google Scholar] [CrossRef]
  10. Pfeiffer, S.I. Essentials of Gifted Assessment, 1st ed.; Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  11. McBee, M.T.; Peters, S.J.; Miller, E.M. The impact of the nomination stage on gifted program identification: A comprehensive psychometric analysis. Gift. Child Q. 2016, 60, 258–278. [Google Scholar] [CrossRef]
  12. Flynn, J.R. Are We Getting Smarter? Rising IQ in the Twenty-First Century; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  13. Gould, S.J. The Mismeasure of Man; W. W. Norton: New York, NY, USA, 1996. [Google Scholar]
  14. Murdoch, S. IQ: A Smart History of a Failed Idea; Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  15. Weiss, L.G.; Saklofske, D.H.; Holdnack, J.A. WISC-V: Clinical Use and Interpretation; Academic Press: London, UK, 2019. [Google Scholar]
  16. Dai, D.Y.; Sternberg, R.J. Motivation, Emotion, and Cognition: Integrative Perspectives on Intellectual Functioning and Development; Lawrence Erlbaum Associates Inc. Publishers: Mahwah, NJ, USA, 2004. [Google Scholar]
  17. Renzulli, J.S. Reexamining the role of gifted education and talent development for the 21st Century. Gift. Child Q. 2012, 56, 150–159. [Google Scholar] [CrossRef]
  18. Sternberg, R.J. Is gifted education on the right path? In The SAGE Handbook of Gifted and Talented Education; Wallace, B., Sisk, D.A., Senior, J., Eds.; SAGE: London, UK, 2019; pp. 5–18. [Google Scholar]
  19. McIntosh, D.E.; Dixon, F.A.; Pierson, E.E. Use of intelligence tests in the identification of giftedness. In Contemporary Intellectual Assessment: Theories, Tests, and Issues; Flanagan, D.P., McDonough, E.M., Eds.; The Guilford Press: New York, NY, USA, 2018; pp. 587–607. [Google Scholar]
  20. NAGC. Position Statement: Use of the WISC-V for Gifted and Twice Exceptional Identification; NAGC: Washington, DC, USA, 2018. [Google Scholar]
  21. Sternberg, R.J.; Davidson, J.E. Conceptions of Giftedness, 2nd ed.; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  22. Newman, T.M.; Sparrow, S.S.; Pfeiffer, S.I. The Use of the WISC-IV in assessment and intervention planning for children who are gifted. In WISC-IV Clinical Assessment and Intervention, 2nd ed.; Prifitera, A., Saklofske, H., Weiss, L.G., Eds.; Academic Press: San Diego, CA, USA, 2008; pp. 217–272. [Google Scholar]
  23. Merrotsy, P. Gagné’s Differentiated Model of Giftedness and Talent in Australian education. Australas. J. Gift. Educ. 2017, 26, 29–42. [Google Scholar] [CrossRef]
  24. McBee, M.T.; Makel, M.C. The quantitative implications of definitions of giftedness. AERA Open 2019, 5, 1–13. [Google Scholar] [CrossRef]
  25. Australian Curriculum, Assessment and Reporting Authority (ACARA). NAP National Assessment Program. 2022. Available online: https://www.nap.edu.au/naplan/key-dates (accessed on 3 March 2023).
  26. Australian Curriculum, Assessment and Reporting Authority (ACARA). NAPLAN What’s in the Tests? 2016. Available online: https://www.nap.edu.au/naplan/whats-in-the-tests (accessed on 3 March 2023).
  27. Marks, G.N. Are school-SES effects statistical artefacts? Evidence from longitudinal population data. Oxf. Rev. Educ. 2015, 41, 122–144. [Google Scholar] [CrossRef]
  28. Cumming, J.J.; Dickson, E. Educational accountability tests, social and legal inclusion approaches to discrimination for students with disability: A national case study from Australia. Assess Educ. 2013, 20, 221–239. [Google Scholar] [CrossRef]
  29. Johnston, J. Australian NAPLAN testing: In what ways is this a ‘wicked’ problem? Improv. Sch. 2017, 20, 18–34. [Google Scholar] [CrossRef]
  30. Australian Curriculum, Assessment and Reporting Authority (ACARA). NAPLAN: National Assessment Program—Literacy and Numeracy Infographic (V4-2). Available online: https://docs.acara.edu.au/resources/Acara_NAPLAN_Infographic.pdf (accessed on 3 March 2023).
  31. Lange, T.; Meaney, T. It’s just as well kids don’t vote: The positioning of children through public discourse around national testing. Math. Educ. Res. J. 2014, 26, 377–397. [Google Scholar] [CrossRef]
  32. Rose, J.; Low-Choy, S.; Singh, P.; Vasco, D. NAPLAN discourses: A systematic review after the first decade. Discourse Stud. Cult. Politics Educ. 2020, 41, 871–886. [Google Scholar] [CrossRef]
  33. Australian Curriculum, Assessment and Reporting Authority (ACARA). Connection to the Australian Curriculum. 2016. Available online: https://www.nap.edu.au/naplan/connection-to-the-australian-curriculum (accessed on 3 March 2023).
  34. Lingard, B.; Thompson, G.; Sellar, S. National testing from an Australian perspective. In National Testing in Schools: An Australian Assessment; Lingard, B., Thompson, G., Sellar, S., Eds.; Routledge: London, UK, 2016; pp. 3–17. [Google Scholar]
  35. Klenowski, V.; Wyatt-Smith, C. The impact of high stakes testing: The Australian story. Assess Educ. 2012, 19, 65–79. [Google Scholar] [CrossRef]
  36. Thompson, G.; Mockler, N. Principals of audit: Testing, data and ‘implicated advocacy’. J. Educ. Adm. Hist. 2016, 48, 1–18. [Google Scholar] [CrossRef]
  37. Wu, M.; Hornsby, D. Inappropriate uses of NAPLAN results. Pract. Prim. 2014, 19, 16–17. [Google Scholar]
  38. Lingard, B. Policy borrowing, policy learning: Testing times in Australian schooling. Crit. Stud. Educ. 2010, 51, 129–147. [Google Scholar] [CrossRef]
39. ACT Government. ACT Gifted and Talented Students Policy, Appendix B: Identification Instruments. 2021. Available online: https://www.education.act.gov.au/publications_and_policies/School-and-Corporate-Policies/access-and-equity/gifted-and-talented/gifted-and-talented-students-policy (accessed on 3 March 2023).
40. ACT Government. ACT Gifted and Talented Students Policy. 2021. Available online: https://www.education.act.gov.au/publications_and_policies/School-and-Corporate-Policies/access-and-equity/gifted-and-talented/gifted-and-talented-students-policy (accessed on 3 March 2023).
  41. NSW Government. High Potential and Gifted Education Policy. 2019. Available online: https://education.nsw.gov.au/policy-library/policies/pd-2004-0051 (accessed on 3 March 2023).
  42. NSW Government. Assess and Identify. 2019. Available online: https://education.nsw.gov.au/teaching-and-learning/high-potential-and-gifted-education/supporting-educators/assess-and-identify#Assessment1 (accessed on 3 March 2023).
  43. Department of Education, Northern Territory Government of Australia. Gifted and Talented Students (G&T). 2022. Available online: https://education.nt.gov.au/support-for-teachers/student-diversity/gifted-and-talented-students (accessed on 3 March 2023).
  44. Department of Education, Queensland Government. Gifted and Talented Education. 2018. Available online: https://education.qld.gov.au/parents-and-carers/school-information/life-at-school/gifted-and-talented-education#:~:text=All%20Queensland%20state%20schools%20are,teachers%20in%20their%20local%20area (accessed on 3 March 2023).
  45. Department of Education, Queensland Government. P-12 Curriculum, Assessment and Reporting Framework (CARF) (Revised February 2022). 2022. Available online: https://education.qld.gov.au/curriculums/Documents/p-12-curriculum-assessment-reporting-framework.pdf (accessed on 3 March 2023).
  46. Department of Education, Government of South Australia. Student Support Programs—Gifted and Talented Education. 2020. Available online: https://www.sa.gov.au/topics/education-and-learning/curriculum-and-learning/student-support-programs (accessed on 3 March 2023).
  47. Department of Education, Tasmanian Government. Extended Learning for Gifted Students Procedure (Version 1.1—9/03/2022). 2022. Available online: https://publicdocumentcentre.education.tas.gov.au/library/Document%20Centre/Extended-Learning-for-Gifted-Students-Procedure.pdf (accessed on 3 March 2023).
  48. Department of Education, State Government of Victoria. Whole School Approach to High Ability. 2019. Available online: https://www.education.vic.gov.au/school/teachers/teachingresources/high-ability-toolkit/Pages/whole-school-approach-to-high-ability.aspx (accessed on 3 March 2023).
  49. Department of Education, State Government of Victoria. Identifying High-Ability. 2022. Available online: https://www.education.vic.gov.au/school/teachers/teachingresources/high-ability-toolkit/Pages/identifying-high-potential.aspx (accessed on 3 March 2023).
  50. Department of Education, Government of Western Australia. Gifted and Talented in Public Schools. 2018. Available online: https://www.education.wa.edu.au/dl/nl1dmpd (accessed on 3 March 2023).
  51. School Curriculum and Standards Authority, Government of Western Australia. Guidelines for the Acceleration of Students Pre-primary–Grade 10. 2020. Available online: https://k10outline.scsa.wa.edu.au/__data/assets/pdf_file/0010/637768/Guidelines_for_the_acceleration_of_students_Pre-primary_to_Grade_10.PDF (accessed on 3 March 2023).
52. Bracken, B.A.; Brown, E.F. Behavioral identification and assessment of gifted and talented students. J. Psychoeduc. Assess. 2006, 24, 112–122. [Google Scholar] [CrossRef]
  53. Corwith, S.; Johnsen, S.; Cotabish, A.; Dailey, D.; Guilbault, K. Pre-K-Grade 12 Gifted Programming Standards. National Association for Gifted Children. 2019. Available online: http://nagc.org.442elmp01.blackmesh.com/resources-publications/resources/national-standards-gifted-and-talented-education/pre-k-grade-12 (accessed on 6 March 2023).
54. Ronksley-Pavia, M.; Ronksley-Pavia, S. The role of primary health care providers in supporting a gifted child. Aust. J. Gen. Pract., forthcoming.
  55. Nicpon, M.F.; Allmon, A.; Sieck, B.; Stinson, R.D. Empirical investigation of twice-exceptionality: Where have we been and where are we going? Gift. Child Q. 2011, 55, 3–17. [Google Scholar] [CrossRef]
  56. Latifi, A. NAPLAN Results Being Used for Competitive Entry Programs. Illawarra Mercury. 30 April 2019. Available online: https://www.illawarramercury.com.au/story/6095472/naplan-results-being-used-for-competitive-entry-programs/ (accessed on 6 March 2023).
  57. Long, L.C.; Barnett, K.; Rogers, K.B. Exploring the relationship between principal, policy, and gifted program scope and quality. J. Educ. Gift. 2015, 38, 118–140. [Google Scholar] [CrossRef]
58. Australian Curriculum, Assessment and Reporting Authority (ACARA). NAPLAN Frequently Asked Questions—Individual Student Reports. 2022. Available online: https://nap.edu.au/docs/default-source/default-document-library/faq-individual-student-report.pdf (accessed on 20 March 2023).
  59. Jackson, R.L.; Jung, J.Y. The identification of gifted underachievement: Validity evidence for the commonly used methods. Br. J. Educ. Psychol. 2022, 92, 1133–1159. [Google Scholar] [CrossRef] [PubMed]
  60. Haines, M.E. Opening the Doors of Possibility for Gifted/High-Ability Children with Learning Difficulties: Preliminary Assessment Strategies for Primary School Teachers; University of New England: Armidale, Australia, 2017. [Google Scholar]
  61. Goss, P.; Sonnemann, J. Widening Gaps: What NAPLAN Tells Us about Student Progress. 2016. Available online: http://www.grattan.edu.au/ (accessed on 21 August 2020).
  62. Dharmadasa, K.; Nakos, A.; Bament, J.; Edwards, A.; Reeves, H. Beyond NAPLAN testing: Nurturing mathematical talent. Aust. Math. Teach. 2014, 70, 22–27. Available online: https://files.eric.ed.gov/fulltext/EJ1093267.pdf (accessed on 21 August 2020).
63. McGaw, B.; Louden, W.; Wyatt-Smith, C. NAPLAN Review Interim Report; NAPLAN: Sydney, Australia, 2020. [Google Scholar]
  64. Carey, M.D.; Davidow, S.; Williams, P. Re-imagining narrative writing and assessment: A post-NAPLAN craft-based rubric for creative writing. Aust. J. Lang. Lit. 2022, 45, 33–48. [Google Scholar] [CrossRef]
  65. Wu, M. Inadequacies of NAPLAN Results for Measuring School Performance. Available online: https://www.aph.gov.au/DocumentStore.ashx?id=dab4b1dc-d4a7-47a6-bfc8-77c89c5e9f74 (accessed on 11 April 2023).
  66. Watson, K.; Handal, B.; Maher, M. NAPLAN Data Is not Comparable across School Years. The Conversation. 2016. Available online: https://theconversation.com/naplan-data-is-not-comparable-across-school-years-63703 (accessed on 11 April 2023).
  67. Australian Curriculum, Assessment and Reporting Authority (ACARA). My School Fact Sheet: Guide to Understanding Gain. 2015. Available online: https://docs.acara.edu.au/resources/Guide_to_understanding_gain.pdf (accessed on 11 April 2023).
  68. Parliament of Victoria, Education and Training Committee. Inquiry into the Education of Gifted and Talented Students; Parliamentary Paper No. 108, Session 2010–2012. 2012. Available online: https://www.parliament.vic.gov.au/images/stories/committees/etc/Past_Inquiries/EGTS_Inquiry/Final_Report/Gifted_and_Talented_Final_Report.pdf (accessed on 6 March 2023).
  69. Ronksley-Pavia, M. Gifted Identification Practices across a Random Sample of Australian Schools: A Pilot Project. Working Paper. 2023. [Google Scholar]
  70. Ronksley-Pavia, M. Personalised learning: Disability and gifted learners. In Teaching Primary Years: Rethinking Curriculum, Pedagogy and Assessment; Pendergast, D., Main, K., Eds.; Allen & Unwin: Crows Nest, Australia, 2019; pp. 422–442. [Google Scholar]
  71. ACER. Tests of Reading Comprehension (TORCH). 2013. Available online: https://shop.acer.org/tests-of-reading-comprehension-torch-third-edition.html (accessed on 6 March 2023).
  72. ACER. Progressive Achievement Tests (PAT). 2022. Available online: https://www.acer.org/au/pat (accessed on 6 March 2023).
Figure 1. Key reasons why NAPLAN is unsuitable for identifying giftedness.
Table 1. An overview of gifted and talented identification practices across Australian states and territories from website searches.
Australian Capital Territory (ACT)
Specific gifted education policy: Yes
Identification notations: Using data from multiple subjective and objective assessment measures of ability and achievement to identify potentially gifted and talented students.
Identification practices/assessment types listed: Parent nomination checklists; teacher nomination checklists; external psychometric testing; school-based abilities testing; standardised achievement tests; parent observations; teacher observations; school work/reports.
Examples of assessment instruments listed: Qualitative: cognitive and affective rating scales; student work and assessments; interviews. Quantitative: WISC-V; SB-5; Raven's; Naglieri; PAT; TORCH; NAPLAN. Acceleration assessments: Iowa Acceleration Scale (IAS); Renzulli Scales. Creativity tests: Remote Association Task; Khatena-Torrance. Tests for artistic ability and talent: Clark's Drawing Abilities; Barron-Welsh Art Scale [39].
Source/s: ACT Gifted and Talented Students Policy [40]; Appendix B: Identification Instruments [39].

New South Wales (NSW)
Specific gifted education policy: Yes
Identification notations: Objective, valid and reliable measures, as part of formative assessment, should be used to assess high potential and gifted students and identify their specific learning needs [41].
Identification practices/assessment types listed: Ability tests; achievement tests; adaptive tests; rating scales; performance-based assessments; dynamic assessments; growth modelling assessments.
Examples of assessment instruments listed: None listed.
Source/s: High Potential and Gifted Education Policy [41]; Assess and Identify [42].

Northern Territory (NT)
Specific gifted education policy: Yes
Identification notations: The department uses data and evidence to identify intellectual giftedness and/or academic talents by using both qualitative and quantitative identification tools [43].
Identification practices/assessment types listed: Gifts (high potential): rating scales; checklists; nominations; standardised cognitive assessments. Talents (high performance): NAPLAN; student achievement data/school reports; portfolios of student work; parent/teacher nomination.
Examples of assessment instruments listed: None listed.
Source/s: Gifted and Talented Students (G&T) [43].

Queensland (Qld)
Specific gifted education policy: No *
Identification notations: All Queensland state schools are committed to meeting learning needs of students who are gifted…The Department of Education has many awards, programs and initiatives to recognise students who demonstrate outstanding talents and show potential in academic and extracurricular activities [44].
Identification practices/assessment types listed: None listed (no specific gifted and talented education policy), although the P-12 CARF suggests the use of "school wide processes to identify groups and individuals who require tailored support" [45].
Examples of assessment instruments listed: None listed (no specific gifted and talented education policy).
Source/s: Gifted and Talented Education [44]; P-12 Curriculum, Assessment and Reporting Framework (CARF) [45].

South Australia (SA)
Specific gifted education policy: No *
Identification notations: Government schools and preschools have programs for gifted and talented children as part of the standard curriculum. A number of schools offer specialised courses and programs for students with a special interest, who are well ahead of their peers, or who are demonstrating talent in a particular area.
Identification practices/assessment types listed: None listed.
Examples of assessment instruments listed: None listed.
Source/s: Student Support Programs—Gifted and Talented Education [46].

Tasmania (Tas)
Specific gifted education policy: Yes
Identification notations: Implement processes to identify and make appropriate provision for gifted students in their school, including acceleration procedures and early entry to kindergarten [47].
Identification practices/assessment types listed: None listed.
Examples of assessment instruments listed: Early entry to school: WPPSI-IV test required; none listed for other year levels.
Source/s: Extended Learning for Gifted Students Procedure, Version 1.1 [47].

Victoria (Vic)
Specific gifted education policy: No *^ (has a 'high-ability toolkit' and related webpages)
Identification notations: Identification should begin as early as possible; be flexible and continuous; utilise many measures; highlight indicators of underachievement; and be appropriate to age and stage of schooling.
Identification practices/assessment types listed: Measures: response to classroom activities; self-nomination; peer nomination; teacher nomination; parent nomination; competition results; above-level tests; standardised tests of creative ability; standardised cognitive assessments (IQ tests); observations and anecdotes; checklists of traits; interviews (child or parent); academic grades. Assessment data (formative and summative): classroom-based assessment samples (e.g., tests, assignments); standardised achievement assessments (e.g., NAPLAN or the Progressive Achievement Tests—Reading/Mathematics); teacher observations and/or other qualitative information; projects or portfolios; past assessment results (e.g., curriculum levels from the previous year); above-level tests.
Examples of assessment instruments listed: NAPLAN; Progressive Achievement Tests—Reading/Mathematics; Silverman's checklist and exemplar; Merrick's checklist and exemplar; Frasier's TABs and exemplar; assessment audit template.
Source/s: Whole School Approach to High Ability [48]; Identifying High-Ability [49]; High Ability Toolkit [48].

Western Australia (WA)
Specific gifted education policy: Yes
Identification notations: Principals will plan and implement strategies to identify gifted and talented students.
Identification practices/assessment types listed: Identification processes for gifted and talented students should: be inclusive; be flexible and continuous; use information from a variety of sources, including classroom teacher observation and assessment, as well as knowledge obtained from other people (e.g., parents and peers); help teachers identify a student's intellectual strengths, artistic or linguistic talents, and social and emotional needs; and direct the quality of the teaching and learning environment [50].
Examples of assessment instruments listed: None for the identification of gifted/talented students. For the acceleration of students from pre-primary to Year 10, examples include: performance in classwork and classroom teacher observation; school assessments; information from other sources, such as parents and peers; IQ tests and psychological assessments; other standardised achievement tests; NAPLAN performance; the Iowa Acceleration Scale; and information about social-emotional readiness [51].
Source/s: Gifted and Talented in Public Schools [50]; Guidelines for the Acceleration of Students Pre-primary–Year 10 [51].

* No readily found or available published policy document. ^ Victoria has extensive publicly accessible information about "high-ability" students; see the 'Source/s' entries above for further details.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
