Review

Animation in Speech, Language, and Communication Assessment of Children: A Scoping Review

by
Triantafyllia I. Vlachou
1,
Maria Kambanaros
2,
Arhonto Terzi
1 and
Voula C. Georgopoulos
1,3,*
1
Department of Speech and Language Therapy, University of Patras, 26504 Patras, Greece
2
The Brain and Neurorehabilitation Lab, Department of Rehabilitation Sciences, Cyprus University of Technology, 3041 Limassol, Cyprus
3
Primary Health Care Laboratory, School of Health Rehabilitation Sciences, University of Patras, 26504 Patras, Greece
*
Author to whom correspondence should be addressed.
Languages 2026, 11(2), 24; https://doi.org/10.3390/languages11020024
Submission received: 22 October 2025 / Revised: 12 January 2026 / Accepted: 22 January 2026 / Published: 29 January 2026

Abstract

Animation has been used to assess speech, language, and communication skills in children. We aimed to map and synthesize relevant research addressing how and when animation is used for assessment purposes in speech–language pathology practice. Four databases were searched, yielding 18 studies that met the inclusion criteria. Data were extracted on study design, objectives, participant characteristics, results, assessment areas, purposes of animation use, underlying theoretical and research bases, and technical features. Theoretical grounding for children’s perception of animation was not evident in the studies, while several studies showed research foundations for its use in speech–language pathology assessment. Various animations were used for diverse purposes and research goals, primarily involving typically developing children and fewer clinical samples. All studies focused on language assessment. The diversity in animation research precludes conclusions regarding best practices in the use of animation in speech–language pathology assessment. An initial evidence base was established, documenting research approaches, the effects of animation on language and cognition, observed behaviors, the performance of clinical samples, and psychometric properties of the assessment tools. Limitations, knowledge gaps, and future research are discussed.

1. Introduction

This scoping review examines the evidence on how and when animation is used to assess children with potentially atypical speech, language, and communication skills, given the diverse clinical populations that speech and language pathologists (SLPs) serve. In this work we adopt the definition of Bétrancourt and Tversky (2000), who describe animation as “any application that generates a sequence of frames, wherein each frame represents a modification of the preceding one, and where the sequencing of frames is determined by either the designer or the user” (Bétrancourt & Tversky, 2000, p. 313).

1.1. Theoretical Approaches to Animation

Animation is a relatively new element in the long history of visual communication (Schnotz & Lowe, 2008). In cognitive and educational psychology, it has been widely studied as a graphic feature of educational materials used to communicate knowledge (e.g., in chemistry and biology) to typical populations (Höffler & Leutner, 2007; Berney & Bétrancourt, 2016; Ploetzner et al., 2020). Tversky et al. (2002) questioned the learning value of animation, citing issues such as increased cognitive load on working memory and distraction. They argued that animations should follow the Gestalt Principles of Congruence (i.e., cognitive congruence between the visual representation and the concept) and Apprehension (i.e., the structure and content of the representation should be easily and accurately perceived and comprehended). As a result, animations of events should be slow, clear, schematic, and less realistic. Finally, they suggested that research comparing animated and static graphics must use equivalent content and methods to yield valid results.
Schnotz and Lowe (2008) discussed the ability of animation to convey temporal information in addition to the visuospatial content of static images, reducing the need for learners to infer temporal changes. They argued that both animated and static images are processed by the same perceptual and cognitive system, which can adapt to both dynamic and stable stimuli. Thus, animation can enhance learning when its design is aligned with the requirements of cognition and processing.
Animation has also been examined through frameworks such as cognitive load theory and the cognitive theory of multimedia learning (Mayer, 2001; Mayer & Moreno, 2002), as well as the animation processing model (Lowe & Boucheix, 2017), each offering design strategies to optimize learning outcomes and to minimize the cognitive load.
In the field of speech–language pathology, Jagaroo and Wilkinson (2008), in a visual cognitive neuroscience approach, discussed how typical individuals perceive motion in animations and its potential role as symbols in Augmentative and Alternative Communication (AAC) systems. They noted the lack of an evidence-based framework for clinical decisions and recommended further research focusing on motion processing profiles of clinical populations (e.g., autism) before applying animations in clinical practice.

1.2. Research Involving Animation in Speech–Language Pathology

Intervention research for children in speech–language pathology has been influenced by different forms of animation. For example, numerous multimedia applications featuring animations have been developed to engage children in language tasks, helping them to imitate and retain new linguistic information (e.g., Folksman et al., 2013; Tommy & Minoi, 2016). Computer-animated tutors (i.e., animated characters that talk) have also been used with children with autism (Mulholland et al., 2008; Massaro & Bosseler, 2006) or with hearing loss (Massaro & Light, 2004). In several studies with autistic children, animation served as a means of enhancing communication (Hetzroni & Tannous, 2004), improving gestural communication skills (So et al., 2016), or facilitating verb meaning acquisition (Horvath et al., 2018). However, Carter et al. (2014) found that human interaction was more effective than animated characters in eliciting verbal and non-verbal communication from children with autism.
Animation has been a primary focus of research in the field of AAC, providing substantial evidence of its iconicity properties and effectiveness as a symbol in studies involving typically developing (TD) children (e.g., Mineo et al., 2008; Schlosser et al., 2014; Harmon et al., 2014; McCarthy & Boster, 2018; Brock et al., 2022), as well as children with ASD (Horwitz et al., 2014; Schlosser et al., 2019; Choe et al., 2022) and intellectual disabilities (Fujisawa et al., 2011; Lee & Hong, 2013).
Animation has also been used in the development of various tasks, games, and tests for assessing children. For example, Hodges et al. (2017) developed the Monosyllable Imitation Test for toddlers, which used animation to create an engaging and pragmatically motivating elicitation context. Similarly, Krok et al. (2022) created the Sentence Diversity Priming Task, using animation to engage young children and hold their attention. Also, Diehm et al. (2020) investigated the effects of animated and static stories on the narrative skills of preschoolers. Such studies motivated the present scoping review, which aims to map relevant research on child assessment using animations. This issue is particularly important given that the value of animation in the education of adolescents and adults has long been questioned (e.g., Tversky et al., 2002), with strong arguments concerning increased cognitive load due to the transient nature of animated information and the resulting potential for attentional distraction. The increasingly common use of animation in assessment tasks for children with speech and language pathologies/disorders warrants careful consideration, given that these children may present with comorbid conditions and/or difficulties that affect both their developing cognitive skills and their ability to sustain attention. Consequently, important questions arise regarding the design, use, and impact of animation during the assessment process, as well as the interpretation of assessment outcomes. Accordingly, the findings address the rationale for using animation in the development of assessment tools, the clinical populations for which these tools were developed, and the skills the tools were designed to evaluate. This evidence may reveal knowledge gaps and inform future research (Munn et al., 2018; Tricco et al., 2018) regarding best practices in developing assessment measures with animated items for children.
Formal assessments are designed to measure specific abilities (e.g., language skills) and are typically administered individually. Their development involves careful planning, extensive research, and psychometric validation. For assessments targeting children, test scores must be sensitive to developmental changes (Roid, 2006). According to Dollaghan (2008), assessment in speech–language pathology serves classification purposes (screening, diagnosis, and differential diagnosis) and differs from the assessment of other abilities (e.g., communication) or from assessment for planning and monitoring interventions, because the underlying research evidence differs: the former focuses on the classification accuracy of tools, while the latter examines their impact on treatment outcomes. The present scoping review includes all types of assessments at any stage of the research process.
This review is significant because animation is increasingly used in child speech and language assessment tasks, although with little guidance on when and why it is effective or how animated stimuli are designed and reported. Mapping this evidence can inform assessment development and highlight gaps that affect score interpretation and clinical application.
As a scoping review, this study intentionally adopted a broad scope that reflects how speech, language, and communication are assessed as interconnected domains, including work situated in AAC.

2. Materials and Methods

2.1. Research Method

A scoping review is considered an appropriate method to investigate the use of animation in speech, language, and communication assessment methods because (a) it addresses a current trend driven by advancing technology and evolving assessment delivery methods (e.g., Krok et al., 2022), (b) the development of evidence-based evaluations involves multiple research stages (Downing, 2006), which is reflected in the diverse research found in the literature, and (c) assessments are designed to evaluate various abilities (Roid, 2006), with research samples and target populations varying accordingly.
A preliminary search conducted in July 2024 in the Cochrane Database of Systematic Reviews, the SpeechBITE database, and the Campbell Systematic Reviews journal did not identify any relevant systematic or scoping reviews. A few reviews in the field of speech–language pathology have examined the effect of animation in interventions for children and adults. Schlosser et al. (2022) conducted a scoping review on the role of animation in AAC for typically developing individuals and those with developmental disabilities. Vlachou et al. (2024) performed a systematic review of the effect of animation versus static pictures on language development in TD children. Frick et al. (2022) reviewed research on animation in AAC, suggesting future directions. Collectively, these reviews have shown that animation can support communication and language research in speech–language pathology, particularly in AAC contexts. The present scoping review builds on this work by focusing specifically on animation used for child assessment, an area that has not yet been addressed in earlier reviews.
Given the considerations above, the main research question is formulated as follows:
“What studies have utilized animations to assess children with speech, language, or communication deficits and/or disorders?”
Consistent with scoping review methodology, the main research question was intentionally broad to support comprehensive mapping of the literature. The following sub-questions structured the data charting and synthesis:
(a)
What theoretical or research foundations supported the use of animation?
(b)
What was the purpose of using animation?
(c)
In what form were animations utilized?
(d)
What skills were the tools or tasks designed to assess?
(e)
Which clinical populations were they developed for?
(f)
What were the technical features of the animations?
This scoping review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) protocol (Tricco et al., 2018). The completed checklist is available in Appendix A. This scoping review was not pre-registered.

2.2. Eligibility Criteria

The selection criteria for this scoping review followed the Population, Concept, and Context (PCC) framework, focusing on studies involving children aged 0 to 18 years (from infancy to adolescence), regardless of typical or atypical development, race, gender, language, or socioeconomic status. Chronological age was chosen for this range because, in clinical populations, it does not always align with mental age or verbal age. Studies that also involved adult samples were included if adults were part of the research design. The concept of interest was the use of animation (in any form, excluding multimedia apps) in an assessment game, task, or test for children. The context included both informal (e.g., tasks) and formal assessments (e.g., tests) at any stage of the research process (e.g., experimental, piloting, standardization), aimed at evaluating speech, language, communication, or related cognitive areas. No restrictions were placed on the location (e.g., home, clinic, school) or method of administration (e.g., face-to-face or remote video platforms).
Studies on Theory of Mind (ToM), first and second language instruction, television animation viewing, eye-tracking, virtual reality, and avatar-based research were excluded, as were those that used animations without a clear assessment purpose. If multiple articles discussed the same assessment tool, the one with the most detailed information on animation was included. However, if the same test was administered on different participant groups, all relevant studies were included to capture responses to animated items across groups.
The present scoping review focuses on quantitative studies, given the nature of assessment research. It includes experimental and quasi-experimental studies, developmental studies, and studies examining psychometric qualities and standardization processes. Published papers, conference papers, and theses were included to avoid publication bias. Only studies written in English (for accessibility) and published since 2000 (when the technology gradually began to be used in evaluations) were included. Qualitative research, opinion reports, and gray literature were excluded because they provide different types of research evidence. The complete inclusion and exclusion criteria are presented in Table 1.

2.3. Search Strategy

A literature search was conducted from August to September 2024 (last search on 10 September) based on the eligibility criteria outlined above. Scopus, PubMed, ScienceDirect, and ProQuest databases were searched for studies between 2000 and 2024.
The authors agreed to apply the following search terms across all databases: “animation” AND “children” AND (“assessment” OR “evaluation” OR “testing”) AND (“autism” OR “intellectual disabilities” OR “language disorders” OR “communication disorders” OR “speech disorders”). The “all fields” search option was used, as an initial search showed that relevant papers did not always include “animation” in their abstracts and keywords. An example of the search strategy for the PubMed database is presented in Table 2.
To ensure consistency, the authors developed an inclusion criteria form applicable to all potentially eligible studies, similar to Table 1. Using this form, the first author conducted the process of selecting the sources of evidence (screening and eligibility). The eligible abstracts were independently assessed by the last author to verify their suitability for the scoping review. In the next step, the full texts of eligible papers were retrieved and reviewed to determine which met the predetermined criteria. An additional search of the references was conducted to identify further potential studies. No other related systematic or scoping reviews were found. The final set of included studies was reviewed independently by the second author. Any disagreements were resolved through discussion among the authors. The study selection process is illustrated in the PRISMA flow diagram (Page et al., 2021) (Figure 1), which presents the number of records identified, screened, excluded, and included at each stage of the review.

2.4. Selection of Evidence Sources

During the identification process, four databases (PubMed, Scopus, ScienceDirect, ProQuest) were searched, yielding 3980 sources. After applying filters (e.g., time, language, scientific field) and removing duplicates, 1035 citations remained for screening. Based on the titles, 995 studies were excluded. The abstracts of the remaining 40 papers were then reviewed, leading to the exclusion of 20 more studies. Ultimately, 20 sources were retrieved for full eligibility assessment. At this stage, 7 additional studies were excluded (1 involved adults, 1 was a qualitative study, 1 involved intervention, and 4 were related to other scientific fields), leaving 13 studies eligible for the scoping review. An additional 5 studies were found through citation tracking, bringing the final total to 18. The process was supported by Mendeley Reference Manager v2.80.1.

2.5. Data Charting

The included studies were organized chronologically, and data were extracted on the following variables: (a) the name or description of the assessment task or tool, (b) study design, (c) research objectives, (d) participants (sample size and diagnoses), (e) area of assessment (e.g., speech, language), (f) the target population for which the task or tool was developed, (g) purpose of introducing animation, and (h) study results. These data are summarized in Table 3.
Additionally, data on the following animation features were collected: (a) whether animations were original or commercially available, (b) technical descriptions, (c) frame speed, (d) duration, (e) availability of animation examples, (f) auditory input, and (g) theoretical or research basis for animation development and use. These data are presented in Table 4. Terminology and descriptions were used as reported by the authors.

3. Results

In line with scoping review methodology, this section describes the included studies and highlights key characteristics and reporting gaps. Broader patterns across studies, including comparisons between animated and static materials where relevant, are presented in Section 4.

3.1. Overview of Included Studies

The included studies (n = 18) provided data on various types of animations used in the development of assessment tasks and reflected an increasing trend in the use of animation after 2010. The research teams comprised SLPs, psychologists with diverse expertise (e.g., cognitive science, neuroscience, developmental psychology), computer engineers, and other scientists. These studies were conducted in several countries, including the United States, Republic of Ireland, Australia, the United Kingdom, South Africa, Finland, and Slovakia, in their respective languages.
The research samples included 2416 TD children, 362 of whom were dual-language learners, 152 children with DLD (or late talkers), 9 children with ASD, 33 children with Down syndrome, 32 children with cognitive impairments, and 98 children at risk for dyslexia. Participants ranged in age from 1 to 17.6 years. Three studies included adult participants (n = 118) as part of the research design. To our knowledge, there are no other studies that meet our selection criteria.

3.2. Research Aims and Animations

Gazella and Stockman (2003) discussed the potential for developing a standardized story-retelling task to assess grammar and vocabulary skills. They created a story presented through video-based puppet animation and compared it with an audio-only version. The study involved 29 TD children aged 4;2 to 5;6. No significant differences were found between the audio-only and audiovisual groups regarding the quantity of talking, lexical and syntactic features, or responses to direct questions. The authors referenced literature on video-based story presentations and employed this method to provide a more accurate depiction of events and human characteristics, aiming to enhance comprehension, particularly for children with language difficulties. A professional designed the puppets, and four individuals provided voice input. The puppets were manually manipulated, and the soundless video was edited for smooth transitions before being synchronized with recorded voices. Speech and actions did not overlap to avoid distraction. No details on duration, frame speed, or visual examples were provided.
Puolakanaho et al. (2004) developed the Heps-Kups Land, a computer-animation-based assessment tool, for language development and phonological awareness skills, using animations to motivate and engage children. The effectiveness of animated tasks had been previously demonstrated by Puolakanaho et al. (2003). The authors compared the performance of 98 children with a familial risk of dyslexia with 91 TD children, all aged 3.5 years. The results indicated that (a) the control group outperformed the at-risk group in phonological awareness; (b) phonological awareness can be assessed well before reading acquisition; and (c) the tasks could be further improved psychometrically. The tool was designed as a spontaneous speech game using animations and activities. Various task types from the literature were adapted and integrated into a computer animation format. No information was provided regarding the duration of the animations, frame speed, or visual samples.
Bishop et al. (2009) developed the Animation Description Task to assess cerebral lateralization using functional transcranial Doppler ultrasonography (fTCD). Designed for both young children with neurological conditions and non-clinical research samples, the task utilized animations to engage children. The study aimed to compare the sensitivity of different fTCD paradigms and evaluate the novel task with a sample of 21 TD children (aged 4). Results indicated that the new task demonstrated validity and reliability comparable to existing methods. The task was developed based on prior pilot studies and involved thirty 12 s animated cartoon clips. The authors did not report the frame rate or specify the types of non-speech sounds used. The animations are available upon request.
McGonigle-Chalmers et al. (2013) investigated whether low-functioning children with autism have syntactic awareness, despite limited language production. They developed the game The Eventaurs, a computerized learning task designed to motivate children and capture their attention during assessment. When children provided a syntactically correct response, an animation depicted the corresponding event, illustrating the sentence’s meaning. The study involved 9 low-functioning children with profound expressive language impairment and autism (ages 5 to 17;6). The results suggested that (a) none of the children lacked syntactic awareness and (b) their elementary syntactic control in a non-speech domain was superior to that observed in their spoken language. The task utilized 3D-colored animations and images created with Macromedia Flash 5.3, with sample figures provided. No other technical information was offered. The study was based on previous research by McGonigle and Chalmers (2002).
Alt (2013) investigated visual fast mapping and visual working memory skills in children with specific language impairment (SLI). The study involved two computer-based visual fast mapping games featuring animated dinosaurs. A total of 25 children with SLI and 25 TD children, aged 7 to 8, participated, identifying specific visual features. Performance on the tasks was correlated with the formal language tests used for assessment. The findings suggested that children with SLI may have a domain-general deficit, although their visual impairments were milder than their verbal impairments. Adobe Flash software was used to design colored animated dinosaurs with distinct visual features. The game consisted of 10 short, silent animated scenes, each lasting 10 s, set against simple backgrounds, but no information on the frame rate was provided. The rationale for using animations was not reported.
Klop and Engelbrecht (2013) investigated the effects of two visual presentation modalities—a soundless animated video and a wordless picture book—on children’s narratives for assessment purposes. They applied a between-subjects design with a sample of 20 TD children aged 8;5 to 9;4 years. Both modalities produced similar narratives, leading the authors to conclude that animated stimuli are not superior to static pictures for eliciting narration. The animated video story simulated the corresponding color picture book, referencing relevant literature that compares animated and static presentations. Designed by a graphic designer, the animations featured character movements, facial expressions, and fading between images, all without distracting backgrounds. The animated story lasted 2 min, introducing one picture at a time, though the frame rate was not reported. The pictures of the story are available.
Polišenská and Kapalková (2014) developed the Nonword Repetition Task, a screening tool for assessing language skills and phonological processes. They emphasized that using consistent, recorded nonword (i.e., pseudo-word) stimuli minimizes research bias. Evidence was presented on the task’s validity, reliability, and environmental influences. A sample of 391 TD children aged 2 to 6 years participated, and results showed high compliance rates and strong reliability. The task simulated creating a necklace with colored beads, based on a short story aimed at enhancing compliance, participation, and engagement. Designed using PowerPoint animation effects, it lasted 1.33 min and is available online. The authors did not specify the frame rate or provide relevant literature supporting the use of animation.
The Monosyllable Imitation Test for Toddlers (Hodges et al., 2017) was developed to assess verbal imitation, language development, and speech production. The study examined the effects of stimulus characteristics on imitation accuracy, as well as compliance and diagnostic accuracy. A sample of 26 TD and 26 late-talking children (aged 25–35 months) participated. The findings indicated that nonword stimuli characteristics influenced imitation accuracy, while the test showed good compliance rates and reasonable diagnostic accuracy. The assessment featured two brief (2.24 min each) animated stories designed to create a pragmatically motivating and engaging context for imitation. The animations were created using the iPad applications PhotoPuppet HD and iMovie, though the authors did not provide animation examples, report the frame rate, or offer any literature background for using animation.
Badcock et al. (2018) developed the What Box task for language lateralization assessment using functional transcranial Doppler ultrasonography (fTCD). This task was designed to elicit active language production (covert or overt) in populations where following complex instructions is challenging (e.g., toddlers). A sample of 95 TD children (aged 1–5 years) participated. Results showed the task’s effectiveness with young children and medium correspondence with the Word Generation task. The task features an animated face “searching” for an object, which is revealed when a box opens, prompting children to label it. The total task duration was 34 s, described through a detailed schematic diagram. The animations were created from a series of still-frame images. The authors based their research on prior literature on fTCD methods, both with and without animated stimuli.
Frizelle et al. (2018a) compared two methods for assessing complex syntax comprehension: a sentence-verification animation task and a multiple-choice picture-selection task. A sample of 103 TD children (aged 3;6 to 4;11) participated, showing better performance on the animated sentence-verification task. The study also found that each method revealed a different hierarchy of syntactic constructions, with some constructions more influenced by the testing methods. This study was grounded in the literature on truth-value-judgment sentence-verification assessments. The authors argued that the animated items provided a pragmatically appropriate context, reflected language processing in natural discourse, and supported real-time processing while reducing memory demands and minimizing picture interpretation. Each animation averaged 6 s, depicting various syntactic constructions, with the relevant videos available online. The animation software and frame rate were not specified.
Frizelle et al. (2018b), building on the previous study, developed the non-standardized Test of Complex Syntax–Electronic, focusing on children with Down syndrome (DS). The internal consistency of the test was evaluated. The study included 33 children with DS, 32 with cognitive impairments, and 33 TD children. Results showed that children with DS performed poorly on most sentences compared to both control groups. The authors suggested DS status accounted for significant variance beyond memory skills. This study was included as it provides evidence for the use of animation in assessing atypical children.
Diehm et al. (2020) investigated the effect of story presentation format (static picture book vs. animated video) on narrative retells by preschool children for language assessment purposes. The study involved 73 TD children (aged 3 to 5). Results indicated that children performed significantly better with the animated stories in both the quantity and quality of language. The authors used online animated videos and developed four narrative scripts to create both animated and static stimuli. The stories lasted between 2.15 and 2.35 min. No technical details or visual examples of the animations were provided. The study referenced prior literature on the impact of multimedia features in intervention and assessment.
Petit et al. (2020b) assessed the reliability of neural signals in response to semantic violations to develop a language comprehension test for hard-to-assess populations, such as those with ASD. The authors designed congruent and incongruent animations paired with spoken sentences to measure responses using electroencephalography (EEG). Focusing on lexical–semantic processing, they argued that animated stimuli would provide strong semantic contexts and enhance children’s engagement. The study included 20 TD children aged 9 to 12 years. Results indicated heterogeneous neural responses, suggesting the need for individual subject analyses. The animated items were colorful cartoons created using Adobe Photoshop CC 2017, with a minimum duration of 3 s. The authors provided videos and the frame rate in a figure. This study builds on previous research by Petit et al. (2020a), incorporating animations as a new feature.
Krok et al. (2022) presented the Sentence Diversity Priming Task, a structured elicitation protocol that assesses sentence diversity in toddlers via video platforms with parental involvement. They emphasized the importance of such an assessment and presented a preliminary analysis of compliance and developmental associations. The study included 32 TD toddlers aged 30 to 35 months. Findings suggest that the task is engaging, holds toddlers’ attention, reveals robust individual differences, correlates positively with parent-reported language measures, and has potential for tracking children’s language development over time. The task simulates a colorful animated picture book with familiar actions, creating an enjoyable context for parent–child interaction. Designed with minimal background distractions, it was developed using online animation tools and presented in Microsoft PowerPoint. Each animation lasted 10 s, with figures provided, though the frame rate was not reported. The task was based on a prior study by Krok and Leonard (2018).
Lloyd-Esenkaya et al. (2024) developed the Zoti’s Social Toolkit to assess emotion inferencing and conflict resolution knowledge by connecting language capabilities with social skills of children with DLD. This tool minimizes reliance on language skills, enabling task completion despite language impairments. The project involved four studies with 116 TD children, 22 children with DLD aged 7 to 11, and 20 TD adults. Results indicated that the tool is suitable for children with and without language difficulties. The tool utilizes animated scenarios featuring colorful animations in which the characters lack facial features or expressions. The animations were created using a stop-motion technique, combining several digitally drawn image frames. Each animation lasts approximately 11 s; however, the frame rate was not reported. Figures and videos are available. This test was based on the study by Ford and Milosky (2008). This study was included in the review, even though it also includes qualitative research.
The following three studies examined the original Quick Interactive Language Screener (QUILS) (Golinkoff et al., 2017) and two recent versions: the Quick Interactive Language Screener: English–Spanish (QUILS: ES) (Iglesias et al., 2021) and the Quick Interactive Language Screener: Toddlers (QUILS: TOD) (Jackson et al., 2023). These tests, developed for different populations, assess language comprehension (vocabulary, syntax) and language processing using the same structure and animated items. While it is unclear if each version uses all the same or different animations, they are included in this review for providing evidence on how large research samples respond to animated stimuli.
De Villiers et al. (2021) discussed the theoretical and methodological aspects of developing QUILS: ES, describing its development, validation, and norming. The study, involving 362 bilingual children aged 3;0 to 5;11, found that QUILS: ES offers fair testing for dual-language learners. The authors argued that animated items depict event sequences more accurately than static pictures, and provided figures of them, but gave no details about the animations’ construction or the research background for using them.
Pace et al. (2022) examined the classification accuracy of the original QUILS in two studies involving a total of 193 children (79 with DLD, 114 TD; aged 3;0 to 6;9). Animations presented various syntactic structures, with examples of static pictures provided. However, further details on the animations’ construction and supporting literature were not included. The findings support the clinical use of QUILS for identifying DLD.
Jackson et al. (2023) described the development of QUILS: TOD, a measure for two-year-olds. Over four years, the project involved three phases with a sample of 874 TD children. Results indicated that QUILS: TOD could detect language deficits in toddlers and could be used for research purposes. The test, presented as a game, uses animated items to depict actions and events, reducing the inference demands of static images. While the authors provided relevant example static pictures of animations, they did not offer technical details. However, they presented a strong theoretical and research foundation on the use of animation, developmental milestones, and modes of introduction.

4. Discussion

This scoping review explored the use of animations in assessing speech, language, and communication in children, incorporating findings from 18 studies. The data collected (Table 3 and Table 4) provided a foundation for addressing the following research questions.
  • (a) What theoretical or research foundations supported the use of animation?
Although no study reported a clear theoretical basis for animation perception, several studies presented a stronger research foundation. Specifically, Klop and Engelbrecht (2013) provided a solid literature background on animation research, while Jackson et al. (2023) offered a well-supported research and theoretical foundation for each stage of test development. Bishop et al. (2009) relied on pilot study observations, and Badcock et al. (2018) built on prior language lateralization research. McGonigle-Chalmers et al. (2013), Petit et al. (2020b), and Krok et al. (2022) referenced their own prior research. Lloyd-Esenkaya et al. (2024) drew inspiration from Ford and Milosky (2008), while Puolakanaho et al. (2004) used task types from the existing literature.
Other approaches included video story presentations (Gazella & Stockman, 2003), multimedia features (Diehm et al., 2020), and truth-value-judgment sentence-verification assessments (Frizelle et al., 2018a). Alt (2013), Polišenská and Kapalková (2014), and Hodges et al. (2017) lacked animation-based references. Similarly, De Villiers et al. (2021) and Pace et al. (2022) did not include a research background on animation in their studies based on the original QUILS test.
  • (b) What was the purpose of using animation?
Many researchers reported using animation to motivate and engage children to carry out the required tasks (e.g., Puolakanaho et al., 2004; Bishop et al., 2009). Others used animations for more accurate visual representation of the items (Gazella & Stockman, 2003; De Villiers et al., 2021; Pace et al., 2022; Jackson et al., 2023). Hodges et al. (2017) argued that animations provided a pragmatically motivating context for imitation, but they also used animation to increase compliance, as did Polišenská and Kapalková (2014). Frizelle et al. (2018a, 2018b) emphasized the role of animations in supporting pragmatic contexts that allow for language processing and reduced memory demands. Petit et al. (2020b) highlighted the representation of both strong semantic contexts and engagement. Krok et al. (2022) used animations to simulate home activities and depict familiar actions for toddlers.
The purpose of the studies by Klop and Engelbrecht (2013), Frizelle et al. (2018a), and Diehm et al. (2020) was comparison of animations with static pictures, with the latter reporting that animations prompted children to narrate. Gazella and Stockman (2003) compared animation as an audiovisual mode with an audio-only mode. Badcock et al. (2018) used animated stimuli to elicit covert or overt language production. Animations were also used in all versions of the QUILS tests to depict specific items for language comprehension and processing (De Villiers et al., 2021; Pace et al., 2022; Jackson et al., 2023), while Alt (2013) utilized animations for visual fast mapping exercises. McGonigle-Chalmers et al. (2013) employed animation to depict correct responses. Finally, Lloyd-Esenkaya et al. (2024) used animations to support children with DLD in test completion by reducing reliance on their receptive and expressive language skills.
  • (c) In what form were animations utilized?
Animations were used in several ways: (a) as features in assessment games (Puolakanaho et al., 2004; McGonigle-Chalmers et al., 2013; Jackson et al., 2023); (b) as items in tests, primarily to depict actions and events (De Villiers et al., 2021; Pace et al., 2022; Jackson et al., 2023); (c) as tasks in experimental studies (Alt, 2013; Frizelle et al., 2018a); (d) as stimuli in neuropsychological studies (Bishop et al., 2009; Badcock et al., 2018; Petit et al., 2020b); (e) as simulations of books (Klop & Engelbrecht, 2013; Krok et al., 2022) or a necklace (Polišenská & Kapalková, 2014); and (f) as short animated stories or scenarios (Gazella & Stockman, 2003; Klop & Engelbrecht, 2013; Hodges et al., 2017; Diehm et al., 2020; Lloyd-Esenkaya et al., 2024). In some cases, animated GIFs were used to provide feedback to children for encouragement (e.g., De Villiers et al., 2021; Jackson et al., 2023).
  • (d) What skills were the tools or tasks designed to assess?
All assessment tasks or tools were developed to evaluate aspects of receptive or expressive language from the single-word level to discourse measures. These include phonology, vocabulary, syntax, sentence diversity, semantics, visual fast mapping, language processing, narrative skills, language lateralization, and social skills in relation to language skills (Table 3).
  • (e) Which clinical populations were they developed for?
Children with, or at risk for, language disorders were the primary population of interest, with a trend toward screening tools and early assessment measures (Gazella & Stockman, 2003; Alt, 2013; Klop & Engelbrecht, 2013; Polišenská & Kapalková, 2014; Hodges et al., 2017; Frizelle et al., 2018a; Diehm et al., 2020; De Villiers et al., 2021; Krok et al., 2022; Pace et al., 2022; Jackson et al., 2023; Lloyd-Esenkaya et al., 2024). Two studies focused on developing assessments for ASD (McGonigle-Chalmers et al., 2013; Petit et al., 2020b), one on Down syndrome and cognitive impairment (Frizelle et al., 2018b), one on dyslexia (Puolakanaho et al., 2004), and two on children with neurological conditions for language lateralization (Bishop et al., 2009; Badcock et al., 2018).
  • (f) What were the technical features of the animations?
All animations, except those in Diehm et al. (2020), were original creations for specific research purposes and varied in technical construction and duration (see Table 4). Information on frame rates and levels of realism was limited. Only two studies reported frame rates (Badcock et al., 2018; Petit et al., 2020b), while many did not specify the duration of the animated items or tasks (Gazella & Stockman, 2003; Puolakanaho et al., 2004; McGonigle-Chalmers et al., 2013; De Villiers et al., 2021; Pace et al., 2022; Jackson et al., 2023). Several studies did not provide animation examples (Gazella & Stockman, 2003; Puolakanaho et al., 2004; Hodges et al., 2017; Diehm et al., 2020; Bishop et al., 2009). Moreover, all but three studies incorporated auditory input for instructions or feedback (Alt, 2013; Klop & Engelbrecht, 2013; Krok et al., 2022). One study used sounds but not speech in the animation itself (Bishop et al., 2009).

4.1. Evidence Base for Animation in Speech–Language Pathology Assessment

This section synthesizes findings across studies in relation to the review sub-questions, highlighting recurring patterns, limitations, and gaps in the evidence base.

4.1.1. Research Approaches

Various research methods were applied, each contributing to our understanding of the use of animation in assessment.
(a) Experimental studies that directly compared animated and static representations reported mixed findings, with some showing comparable performance across modalities (Klop & Engelbrecht, 2013) and others demonstrating improved performance with animation for specific language tasks (Frizelle et al., 2018a; Diehm et al., 2020).
(b) Studies that supported the use of animation based on prior literature or experimental research (Puolakanaho et al., 2004; Bishop et al., 2009; McGonigle-Chalmers et al., 2013; Badcock et al., 2018; Krok et al., 2022; Jackson et al., 2023; Lloyd-Esenkaya et al., 2024).
(c) Studies that, while using animation to meet specific research objectives, provided valuable insights into children’s responses and behaviors toward animation (Gazella & Stockman, 2003; Alt, 2013; Polišenská & Kapalková, 2014; Hodges et al., 2017; Petit et al., 2020b).
(d) Standardization studies of language assessment measures that used animated items with large research samples (De Villiers et al., 2021; Jackson et al., 2023).
(e) Studies applying animated tasks in children with language disorders, providing evidence from atypical language development (Frizelle et al., 2018b; Pace et al., 2022; Lloyd-Esenkaya et al., 2024).
Importantly, most studies did not include an equivalent static comparison condition (Tversky et al., 2002), as animation was built into the assessment design. This limits conclusions about whether animation offers advantages over static representations.

4.1.2. Animation and Language

It is not surprising that all studies in this review used animation for the assessment of various receptive and expressive language skills. The ability of animation to convey temporal information (Schnotz & Lowe, 2008) appears to positively impact the comprehension of linguistic elements involving time, such as actions, event sequences, specific prepositions, complex syntactic structures, and narratives. This feature helps children avoid misinterpretations that may occur in assessments with static pictures (Frizelle et al., 2018a; De Villiers et al., 2021; Jackson et al., 2023; Lloyd-Esenkaya et al., 2024). The temporal nature of animation also seems to support the pragmatic and semantic context of linguistic representations by directly illustrating events in a context and helping children form and process semantic representations (Hodges et al., 2017; Frizelle et al., 2018a; Petit et al., 2020b). The assessment of expressive language abilities was similarly supported by various animated tasks (Puolakanaho et al., 2004; Polišenská & Kapalková, 2014; Hodges et al., 2017; Diehm et al., 2020; Krok et al., 2022). This evidence aligns with the findings of the systematic review by Vlachou et al. (2024), which also reported a positive effect of animation on the receptive and expressive language of TD children.
Furthermore, De Villiers et al. (2021) demonstrated that animated items can elicit target-language skills and responses from dual-language learners in both English and Spanish. Finally, Lloyd-Esenkaya et al. (2024) focused on the relationship between language skills and social understanding and designed their animated scenarios to be both cross-cultural and cross-linguistic.
Beyond the observed behaviors of children in response to animated tasks, neuropsychological studies have expanded our understanding of the brain’s response to animated stimuli, providing evidence of language lateralization, even in toddlers (Bishop et al., 2009; Badcock et al., 2018). Similarly, Petit et al. (2020b), using electroencephalography (EEG), provided evidence for measuring neural response to semantic violations depicted through congruent and incongruent animations.

4.1.3. Motivation, Engagement, and Participation

Most authors reported that animations were used to enhance motivation, engagement, or participation, with no opposing findings. However, these claims would be more robust if grounded in explicit theoretical or empirical accounts explaining how and why animation may influence assessment behavior relative to static representations. In studies involving toddlers, animations were employed to increase compliance rates, and the results supported this approach (Polišenská & Kapalková, 2014; Hodges et al., 2017), even in distance assessment administration (Krok et al., 2022).

4.1.4. Data from Clinical Samples

Positive evidence of responses to animated tasks in children with or at risk for DLD comes from studies by Alt (2013), Hodges et al. (2017), Pace et al. (2022), and Lloyd-Esenkaya et al. (2024), despite the heterogeneity in research samples (e.g., age, language skills), differing research objectives, and the variety of animated tasks used. Additionally, animations depicting the correct syntactic responses of nine low-functioning children with ASD had a positive effect, attracting their attention and engaging them in the game (McGonigle-Chalmers et al., 2013).
Finally, differences in performance between children with Down syndrome and those with cognitive impairments in the study by Frizelle et al. (2018b) may deserve further investigation by first considering their motion perception abilities related to the animated items (Jagaroo & Wilkinson, 2008).

4.1.5. Animation and Cognition

The impact of animation on cognition during children’s assessments requires careful consideration, particularly regarding its effects on working memory and attention (Tversky et al., 2002).
Memory aspects: (a) Most animations had brief exposure times; (b) Alt (2013) reported no negative effects on visual working memory or fast mapping in TD children or those with SLI; (c) Frizelle et al. (2018a, 2018b) argued that animations reduced memory demands; (d) Available animations provide examples of speed rate, though insufficient data exists for best practices; (e) Gazella and Stockman (2003) and Klop and Engelbrecht (2013) used smooth transitions but did not discuss their cognitive impact; (f) Petit et al. (2020b) noted that fading effects minimized sudden onsets and reduced eye movements.
Attention and distraction: (a) Animation’s ability to attract attention was noted in some studies (McGonigle-Chalmers et al., 2013; Krok et al., 2022), likely explained by the Stimulus Movement Effect (Nealis et al., 1977); (b) Simple backgrounds were used to avoid distraction (Alt, 2013; Klop & Engelbrecht, 2013), with Krok et al. (2022) providing specific examples; (c) Petit et al. (2020b) discussed concerns that animations might divert participants’ attention from listening to sentences; (d) The impact of realism level was not investigated.
Visual and auditory stimuli: (a) The animated videos in Gazella and Stockman (2003) avoided overlapping speech and actions to prevent distraction; (b) Bishop et al. (2009) utilized non-speech sounds to stimulate language lateralization.
Finally, De Villiers et al. (2021), Pace et al. (2022), and Jackson et al. (2023) introduced animated items to assess the language learning process, while McGonigle-Chalmers et al. (2013) used animations to represent syntactically correct sentences that children created with pictures.

4.1.6. Psychometric Properties

Formal tasks or tests must demonstrate validity, reliability, and diagnostic accuracy (Dollaghan, 2008; Glascoe & Cairney, 2018). Several studies support some or all of these psychometric properties, as well as fair testing, showing that animated items can effectively contribute to assessment (Puolakanaho et al., 2004; Bishop et al., 2009; Polišenská & Kapalková, 2014; Hodges et al., 2017; Badcock et al., 2018; Frizelle et al., 2018b; De Villiers et al., 2021; Krok et al., 2022; Pace et al., 2022; Jackson et al., 2023; Lloyd-Esenkaya et al., 2024).

4.2. Limitations and Gaps of Knowledge

The primary limitation of this review is the scarcity of studies that directly investigate animation as the main objective for assessment purposes. Most studies used animations to develop tasks or items for specific research purposes, without examining animation as a variable. Moreover, the use of varying animation styles (e.g., levels of realism, design methods) across studies hinders comparisons between research groups, making it difficult to draw reliable conclusions about best practices for using animation in the assessment of children. Furthermore, limited information on the technical features of animations, as also noted by Schlosser et al. (2022), creates a significant knowledge gap. Finally, a key limitation identified during data charting is the lack of reporting on research design in most studies, which calls for cautious interpretation.

4.3. Future Research

Future research could expand on the areas identified (Section 4.2) to strengthen the evidence for using animation in children’s speech–language pathology assessments. More studies comparing animation with static images in an equivalent manner are needed to establish a solid foundation for answering the fundamental question of why animation might be better than static pictures in clinical practice. Research should also explore how animated tasks affect children, considering their developmental milestones, clinical profiles, and visual perception skills. Additionally, more evidence is needed on the cognitive impact of animation during assessment. Finally, the fields of speech and communication present further opportunities for exploring animation in evaluation.

5. Conclusions

This scoping review examined the use of animation in assessing children with speech, language, and communication impairments since 2000, addressing specific research questions. Although the data synthesis revealed no studies with a theoretical foundation on children’s perception and processing of animation, several studies provided research evidence supporting its application for assessment purposes. Animation was introduced for diverse purposes, with varying technical backgrounds, to meet different research goals. Research primarily involved TD children and fewer clinical samples with DLD, ASD, Down syndrome, cognitive impairments, and children at risk for dyslexia, focusing on the assessment of various language aspects.
The variety of animation features and presentation modes precludes conclusions regarding best practices in animation use for assessment in speech–language pathology. Consequently, this review contributes to the establishment of an initial evidence base, documenting research approaches, the correlation between animation and language/cognition, observed behaviors, clinical sample performance, and the psychometric properties of assessment tools. While some areas are well supported, others require further investigation. Importantly, however, there is no evidence to suggest that the above animation tasks pose any harm to children during assessment.
In conclusion, animation, as a modern technological tool, may have the potential to enhance traditional assessment methods and better support the needs of specific populations if research evidence substantiates its use.

Author Contributions

Conceptualization, T.I.V. and V.C.G.; formal analysis, T.I.V. and V.C.G.; resources, T.I.V., V.C.G., M.K. and A.T.; data curation, T.I.V. and V.C.G.; writing—original draft preparation, T.I.V., M.K., A.T., and V.C.G.; writing—review and editing T.I.V., M.K., A.T., and V.C.G.; visualization, T.I.V. and V.C.G.; supervision, V.C.G. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Research Council of the University of Patras for article processing charges (APC) only. This research received no additional funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created.

Acknowledgments

The publication fees of this manuscript have been financed by the Research Council of the University of Patras.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Abbreviation | Definition
SLP | Speech–language pathologist
AAC | Augmentative and Alternative Communication
TD | Typically developing
ASD | Autism spectrum disorder
PRISMA-ScR | Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for Scoping Reviews
PPC | Population, Concept, and Context
ToM | Theory of Mind
fTCD | Functional transcranial Doppler ultrasonography
SLI | Specific language impairment
DS | Down syndrome
DLD | Developmental language disorder
QUILS | Quick Interactive Language Screener
QUILS: ES | Quick Interactive Language Screener: English–Spanish
QUILS: TOD | Quick Interactive Language Screener: Toddlers
EEG | Electroencephalography
N/A | Not applicable

Appendix A

The completed checklist based on the Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for scoping reviews protocol (PRISMA-ScR) is presented in Table A1 (Tricco et al., 2018).
Table A1. PRISMA-ScR checklist.
Section | Item | PRISMA-ScR checklist item | Reported on page
TITLE
 Title | 1 | Identify the report as a scoping review. | 1
ABSTRACT
 Structured summary | 2 | Provide a structured summary that includes (as applicable): background, objectives, eligibility criteria, sources of evidence, charting methods, results, and conclusions that relate to the review questions and objectives. | 1
INTRODUCTION
 Rationale | 3 | Describe the rationale for the review in the context of what is already known. Explain why the review questions/objectives lend themselves to a scoping review approach. | 1, 2, 3
METHODS
 Objectives | 4 | Provide an explicit statement of the questions and objectives being addressed with reference to their key elements (e.g., population or participants, concepts, and context) or other relevant key elements used to conceptualize the review questions and/or objectives. | 4
 Protocol and registration | 5 | Indicate whether a review protocol exists; state if and where it can be accessed (e.g., a Web address); and if available, provide registration information, including the registration number. | N/A
 Eligibility criteria | 6 | Specify characteristics of the sources of evidence used as eligibility criteria (e.g., years considered, language, and publication status) and provide a rationale. | 5 and Table 1
 Information sources | 7 | Describe all information sources in the search (e.g., databases with dates of coverage and contact with authors to identify additional sources), as well as the date the most recent search was executed. | 5
 Search | 8 | Present the full electronic search strategy for at least 1 database, including any limits used, such that it could be repeated. | 5 and Table 2
 Selection of sources of evidence process | 9 | State the process for selecting sources of evidence (i.e., screening and eligibility) included in the scoping review. | 5 and Table 1
 Selection of sources of evidence | 10 | Give numbers of sources of evidence screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally using a flow diagram. | 5, 6, and Figure 1
 Data charting process | 11 | Describe the methods of charting data from the included sources of evidence (e.g., calibrated forms or forms that have been tested by the team before their use, and whether data charting was performed independently or in duplicate) and any processes for obtaining and confirming data from investigators. | 7
 Data items | 12 | List and define all variables for which data were sought and any assumptions and simplifications made. | 7
 Critical appraisal of individual sources of evidence | 13 | If performed, provide a rationale for conducting a critical appraisal of included sources of evidence; describe the methods used and how this information was used in any data synthesis (if appropriate). | N/A
 Synthesis of results | 14 | Describe the methods of handling and summarizing the data that were charted. | Table 3 and Table 4
RESULTS
 Characteristics of sources of evidence | 15 | For each source of evidence, present characteristics for which data were charted and provide the citations. | 15 to 18
 Critical appraisal within sources of evidence | 16 | If performed, present data on critical appraisal of included sources of evidence (see item 12). | N/A
 Results of individual sources of evidence | 17 | For each included source of evidence, present the relevant data that were charted that relate to the review questions and objectives. | 15 to 18
DISCUSSION
 Synthesis of results | 18 | Summarize and/or present the charting results as they relate to the review questions and objectives. | 19, 20
 Summary of evidence | 19 | Summarize the main results (including an overview of concepts, themes, and types of evidence available), link to the review questions and objectives, and consider the relevance to key groups. | 20 to 22
 Limitations | 20 | Discuss the limitations of the scoping review process. | 22
 Conclusions | 21 | Provide a general interpretation of the results with respect to the review questions and objectives, as well as potential implications and/or next steps. | 23
FUNDING
 Funding | 22 | Describe sources of funding for the included sources of evidence, as well as sources of funding for the scoping review. Describe the role of the funders of the scoping review. | N/A
Note: The abbreviation N/A means not applicable.

References

  1. Alt, M. (2013). Visual fast mapping in school-aged children with specific language impairment. Topics in Language Disorders, 33(4), 328–346. [Google Scholar] [CrossRef]
  2. Badcock, N. A., Spooner, R., Hofmann, J., Flitton, A., Elliott, S., Kurylowicz, L., Lavrencic, L. M., Payne, H. P., Holt, G. H., Holden, A., Churches, O. F., Kohler, M. J., & Keage, H. A. (2018). What box: A task for assessing language lateralization in young children. Laterality: Asymmetries of Body, Brain and Cognition, 23(4), 391–408. [Google Scholar] [CrossRef]
  3. Berney, S., & Bétrancourt, M. (2016). Does animation enhance learning? A meta-analysis. Computers & Education, 101, 150–167. [Google Scholar] [CrossRef]
  4. Betrancourt, M., & Tversky, B. (2000). Effect of computer animation on users’ performance: A review (Effet de l’animation sur les performances des utilisateurs: Une sythèse). Le Travail Humain, 63(4), 311–329. Available online: https://www.proquest.com/scholarly-journals/effect-computer-animation-on-users-performance/docview/1311633511/se-2 (accessed on 20 March 2023).
  5. Bishop, D. V., Watt, H., & Papadatou-Pastou, M. (2009). An efficient and reliable method for measuring cerebral lateralization during speech with functional transcranial Doppler ultrasound. Neuropsychologia, 47(2), 587–590. [Google Scholar] [CrossRef]
  6. Brock, K. L., Zolkoske, J., Cummings, A., & Ogiela, D. A. (2022). The effects of symbol format and psycholinguistic features on receptive syntax outcomes of children without disability. Journal of Speech, Language, and Hearing Research, 65(12), 4741–4760. [Google Scholar] [CrossRef]
  7. Carter, E. J., Williams, D. L., Hodgins, J. K., & Lehman, J. F. (2014). Are children with autism more responsive to animated characters? A study of interactions with humans and human-controlled avatars. Journal of Autism and Developmental Disorders, 44, 2475–2485. [Google Scholar] [CrossRef] [PubMed]
  8. Choe, N., Shane, H., Schlosser, R. W., Haynes, C. W., & Allen, A. (2022). Directive-following based on graphic symbol sentences involving an animated verb symbol: An exploratory study. Communication Disorders Quarterly, 43(3), 143–151. [Google Scholar] [CrossRef]
  9. De Villiers, J., Iglesias, A., Golinkoff, R., Hirsh-Pasek, K., Wilson, M. S., & Nandakumar, R. (2021). Assessing dual language learners of Spanish and English: Development of the QUILS: ES. Revista de Logopedia, Foniatría y Audiología, 41(4), 183–196. [Google Scholar] [CrossRef]
  10. Diehm, A. E., Wood, C., Puhlman, J., & Callendar, M. (2020). Young children’s narrative retell in response to static and animated stories. International Journal of Language & Communication Disorders, 55(3), 359–372. [Google Scholar] [CrossRef]
  11. Dollaghan, A. C. (2008). The handbook for evidence-based practice in communication disorders. Paul H. Brookers Publishing Co. [Google Scholar]
  12. Downing, S. M. (2006). Twelve steps for effective test development. In S. M. Downing, & T. M. Haladyna (Eds.), Handbook of test development (pp. 3–25). Lawrence Erlbaum Associates, Inc. [Google Scholar] [CrossRef]
  13. Folksman, D., Fergus, P., Al-Jumeily, D., & Carter, C. (2013, December 16–18). A mobile multimedia application inspired by a spaced repetition algorithm for assistance with speech and language therapy. 2013 Sixth International Conference on Developments in Esystems Engineering (pp. 367–375), Abu Dhabi, United Arab Emirates. [Google Scholar] [CrossRef]
  14. Ford, J. A., & Milosky, L. M. (2008). Inference generation during discourse and its relation to social competence: An online investigation of abilities of children with and without language impairment. Journal of Speech, Language, and Hearing Research, 51(2), 367–380. [Google Scholar] [CrossRef]
  15. Frick, B., Boster, J. B., & Thompson, S. (2022). Animation in AAC: Previous research, a sample of current availability in the United States, and future research potential. Assistive Technology, 35, 302–311. [Google Scholar] [CrossRef]
  16. Frizelle, P., Thompson, P., Duta, M., & Bishop, D. V. (2018a). Assessing children’s understanding of complex syntax: A comparison of two methods. Language Learning, 69(2), 255–291. [Google Scholar] [CrossRef]
17. Frizelle, P., Thompson, P. A., Duta, M., & Bishop, D. V. (2018b). The understanding of complex syntax in children with Down syndrome. Wellcome Open Research, 3, 140. [Google Scholar] [CrossRef]
  18. Fujisawa, K., Inoue, T., Yamana, Y., & Hayashi, H. (2011). The effect of animation on learning action symbols by individuals with intellectual disabilities. Augmentative and Alternative Communication, 27(1), 53–60. [Google Scholar] [CrossRef] [PubMed]
  19. Gazella, J., & Stockman, I. J. (2003). Children’s story retelling under different modality and task conditions. American Journal of Speech-Language Pathology, 12, 61–72. [Google Scholar] [CrossRef] [PubMed]
  20. Glascoe, F. P., & Cairney, J. (2018). Best practices in test construction for developmental-behavioral measures: Quality standards for reviewers and researchers. In H. Needelman, & B. J. Jackson (Eds.), Follow-up for NICU graduates: Promoting positive developmental and behavioral outcomes for at-risk infants (pp. 255–279). Springer. [Google Scholar] [CrossRef]
  21. Golinkoff, R. M., De Villiers, J. G., Hirsh-Pasek, K., Iglesias, A., Wilson, M. S., Morini, G., & Brezack, N. (2017). User’s manual for the quick interactive language screener (QUILS): A measure of vocabulary, syntax, and language acquisition skills in young children. Paul H. Brookes Publishing Company. [Google Scholar]
  22. Harmon, A. C., Schlosser, R. W., Gygi, B., Shane, H. C., Kong, Y. Y., Book, L., Macduff, K., & Hearn, E. (2014). Effects of environmental sounds on the guessability of animated graphic symbols. Augmentative and Alternative Communication, 30(4), 298–313. [Google Scholar] [CrossRef]
  23. Hetzroni, O. E., & Tannous, J. (2004). Effects of a computer-based intervention program on the communicative functions of children with autism. Journal of Autism and Developmental Disorders, 34, 95–113. [Google Scholar] [CrossRef]
  24. Hodges, R., Munro, N., Baker, E., McGregor, K., & Heard, R. (2017). The monosyllable imitation test for toddlers: Influence of stimulus characteristics on imitation, compliance and diagnostic accuracy. International Journal of Language & Communication Disorders, 52(1), 30–45. [Google Scholar] [CrossRef]
  25. Horvath, S., McDermott, E., Reilly, K., & Arunachalam, S. (2018). Acquisition of verb meaning from syntactic distribution in preschoolers with autism spectrum disorder. Language, Speech, and Hearing Services in Schools, 49(3S), 668–680. [Google Scholar] [CrossRef]
  26. Horwitz, L., McCarthy, J. W., Roth, M. A., & Marinellie, S. A. (2014). The effects of an animated exemplar/nonexemplar program to teach the relational concept on to children with autism spectrum disorders and developmental delays who require AAC. Contemporary Issues in Communication Science and Disorders, 41, 83–95. [Google Scholar] [CrossRef]
  27. Höffler, T. N., & Leutner, D. (2007). Instructional animation versus static pictures: A meta-analysis. Learning and Instruction, 17(6), 722–738. [Google Scholar] [CrossRef]
  28. Iglesias, A., De Villiers, J., Golinkoff, R. M., Hirsh-Pasek, K., & Wilson, M. S. (2021). User’s manual for the quick interactive language screener–ES™ (QUILS–ES™): A measure of vocabulary, syntax, and language acquisition skills in young bilingual children (Version ES). Brookes Publishing Co. [Google Scholar]
  29. Jackson, E., Levine, D., de Villiers, J., Iglesias, A., Hirsh-Pasek, K., & Michnick Golinkoff, R. (2023). Assessing the language of 2 year-olds: From theory to practice. Infancy, 28(5), 930–957. [Google Scholar] [CrossRef]
  30. Jagaroo, V., & Wilkinson, K. (2008). Further considerations of visual cognitive neuroscience in aided AAC: The potential role of motion perception systems in maximizing design display. Augmentative and Alternative Communication, 24(1), 29–42. [Google Scholar] [CrossRef] [PubMed]
  31. Klop, D., & Engelbrecht, L. (2013). The effect of two different visual presentation modalities on the narratives of mainstream grade 3 children. South African Journal of Communication Disorders, 60(1), 21–26. [Google Scholar] [CrossRef]
  32. Krok, W., & Leonard, L. B. (2018). Verb variability and morphosyntactic priming with typically developing 2-and 3-year-olds. Journal of Speech, Language, and Hearing Research, 61(12), 2996–3009. [Google Scholar] [CrossRef]
  33. Krok, W., Norton, E. S., Buchheit, M. K., Harriott, E. M., Wakschlag, L., & Hadley, P. A. (2022). Using animated action scenes to remotely assess sentence diversity in toddlers. Topics in Language Disorders, 42(2), 156–172. [Google Scholar] [CrossRef]
  34. Lee, H., & Hong, K. H. (2013). The effect of animations on the AAC symbol recognition of children with intellectual disabilities. Special Education Research, 12(2), 185–202. [Google Scholar] [CrossRef]
  35. Lloyd-Esenkaya, V., Russell, A. J., & St Clair, M. C. (2024). Zoti’s Social Toolkit: Developing and piloting novel animated tasks to assess emotional understanding and conflict resolution skills in childhood. British Journal of Developmental Psychology, 42(2), 187–214. [Google Scholar] [CrossRef] [PubMed]
  36. Lowe, R., & Boucheix, J.-M. (2017). A composition approach to design of educational animations. In R. Lowe, & R. Ploetzner (Eds.), Learning from dynamic visualization: Innovations in research and application (pp. 5–30). Springer. Available online: https://link.springer.com/book/10.1007/978-3-319-56204-9 (accessed on 12 October 2022).
  37. Massaro, D. W., & Bosseler, A. (2006). Read my lips: The importance of the face in a computer-animated tutor for vocabulary learning by children with autism. Autism, 10(5), 495–510. [Google Scholar] [CrossRef] [PubMed]
  38. Massaro, D. W., & Light, J. (2004). Improving the vocabulary of children with hearing loss. Work, 831, 459–2330. [Google Scholar]
  39. Mayer, R. E. (2001). Multimedia learning. Cambridge University Press. [Google Scholar]
  40. Mayer, R. E., & Moreno, R. (2002). Animation as an aid to multimedia learning. Educational Psychology Review, 14, 87–99. [Google Scholar] [CrossRef]
  41. McCarthy, J. W., & Boster, J. B. (2018). A comparison of the performance of 2.5 to 3.5-year-old children without disabilities using animated and cursor-based scanning in a contextual scene. Assistive Technology, 30(4), 183–190. [Google Scholar] [CrossRef]
  42. McGonigle, B., & Chalmers, M. (2002). A behavior-based fractionation of cognitive competence with clinical applications: A comparative approach. International Journal of Comparative Psychology, 15, 154–173. [Google Scholar] [CrossRef]
  43. McGonigle-Chalmers, M., Alderson-Day, B., Fleming, J., & Monsen, K. (2013). Profound expressive language impairment in low functioning children with autism: An investigation of syntactic awareness using a computerised learning task. Journal of Autism and Developmental Disorders, 43, 2062–2081. [Google Scholar] [CrossRef]
  44. Mineo, B. A., Peischl, D., & Pennington, C. (2008). Moving targets: The effect of animation on identification of action word representations. Augmentative and Alternative Communication, 24(2), 162–173. [Google Scholar] [CrossRef]
  45. Mulholland, R., Pete, A. M., & Popeson, J. (2008). Using animated language software with children diagnosed with autism spectrum disorders. Teaching Exceptional Children Plus, 4(6), 1–9. [Google Scholar]
  46. Munn, Z., Peters, M. D., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18, 143. [Google Scholar] [CrossRef] [PubMed]
  47. Nealis, P. M., Harlow, H. F., & Suomi, S. J. (1977). The effects of stimulus movement on discrimination learning by rhesus monkeys. Bulletin of the Psychonomic Society, 10(3), 161–164. [Google Scholar] [CrossRef]
  48. Pace, A., Curran, M., Van Horne, A. O., de Villiers, J., Iglesias, A., Golinkoff, R. M., Wilson, M. S., & Hirsh-Pasek, K. (2022). Classification accuracy of the quick interactive language screener for preschool children with and without developmental language disorder. Journal of Communication Disorders, 100, 106276. [Google Scholar] [CrossRef] [PubMed]
  49. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. International Journal of Surgery, 88, 105906. [Google Scholar] [CrossRef]
  50. Petit, S., Badcock, N. A., Grootswagers, T., Rich, A. N., Brock, J., Nickels, L., Moerel, D., Dermody, N., Yau, S., Schmidt, E., & Woolgar, A. (2020a). Toward an individualized neural assessment of receptive language in children. Journal of Speech, Language, and Hearing Research, 63(7), 2361–2385. [Google Scholar] [CrossRef]
  51. Petit, S., Badcock, N. A., Grootswagers, T., & Woolgar, A. (2020b). Unconstrained multivariate EEG decoding can help detect lexical-semantic processing in individual children. Scientific Reports, 10(1), 10849. [Google Scholar] [CrossRef]
  52. Ploetzner, R., Berney, S., & Bétrancourt, M. (2020). A review of learning demands in instructional animations: The educational effectiveness of animations unfolds if the features of change need to be learned. Journal of Computer Assisted Learning, 36(6), 838–860. [Google Scholar] [CrossRef]
  53. Polišenská, K., & Kapalková, S. (2014). Improving child compliance on a computer-administered nonword repetition task. Journal of Speech, Language, and Hearing Research, 57(3), 1060–1068. [Google Scholar] [CrossRef]
  54. Puolakanaho, A., Poikkeus, A. M., Ahonen, T., Tolvanen, A., & Lyytinen, H. (2003). Assessment of three-and-a-half-year-old children’s emerging phonological awareness in a computer animation context. Journal of Learning Disabilities, 36(5), 416–423. [Google Scholar] [CrossRef] [PubMed]
  55. Puolakanaho, A., Poikkeus, A. M., Ahonen, T., Tolvanen, A., & Lyytinen, H. (2004). Emerging phonological awareness differentiates children with and without familial risk for dyslexia after controlling for general language skills. Annals of Dyslexia, 54, 221–243. [Google Scholar] [CrossRef] [PubMed]
  56. Roid, G. H. (2006). Designing ability tests. In S. M. Downing, & T. M. Haladyna (Eds.), Handbook of test development (pp. 527–542). Lawrence Erlbaum Associates, Inc. [Google Scholar] [CrossRef]
  57. Schlosser, R. W., Brock, K. L., Koul, R., Shane, H., & Flynn, S. (2019). Does animation facilitate understanding of graphic symbols representing verbs in children with autism spectrum disorder? Journal of Speech, Language, and Hearing Research, 62(4), 965–978. [Google Scholar] [CrossRef]
  58. Schlosser, R. W., Choe, N., Koul, R., Shane, H. C., Yu, C., & Wu, M. (2022). Roles of animation in augmentative and alternative communication: A scoping review. Current Developmental Disorders Reports, 9(4), 187–203. [Google Scholar] [CrossRef]
  59. Schlosser, R. W., Koul, R., Shane, H., Sorce, J., Brock, K., Harmon, A., Moerlein, D., & Hearn, E. (2014). Effects of animation on naming and identification across two graphic symbol sets representing verbs and prepositions. Journal of Speech, Language, and Hearing Research, 57(5), 1779–1791. [Google Scholar] [CrossRef]
  60. Schnotz, W., & Lowe, R. K. (2008). A unified view of learning from animated and static graphics. In R. K. Lowe, & W. Schnotz (Eds.), Learning with animation: Research implications for design (pp. 304–356). Cambridge University Press. Available online: https://www.researchgate.net/publication/303206444_A_Unified_View_of_Learning_from_Animated_and_Static_Graphics (accessed on 6 October 2022).
  61. So, W. C., Wong, M. Y., Cabibihan, J. J., Lam, C. Y., Chan, R. Y., & Qian, H. H. (2016). Using robot animation to promote gestural skills in children with autism spectrum disorders. Journal of Computer Assisted Learning, 32(6), 632–646. [Google Scholar] [CrossRef]
62. Tommy, C. A., & Minoi, J. L. (2016, December 4–8). Speech therapy mobile application for speech and language impairment children. 2016 IEEE EMBS Conference on Biomedical Engineering and Sciences (IECBES) (pp. 199–203), Kuala Lumpur, Malaysia. [Google Scholar] [CrossRef]
  63. Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D. J., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., … Straus, S. E. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473. [Google Scholar] [CrossRef] [PubMed]
  64. Tversky, B., Morrison, J. B., & Betrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57(4), 247–262. [Google Scholar] [CrossRef]
  65. Vlachou, T. I., Kambanaros, M., Plotas, P., & Georgopoulos, V. C. (2024). Evidence of language development using brief animated stimuli: A systematic review. Brain Sciences, 14(2), 150. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PRISMA flow diagram of selection process.
Table 1. Inclusion and exclusion criteria.
Criterion | Description
Population | Children aged 0 to 18 years, regardless of typical or atypical development, race, gender, language, or socioeconomic status
Concept | Use of animation in the development of an assessment game, task, or test for children
Context | Formal and informal assessments, at any stage of the research process, aimed at evaluating speech, language, communication, or related cognitive skills
Administration manner/Settings | Face-to-face or remote video platforms/Home, clinic, school, etc.
Study designs | Quantitative research: experimental and quasi-experimental studies, developmental studies, and psychometric and standardization studies
Limits | Papers in the English language published since 2000, without geographical restrictions
Exclusion criteria | Studies exploring Theory of Mind (ToM), first and second language instruction, television animation viewing, eye-tracking, virtual reality, and avatars; studies that used animations without a clear assessment purpose; qualitative research, opinion reports, and gray literature
Table 2. Search in PubMed database.
Table 3. Variables explored in scoping review.
Study Name/Description | Study Design | Research Objectives | Participants | Areas of Assessment | Target Population | Purpose of Introducing Animation | Results
Gazella and Stockman (2003): A story with animated puppets | Between-groups design | (a) To examine the possibility of standardizing a story-retelling task. (b) To determine whether the story presentation modality differentially influenced the children's performance. | 29 TD children (4;2 to 5;6 years) | Language—grammatical and lexical skills | Language-delayed children | To provide a more realistic representation of the characters and actions, enhancing story comprehension. | The audio-only group did not differ significantly from the audiovisual group in narratives or responses to direct questions.
Puolakanaho et al. (2004): Heps-Kups Land—computer animation assessment program | Longitudinal developmental design | To replicate previous group comparisons in a well-controlled and large sample of children with and without risk for dyslexia. | 98 children at risk for dyslexia and 91 TD children (3.5 years) | Language development—phonological awareness | Children with reading difficulties and/or dyslexia | To motivate children, revealing their phonological awareness skills. | (a) The control group manifested higher mastery than the at-risk group in phonological awareness. (b) Phonological awareness can be assessed long before reading instruction. (c) The tasks can be improved psychometrically.
Bishop et al. (2009): Animation description task | Within-group design | To compare the sensitivity of different fTCD paradigms and to evaluate the novel task. | 21 TD children (4 years), 33 adults | Cerebral lateralization during spoken language generation | Young children with neurological conditions or non-clinical research samples | To engage the children. | The task appeared as valid and reliable as the other methods.
McGonigle-Chalmers et al. (2013): The Eventaurs game | Single-case experimental design | To answer the following question: to what extent are nonverbal children with autism capable of overcoming the inherent executive demands of phrase construction via language-like production? | 9 low-functioning children with profound expressive language impairment and autism (5 to 17;6 years) | Syntactic awareness | Children with ASD | To depict an event as the result of a syntactically correct response in an engaging and motivating way that draws the attention of children. | (a) No child lacked syntactic awareness. (b) Elementary syntactic control in a non-speech domain was superior to that manifested in their spoken language.
Alt (2013): Two computer games with animated dinosaurs | Mixed (between-groups and within-group design) | To examine whether children with SLI have impaired visual fast mapping skills and to identify components of visual working memory contributing to the deficit. | 25 children with SLI and 25 TD children (7 and 8 years) | Visual fast mapping skills and visual working memory | Children with SLI | The visual features of the animated stimuli had to be identified by the children. | (a) There was evidence for impaired visual working memory skills for children with SLI, but not in all conditions. (b) There was no evidence that children with SLI were more susceptible to high-complexity information. (c) There was no evidence that children with SLI had limited capacity for visual memory.
Klop and Engelbrecht (2013): Animated video story | Quantitative, comparative, between-subjects paradigm | To investigate whether a soundless animated video would elicit better narratives than a wordless picture book. | 20 TD children (8;5 to 9;4 years) | Narrative—story generation | Children with language impairments | To be compared with static pictures. | Both visual presentation modalities elicited narratives of similar quantity and quality.
Polišenská and Kapalková (2014): Nonword Repetition Task | Cross-sectional developmental design | To develop a language assessment using consistent recorded nonword stimuli; to establish task validity and reliability; to examine the influence of environmental factors. | 391 TD children (2 to 6 years) | Language skills and phonological processing | Children with language deficits | To increase compliance, participation, and engagement. | The task offers objective delivery of recorded stimuli, engages young children, provides high compliance rates, and shows high levels of reliability.
Hodges et al. (2017): The Monosyllable Imitation Test for Toddlers | Mixed (between-groups and within-group design) | To investigate whether the stimulus characteristics have independent and/or convergent influences on imitation accuracy; to examine non-compliance rates and diagnostic accuracy. | 21 typically developing (TD) children and 21 children identified as late talkers (25 to 35 months) | Verbal imitation abilities, language development, and speech production | Toddlers with speech production difficulties | To provide reasons for imitation in an engaging and pragmatically motivating context. | (a) Stimulus characteristics influenced imitation accuracy. (b) Good compliance rates. (c) Reasonable diagnostic accuracy.
Badcock et al. (2018): What Box task | Within-group design | To report a new method for eliciting language production for use with fTCD. | 95 TD children (1 to 5 years), 65 adults (60 to 85 years) | Language lateralization | Populations in which complex instructions are problematic (according to the authors) | To engage children and to elicit covert or overt language. | The task was successfully employed in children using fTCD and showed medium correspondence with the Word Generation task.
Frizelle et al. (2018a): Animated sentence-verification task | Within-group design | To examine the effect of two assessment methods. | 103 TD children (3;6 to 4;11 years) | Language development and comprehension of complex syntax | Children with receptive syntax deficits | To present the items in a pragmatically appropriate context that reflects how language is processed in natural discourse, allowing real-time processing; to reduce memory demands and the interpretation of adult-created pictures. | (a) Better performance on the animated sentence-verification task. (b) Each testing method revealed a different hierarchy of constructions. (c) The impact of testing method was greater for some syntactic constructions than others.
Frizelle et al. (2018b): Test of Complex Syntax–Electronic | Between-groups design | To assess how children with DS understand complex syntax. | 33 children with DS, 32 children with cognitive impairment, and 33 TD children (groups matched on non-verbal mental age) | Language—understanding of complex syntax | Children with DS | To minimize non-linguistic demands. | (a) Children with DS performed "more poorly" on most of the sentences than both control groups. (b) DS status accounted for a significant proportion of the variance over and above memory skills. (c) TECS-E needs to be normed and standardized.
Diehm et al. (2020): Animated stories | Within-group design | To investigate the effect of story presentation format (static picture book vs. animated video). | 73 TD children (3 to 5 years) | Language development—narrative retells | Children with language disorders | To prompt narrative elicitation. | Children performed significantly better when retelling the animated stories.
Petit et al. (2020b): Congruent and incongruent animations | Within-group design | To assess the reliability of neural signals in response to semantic violations. | 20 neurotypical children (9 to 12 years) | Language comprehension/lexical–semantic processing | Minimally verbal children with autism | To increase children's engagement and to build up strong semantic contexts. | Children exhibited heterogeneous neural responses.
De Villiers et al. (2021): Quick Interactive Language Screener: English–Spanish | Between-groups design | To discuss theoretical and methodological problems in test development; to offer a process for developing, validating, and norming. | 362 TD children, dual-language learners (3 to 5;11 years) | Language comprehension and language processing in both languages | Bilingual children who are at risk for language difficulties | To provide a more precise depiction of event sequences and actions that may be challenging for young children to glean from still pictures. | The screener is a viable option for fair testing of bilingual Spanish–English children.
Krok et al. (2022): Sentence Diversity Priming Task | Within-group design | To provide the rationale for assessing sentence diversity, describe the task, and present preliminary analyses of compliance and developmental associations. | 32 TD toddlers (30 to 35 months) | Sentence diversity | Children at risk for DLD | To simulate a familiar and enjoyable parent–child interaction context; to depict actions familiar to toddlers. | The task holds toddlers' attention, reveals robust individual differences in their ability to produce sentences, is positively correlated with parent-reported language measures, and has the potential for assessing children's language growth over time.
Pace et al. (2022): Quick Interactive Language Screener | Between-groups design | To examine the classification accuracy. | Study 1: 67 children—54 with DLD and 13 TD (3;0 to 6;9 years); Study 2: 126 children—25 with DLD and 101 TD (3;1 to 5;11 years) | Language comprehension and language processing | Children with DLD | To present various syntactic structures. | Findings support the clinical application of the QUILS for identifying DLD.
Jackson et al. (2023): Quick Interactive Language Screener: Toddlers | Between-groups design | To develop a behavioral measure of children's language capabilities at age two. | Phase I (pilot): 174 children (22;7 to 39;1 months); Phase II: 252 children (2;0 to 2;11 years); Phase III: 448 children (2;0 to 2;11 years) | Language comprehension and language processing | Children at risk for language impairment | To portray verbs and events so that children need to make fewer inferences than with static pictures. | The screener could support research studies and facilitate the early detection of language problems.
Lloyd-Esenkaya et al. (2024): Zoti's Social Toolkit | Within-group design | To develop a measure that assesses emotional understanding and conflict resolution with minimal reliance on language skills; to investigate the face and construct validity of the measure. | Study 1: 91 TD children (9 to 11 years); Study 2: 5 TD children (7 to 8 years); Study 3: 20 TD adults (18 to 25 years); Study 4: 20 TD and 22 DLD children (7 to 9 years) | Emotion inferencing and conflict resolution knowledge | Children with DLD | To ensure participants can fulfill the requirements of the task even if they have a language disorder. | The final toolkit is suitable for children with and without a language disorder.
Note: TD, typically developing; fTCD, functional transcranial Doppler ultrasonography; ASD, autism spectrum disorder; SLI, specific language impairment; DS, Down syndrome; DLD, developmental language disorder.
Table 4. Features of animations.
Study Name/Description | Original or Commercially Available | Technical Description | Speed of Frames | Duration | Examples of Animation | Auditory Input | Theoretical or Research Basis for the Use of Animation
Gazella and Stockman (2003): A story with animated puppets | Original | Video-based animation with puppets | No report; smooth transitions between scenes | No report | No | Yes | The authors relied on the relevant literature on video-based story presentations
Puolakanaho et al. (2004): Heps-Kups Land—computer animation assessment program | Original | Spontaneous-speech-like game | No report | No report | No | Yes | The task types presented in the literature were modified and embedded within a computer animation context
Bishop et al. (2009): Animation description task | Original | Animated cartoon clips | No report | 30 clips of 12 s each | No (available on request) | Yes (sounds, no speech) | Previous pilot studies were reported
McGonigle-Chalmers et al. (2013): The Eventaurs game | Original | 3D-colored animations using Macromedia Flash 5.3 | No report | No report | Yes (figures) | Yes | The game is based on a previous study (McGonigle & Chalmers, 2002)
Alt (2013): Two computer games with animated dinosaurs | Original | Short animated scenes using Adobe Flash software | No report | 10 vignettes of 10 s each | Yes (figures) | No | No report
Klop and Engelbrecht (2013): Animated video story | Original | A simulation of the compared color picture book; animation effects depicted character movements, facial speech movements, and fading between pictures; no distracting backgrounds | No report | Total time 2 min; one picture at a time | Yes (pictures) | No | The authors relied on the literature comparing animated versus static picture presentations
Polišenská and Kapalková (2014): Nonword Repetition Task | Original | A color necklace simulation in which the beads were introduced through PowerPoint animation effects; the task was embedded in a short story | No report | 1.33 min | Yes (video) | Yes | No report
Hodges et al. (2017): The Monosyllable Imitation Test for Toddlers | Original | Two computer-animated stories created using the iPad applications PhotoPuppet HD and iMovie | No report | 2.24 min per episode | No | Yes | No report
Badcock et al. (2018): What Box task | Original | Animation of a face "searching" for an object; created with a series of still-frame images | A detailed schematic diagram of the task time periods and animation frames is provided | Total task time 34 s | Yes (figure) | Yes | The task was based on previous literature on fTCD methods
Frizelle et al. (2018a): Animated sentence-verification task | Original | No report | No report | 6 s average | Yes (video) | Yes | The authors relied on previous literature on truth-value-judgment/sentence-verification assessments
Frizelle et al. (2018b): Test of Complex Syntax–Electronic | Original | No report | No report | 6 s average | Yes (video) | Yes | The study was based on Frizelle et al. (2018a)
Diehm et al. (2020): Animated stories | Available online | Video-based animation | No report | 2.15 to 2.35 min | No | Yes | The authors relied on previous literature regarding multimedia features
Petit et al. (2020b): Congruent and incongruent animations | Original | Short, colorful animated cartoons developed using Adobe Photoshop CC 2017 | Yes | 3 s minimum | Yes (figures and videos) | Yes | The task was based on the prior study of Petit et al. (2020a)
De Villiers et al. (2021): Quick Interactive Language Screener: English–Spanish | Original | No report | No report | No report | Yes (figures) | Yes | No report
Krok et al. (2022): Sentence Diversity Priming Task | Original | Simulation of a colorful animated picture book with minimal background distractions; developed with online animation tools and presented in Microsoft PowerPoint | No report | 10 s | Yes (figures) | No | The task was based on the prior study of Krok and Leonard (2018)
Pace et al. (2022): Quick Interactive Language Screener | Original | No report | No report | No report | Yes (figures) | Yes | No report
Jackson et al. (2023): Quick Interactive Language Screener: Toddlers | Original | A game on a touchscreen | No report | No report | Yes (figures) | Yes | The authors relied on literature concerning animation use, developmental milestones, and modes of introduction
Lloyd-Esenkaya et al. (2024): Zoti's Social Toolkit | Original | Colored animated scenarios with characters lacking facial features or expressions; stop-motion technique: for every animation, several digitally drawn image frames are joined together using animation software | No report | ~11 s | Yes (figures and videos) | Yes | The authors relied on the study of Ford and Milosky (2008)