Review

Pedagogical Design of K-12 Artificial Intelligence Education: A Systematic Review

1 Department of Curriculum and Instruction, The Chinese University of Hong Kong, Hong Kong, China
2 Centre for Learning Sciences and Technologies, The Chinese University of Hong Kong, Hong Kong, China
* Authors to whom correspondence should be addressed.
Sustainability 2022, 14(23), 15620; https://doi.org/10.3390/su142315620
Submission received: 30 September 2022 / Revised: 8 November 2022 / Accepted: 11 November 2022 / Published: 24 November 2022
(This article belongs to the Special Issue Educational Intelligence and Emerging Educational Technology)

Abstract: In response to the growing popularity of artificial intelligence (AI) usage in daily life, AI education is increasingly being provided at the K-12 level, with relevant initiatives being launched worldwide. Examining how these programs have been implemented and summarizing useful experiences is thus imperative. Although prior reviews have described the characteristics of AI education programs in publications, the papers reviewed were mostly nonempirical reports, and the analysis typically only involved a descriptive summary. The current review focuses on the most recent empirical studies on AI teaching programs in K-12 contexts through a systematic search of the Web of Science database from 2010 to 2022. To provide a comprehensive overview of the status of AI teaching and learning (T&L), 32 empirical studies were analyzed both descriptively and thematically. We analyzed (1) the research status, (2) the pedagogical design, and (3) the assessments and outcomes of the AI teaching programs. An increasing number of studies have focused on AI education at the K-12 stage, but most of them have a small sample size. Moreover, the data were mostly collected through interviews and self-reports. We reviewed the pedagogical design of AI teaching programs by using Gerlach and Ely’s pedagogical design model. The results comprehensively delineated current AI teaching programs through the following dimensions: learning theory, pedagogical approach, T&L activities, learning content, scale, teaching resources, prior knowledge prerequisites, aims and objectives, assessment, and learning outcomes. The results highlighted the positive impact of current AI teaching programs on students’ motivation, engagement, and attitude. However, we observed a lack of sufficient research objectively measuring students’ knowledge acquisition as learning outcomes. Overall, in this paper, we discussed relevant findings in terms of research trends, learning content, teaching units, characteristics of the pedagogical design, and assessment and evaluation by providing illustrations of exemplary designs; we also discussed future directions for research and practice in AI education in the K-12 context.

1. Introduction

Because of the widespread adoption of artificial intelligence (AI) technology, the world is undergoing an unprecedented technological change. From its emergence in the computer science field, AI has spread across diverse fields (e.g., engineering, business, art, and science), eventually affecting many facets of human life. The application of AI (as observed, for example, in smart home appliances, cloud services, smartphones, Google-enhanced smart speakers, and devices equipped with Siri) enhances user experience, improves working efficiency, and increases the convenience of various tasks. For effective functioning in the information era, people must develop AI literacy through the acquisition of new skills [1]. The Organisation for Economic Co-operation and Development (OECD) released a report, Trustworthy Artificial Intelligence (AI) in Education: Promises and Challenges, highlighting the importance of equipping students with new skillsets to enable them to thrive in increasingly automated economies and societies [2]. AI can thus be considered an essential technological literacy for the 21st century, expanding the list of classic literacies such as digital literacy, data literacy, and information literacy [3]. Having AI literacy may encourage more students to consider AI careers and provide solid preparation for higher education and their future careers. To empower students with AI literacy, an AI education ecosystem that covers all educational stages, not only the graduate and undergraduate levels, should be established [4,5]; that is, more focus should be placed on the K-12 context. In 2019, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) encouraged the exploration of the curriculum and standard dimensions of AI in K-12 education to elucidate how learners and teachers are preparing for an AI-powered world [6]. Accordingly, practical movements to integrate AI into K-12 education have been observed in various countries in recent years (e.g., the United States, the United Kingdom, Finland, China, Australia, and South Korea). Moreover, in academia, the discussion on AI education has steadily shifted from higher education to the K-12 context as well. An increasing number of studies have explored the potential of incorporating AI learning into K-12 education through playful experiences and approachable content to prepare children for an AI-saturated world and future AI-oriented workforces [7,8].
Initiatives to popularize a basic understanding of AI technologies in K-12 have been emphasized both in practice and in theory. However, a systematic analysis of the approaches used to equip students with knowledge of AI technology in K-12 classrooms is lacking. The aims of this paper are to characterize, compare, and synthesize the design and implementation of AI courses on the basis of current research. This paper provides an overview of pertinent constructs in AI teaching and learning (T&L) and identifies potential gaps and opportunities for future research.

1.1. Interdisciplinary Nature of K-12 AI Education

AI education evolved from the field of computational science at the college level. Initially, computer scientists designed and developed AI technology and integrated it into computer science classrooms. Because the development of AI technology has substantially affected people’s lives, AI education has been extended to the K-12 context and discussed as a specialized curriculum. Furthermore, AI education is an area spanning diverse disciplines, especially technology education and engineering education.
In computer science, AI is defined as any human-like intelligence exhibited by a computer, robot, or other machines; that is, AI refers to the ability of a computer or machine to imitate the capabilities of the human mind—learning from experience and examples, recognizing objects, understanding and responding to language, making decisions, solving problems, and combining these capabilities to perform functions typically attributed to humans [3]. AI knowledge originally centered on topics such as algorithms, coding, and programming in computer science courses; AI education was then gradually introduced in higher education and later also in basic education. In primary and secondary schools, AI, which was initially covered as a relevant part of computational thinking (CT) curricula, was included in science, technology, engineering, the arts, and mathematics (commonly referred to as STEAM) education [9]. CT curricula aim to cultivate young students’ competencies in solving problems through the use of programming, with the intention of preparing them for their subsequent tertiary studies and their future careers in computer science. The increasing application of AI technology powered by programming highlights the importance of introducing fundamental AI knowledge and working principles to students throughout K-12.
The field of technology and engineering education, in which constructivist approaches to T&L are popular, is closely related to AI education. The constructivist learning theory, commonly adopted in technology and engineering education, is based on the developmental theories of Piaget [10], with further elaboration by Vygotsky [11]. According to the cognitive constructivist theory, students construct the meaning of knowledge from experience [10], and Vygotsky emphasized the key role of sociocultural factors in students’ constructive learning processes [11]. Pedagogical approaches founded on constructivist theories prioritize active participation and deep learning through inquiry-based, project-based, problem-based, and discovery-based activities [12]. In K-12 technology and engineering education, students are guided to establish connections between their experience and new knowledge in the sociocultural environment, thus promoting conceptual construction when students encounter new technological or engineering concepts that conflict with their prior knowledge [13]. That is, students are encouraged to negotiate with their experience and interact with the learning community through constructivist activities to shape their conceptual frameworks and improve their literacy [14,15]. In technology and engineering classes, students are often required to complete a final product following a lesson, for example, designing artifacts or step-by-step procedures for specific tasks.
Because of its interdisciplinary nature, K-12 AI education can be regarded as emerging from computer science education, with the integration of technology education, engineering education, and knowledge from other fields. The fact that AI is a mixture of various disciplines brings about the challenge of scoping AI in the K-12 context, and consensus has yet to be reached on the specific content of K-12 AI education. At the undergraduate and higher levels, traditional AI education focuses on teaching algorithms and their background, but for the K-12 audience, the boundaries of AI education seem to be broader. K-12 AI education emphasizes not only the technical functioning of computers but also the social construction process of the technology, drawing on knowledge from technology, engineering, science, and even the humanities and sociology [4,5,6,7].

1.2. Existing Reviews of K-12 AI Education

Although AI teaching initiatives in K-12 date back to the 1970s [16], AI teaching has grown tremendously in popularity in the past few years [17]. UNESCO organized the Workshop on Teaching and Learning Competencies for Artificial Intelligence (AI) from an Information Access Perspective to examine the elements necessary to support teachers’ and learners’ capacity development for AI use [6]. In response to UNESCO’s call to action, various K-12 AI projects and activities have been initiated worldwide (e.g., in the United States, United Kingdom, Finland, China, Hong Kong, Singapore, South Korea, India, and Australia). Governments and universities globally have begun to collaborate to support the introduction of AI in K-12 settings. The teams involved in these K-12 AI projects consist of policymakers, software developers, technological experts, educators, and frontline teachers. One representative project is the AI for K-12 Working Group (AI4K12), which proposed the “Five Big Ideas” as a framework to develop guidelines for teaching AI to K-12 learners. This framework includes five main components: perception, representation and reasoning, learning, natural interaction, and societal impact [18]. Various studies focusing on the design of AI curricula for the K-12 context have adopted the Five Big Ideas as their framework.
A number of studies have emerged in response to the increasing interest in AI education. Initial research on this topic focused on the incorporation of AI education into regular K-12 subjects (e.g., science, mathematics, and physics). Research in this stage explored the intersections of AI and other core K-12 subjects to facilitate integration into the classroom. With the increasing need for K-12 AI education, studies on what AI knowledge should be taught in K-12 and how have emerged within the context of specific AI-related courses. The notion of K-12 AI education has thus evolved over the years, which calls for a thorough review of relevant studies to guide future research.
An exploratory review by Zhou and colleagues provided evidence on how AI literacy guidelines have been applied in K-12 contexts [19]. In their review, K-12 AI education was defined as the integration of AI knowledge into core curricula, not as an independent course. Several design guidelines for creating an AI learning experience in K-12 were identified: student engagement, built-in scaffolding, teacher and parent involvement, and equity, diversity, and inclusion.
K-12 AI courses have evolved into teaching machine learning (ML), a subfield of AI, to students. Sanusi and Oyelere conducted a review to identify potential pedagogical frameworks suitable for learning ML in K-12 education [20]; they described various pedagogical tactics, such as problem-based learning, project-based learning, active learning, participatory learning, interactive learning, inquiry-based learning, personalized learning, and design-oriented learning. Their review provided a comprehensive overview of the pedagogical frameworks adopted in K-12 education for ML; however, a detailed analysis of how teachers employed these pedagogical frameworks in their teaching practice was absent.
Similarly, Marques et al. reviewed 30 papers in a systematic mapping study to explore how to teach ML to K-12 learners [17]. They identified 30 courses or programs in these papers and examined their characteristics. The authors observed three main features: (1) competencies can range from basic (knowing what ML is) to more complex (understanding specific ML techniques); (2) ML concepts are introduced by focusing on the most accessible processes; (3) instructional materials in addition to customized frameworks and tools are available for free. This review provided a basic understanding of current K-12 teaching practice related to ML; however, the courses or programs examined in that review focus exclusively on ML, which constitutes only one part of a comprehensive AI course. Moreover, that review focused specifically on the learning content (e.g., ML topics, concepts, processes, learning styles, application domains, frameworks, and data types) but neglected to provide a fine-grained description of how the instructional design is developed.
With the development of K-12 AI courses, new tools for teaching AI in schools emerge regularly. Therefore, with reference to the review of Marques et al. [17], Gresse von Wangenheim and colleagues further investigated the key role of visual tools in teaching ML by reviewing 16 papers [21]. Their findings indicated that these visual tools provided students with opportunities to work through a complete ML process, thus helping them to develop a more accurate understanding of ML concepts. The authors also highlighted the lack of collaborative learning during the development of such ML models and the dearth of performance-based assessment of the created ML models as key emerging pedagogical concerns in teaching AI knowledge with visual tools.
The aforementioned reviews explored the characteristics of teaching ML to K-12 learners; however, because ML is only one part of the AI field, these studies failed to provide a comprehensive picture of the AI technology landscape. The review conducted by Su et al. presented a delineation of educational approaches for teaching AI technologies at the K-12 level in the Asia-Pacific region [22]. In that review, K-12 AI education was defined as an independent curriculum providing comprehensive knowledge. By mapping current AI tools, AI activities, educational models or theories, and research outcomes, the review outlined the status of AI curricula developed for K-12 classrooms. The findings of this review indicated that these AI curricula had a positive influence on students’ learning outcomes, interest in AI courses, AI-related skills, and learning attitudes. Nevertheless, the studies covered in that review were conducted only in Asian regions. In response to increasing research attention on the K-12 AI education field, an updated review of studies on the characteristics of K-12 AI education worldwide is necessary.
Relevant review papers have elucidated the design and development of K-12 AI courses from diverse research foci, such as the AI knowledge to be taught, the instructional strategies to be used in the pedagogical process, and the multimedia tools to be adopted to support learning visualization. Nevertheless, because of the varied perspectives of these studies, an in-depth and full-scale understanding of the pedagogical characteristics of K-12 AI education remains elusive. Some of the limitations of the aforementioned reviews are as follows: (1) they included only a few relevant studies for review and analysis; (2) they lacked a strict search process to ensure the quality of the reviewed papers; (3) they lacked quantitative descriptions of the selected papers; (4) they treated AI education as integration into other subjects rather than as an independent course; (5) they regarded ML as the only relevant knowledge and neglected other parts of AI technology that should be introduced in K-12 classrooms; (6) they identified teaching approaches in K-12 AI courses but did not elaborate on how to organize and implement these approaches; (7) they verified the effectiveness of existing teaching approaches for AI knowledge but considered only certain world regions.
Considering the aforementioned limitations, the purpose of this study was twofold: (1) to systematically review high-quality empirical research focusing on K-12 AI courses or programs worldwide, and (2) to explore future research orientations in terms of the pedagogical design and implementation of K-12 AI curricula. This review provides K-12 instructional designers with insights on how to design, construct, and implement AI courses. Furthermore, the present results may inform researchers of the evolution of and improvements in the field of AI pedagogy. In this paper, we delineated effective approaches to investigating K-12 AI curricula worldwide and proposed suggestions for the inclusion of AI education in K-12.
To examine and synthesize the characteristics of AI education for K-12 students from multiple perspectives, three review questions guided our examination of the current status of relevant studies on teaching AI in K-12 settings. These three review questions (RQs) were refined using the following analysis questions (AQs):
RQ1: What is the status of research in teaching AI in K-12?
AQ1: What are the major research trends, regions, and scales?
AQ2: What are the major research methods and settings?
RQ2: What are the pedagogical characteristics of current AI teaching units?
AQ3: What is the scale (target audience, setting, duration) of the teaching unit?
AQ4: What content is selected for the teaching unit?
AQ5: What prior knowledge and skills are required for students to take the course?
AQ6: What are the pedagogical theories that guide the teaching unit? What are the pedagogical approaches and T&L activities in the teaching unit?
AQ7: What are the tools and materials used in the teaching unit?
RQ3: What are the evaluation methods and the outcome of the teaching units?
AQ8: How are students’ learning outcomes assessed?
AQ9: What are the learning outcomes of the teaching units?

2. Method

We conducted a systematic review of published empirical studies to identify the latest evidence on the pedagogical characteristics of AI education in the K-12 context. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [23] to collect, select, summarize, analyze, and interpret the empirical evidence related to the review questions.

2.1. Literature Search

The literature search was conducted in May 2022 using the Web of Science database. To address the review questions, we sought to ensure the relevance of the studies reviewed while also identifying as many potentially relevant papers as possible. We used three types of search terms in the title, abstract, and keywords: “AI” terms, “education” terms, and “school level” terms. The search terms used to extract relevant articles are detailed in Table 1. We only considered papers published between 2010 and 2022 and placed no restriction on the source of publications. Moreover, we examined other sources that might not have been indexed in the database at the time of the literature search, including the reference lists of previous reviews and the latest issues of relevant journals such as Computers and Education: Artificial Intelligence. Our literature search yielded 4676 publications after duplicates were removed.

2.2. Inclusion and Exclusion Criteria

Eligible papers must: (1) focus on AI education or AI T&L; that is, not the application of AI technology in education; (2) explicitly address teaching or learning strategies and plans, or provide detailed information on how to conduct the courses or projects related to AI T&L; (3) be conducted in the K-12 context; (4) be an empirical study that includes comprehensive descriptions of the research question, settings, participants, and results; (5) be published in a peer-reviewed journal or conference, written in English, and full-text accessible.
Criterion 1 was used to conduct the first screening of all articles identified in the database and other resources by reviewing the title, abstract, and keywords. Notably, most of the studies investigated the effects of using AI technology in different educational settings (n = 4106) and were thus excluded. Other excluded studies were those unrelated to AI, such as studies on the cognitive strategies applied in deep learning (n = 64) and papers focused on the introduction of AI technology or algorithms themselves (n = 393). After the exclusion of two papers without full-text accessibility, the preliminary searching and screening yielded 111 publications for further review.
Criteria 2–5 were then applied to assess the relevance and quality of these 111 publications. The full-text version of each article was retrieved and reviewed to determine whether the study contained descriptions of T&L strategies or plans, or detailed information on how to conduct AI courses or projects. In total, 28 studies were excluded following the application of criterion 2. Other papers were excluded because they were not in English (n = 3), were not empirical studies (n = 41), or targeted teachers rather than students (n = 6). After the exclusion of one paper for poor methodological quality, 32 papers (Appendix A) remained for analysis in this systematic literature review. Figure 1 illustrates the search and selection process.

2.3. Data Coding

To answer the review questions, we considered the theoretical grounding and terminology used in relation to AI T&L. First, we defined the term “teaching units” to represent the AI courses or projects in the included studies. Teaching units can take various forms; for example, a set of classes in a single course, workshops, a series of lessons, or even a complete curriculum. Regardless of the scale, any teaching unit should include instructional materials and designs for both teachers and students, with the aim of teaching certain content in a specific context [24]. The development of teaching units typically follows the guidelines of pedagogical design, which represents an iterative process of planning and selecting learning content, materials, and tools; designing instructional strategies; and considering the evaluation of teaching units [25].
To obtain a comprehensive description and understanding of those teaching units, we adopted the pedagogical design model of Gerlach and Ely [26] as our analytical framework (Figure 2). Thus, we considered the following 10 elements of pedagogical design: specification of content, specification of objectives, assessment of entering behaviors, determination of strategy, organization of groups, allocation of time, allocation of space, selection of resources, evaluation of performance, and analysis of feedback. This pedagogical design model was selected as a lens to enable us to unpack the constructs of AI teaching units. We selected this model for the following reasons: First, it has proven valuable since it was proposed and is regarded as being sufficiently comprehensive for examining the design and implementation of learning [27]. Second, although first proposed in 1971, the model remains relevant in diverse research settings, including the face-to-face context [28,29] and online T&L more recently [30]. The model is regarded as sufficiently broad and flexible for pedagogical design and evaluation.
For the purpose of the present study, the model was adapted for the context of K-12 AI education. Specifically, our review was guided by nine components. Notably, in our model, the determination of strategy, one of the original 10 components, was further elaborated into learning theory, pedagogical approach, and special T&L activities. Table 2 illustrates the differences between the adapted version and the original model.
On the basis of this framework, we developed a coding scheme containing three dimensions to analyze the reviewed papers (Table 3). The first dimension was general information on the papers, including the year of publication, the regions addressed in the study, and the name of any projects. The second dimension was research information, including research questions, participant information, sample size, type of research, and data source. The third dimension was pedagogical design, which was based on the analytical framework [26]. We considered the following 11 variables: aims and objectives, T&L setting, course duration, learning content, pedagogical approach, learning theories, prior knowledge prerequisites, special T&L activities, materials and tools, evaluation method, and learning outcomes.
To answer the research questions, we first conducted a descriptive analysis of the papers according to the elements in the coding list. Subsequently, we performed content analysis and thematic analysis on the reviewed papers to further illustrate how AI knowledge is taught and what effects AI teaching has. Open coding was employed to generate initial ideas regarding the features of the codes and to organize the data into meaningful groups [31]. Thereafter, focused coding was used to condense the long list of different codes, sort them into potential themes, and collate all the relevant coded data extracts within overarching themes. Finally, the themes were reviewed to verify that they formed a coherent pattern and were clearly defined for subsequent analysis. The coding process was predominantly recursive rather than linear, involving back-and-forth movement between phases: the original data were read, multiple codes were compared, and opportunities for refining codes were identified [32].

3. Results

3.1. Status of Research on Teaching AI in the K-12 Context

The growth in the number of relevant publications revealed an increasing research interest in teaching AI in K-12 (Figure 3). Notably, most of the publications until 2020 were conference papers. The journal articles (11 of 32) have been published mostly since 2021. Despite the small number of studies in this field, relevant research has been conducted in diverse countries (Figure 4); we identified 13 target countries or regions in Europe, Asia, and the US. This reflects a global trend in the development of AI education at the K-12 stage. In fact, some of these countries, namely the United States, Finland, and India, have enacted national policies to support the development of AI education.
Regarding the scale and type of the selected papers, 11 studies were small-scale research enrolling fewer than 20 participants. The remaining studies enrolled no more than 100 participants. Regarding research type (Figure 5), most of the studies (25 of 32) employed a qualitative or mixed-method approach; only four studies were purely quantitative. The quantitative parts of these studies principally focused on learners’ motivation and attitudes toward the K-12 AI courses, measured using Likert-scale surveys. Only three studies included students’ learning outcomes as part of the quantitative analysis, and one of these adopted students’ self-perceived competence as the assessment criterion for AI knowledge acquisition [33]; therefore, only two studies employed an objective quantitative assessment of students’ knowledge acquisition by using scored tests [34,35]. Most of the studies qualitatively assessed students’ learning outcomes through classroom observations, interviews, and artifact analysis.

3.2. Pedagogical Characteristics of Reported AI Teaching Units

3.2.1. Scale (Target Audience, Setting, and Duration) of the Teaching Unit

Target Audience

The identified teaching units cover phases from preschool to secondary school; most are situated at the upper primary and lower secondary school levels (Figure 6). Very few studies have simultaneously considered the primary and secondary school levels. For example, three studies [9,36,37] targeted students in both upper primary and lower secondary schools, whereas only one included students from lower primary to lower secondary school levels. Although AI education is traditionally considered to be suitable for older adolescents, numerous teaching units are available for the primary and even preschool levels. For example, in their two studies, Williams et al. trained young children aged 4–6 years on AI concepts related to knowledge-based systems, supervised ML, and generative AI through interactions with social robots [8,38]. Another study by Vartiainen and colleagues explored how young children (aged 3–9 years) can use Google’s Teachable Machine in non-school settings [39]. Exposing children to AI at these earlier educational stages thus appears to be feasible and beneficial.

Setting

Most of the reported teaching units involve in-person, extracurricular activities (Figure 7) conducted in lab settings, workshops, or after-school lessons, suggesting that AI education is still not typically included in regular classrooms and is often absent even from technology- and computing-related courses. However, three teaching units described in five of the included studies are applied in regular classrooms [40,41,42,43,44]. One teaching unit in Finland is implemented as part of regular curricular activities [40,41,42]; one teaching unit in Spain involves AI education in regular middle school classes [44]; and one teaching unit in Denmark critically emphasizes the social impact of AI and is taught in regular social studies classes [43]. Five studies involved AI education conducted wholly online or in a blended mode in response to the COVID-19 pandemic [34,36,45,46,47]. None of the studies employed asynchronous online teaching.

Duration

The duration of the teaching units varies considerably. The teaching units can involve short, focused activities (minimum lesson duration: <1 h), multiple extracurricular lessons (3–15 h in total), or long-term courses (maximum lesson duration: ~80 h). However, most are relatively short-term courses (Figure 8). Half of the teaching units (16 of 32) in the reviewed studies comprise only one session lasting no more than 3 h, suggesting that a short lesson may be sufficient for children to develop an understanding of AI concepts (e.g., [33,48,49]). The second most popular lesson duration is multiple sessions on different days, typically no more than 15 h or 7 days in total. The teaching units of 11 studies were taught in this mode. Only four studies designed a long-term curriculum spanning several months, with three of these studies being published in 2022.

3.2.2. Learning Content, Tools and Materials, and Prior Knowledge Prerequisites of the Teaching Units

The learning content of the teaching units can be categorized into four dimensions: (1) introduction to AI and its basic concepts; (2) experience and exploration of AI; (3) traditional ML; and (4) ethical and societal impacts of AI. Most teaching units include multiple or even all of these dimensions. All of the teaching units include an introduction to AI and its basic concepts; in the following text, examples of the other three dimensions in teaching units are provided. First, several teaching units focus only on experiencing and exploring AI (e.g., [8,38,50]), and their target audience is young children. The learners were instructed to experiment with AI-supported techniques such as Quick, Draw! by Google. Through free exploration, the learners developed a preliminary understanding of what AI is and what can be achieved using it. Experience and exploration are also frequently applied in other teaching units as a means of introducing students to new concepts.
Second, the content of most of the teaching units is centered on ML. Various types of ML are explored, including image recognition [33,42], emotion recognition [39], gesture recognition [49,51], object recognition [33], k-means clustering [48,52], supervised learning [37,38], neural networks [52], and natural language processing [53]. Most teaching units only cover the most accessible processes, such as data management and practicing model training and evaluation. Notably, ML-related concepts are only presented at the surface level. Most underlying ML processes are black-boxed. For example, the learners operated a predefined model included in well-developed programs or apps, without further exploring relevant ML processes. Only a few teaching units systematically introduce ML performance measures such as correctness tables, confidence graphs, or the algorithms within the ML process. Specifically, the teaching units of only four studies [43,52,53,54], all set in the secondary school context, covered the design or validation of ML algorithms.
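For readers unfamiliar with these performance measures, the following minimal Python sketch illustrates what a correctness table (confusion matrix) and per-class prediction confidence look like for a simple classifier. It uses scikit-learn and its bundled handwritten-digit dataset purely for illustration; it is not drawn from any of the reviewed teaching units.

```python
# Illustrative sketch of the ML performance measures mentioned above
# (confusion matrix / "correctness table" and prediction confidence).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = load_digits(return_X_y=True)  # small handwritten-digit dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)  # simple, easy-to-explain classifier
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
print("Correctness table (confusion matrix):")
print(confusion_matrix(y_test, predictions))  # rows: true class, columns: predicted class

# Per-class confidence for one test example (the basis of a "confidence graph")
print("Confidence for first test sample:", model.predict_proba(X_test[:1]))
```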
Teaching units with AI recognition-related content yielded some noteworthy observations, especially in terms of data sources. Data are critical for AI, especially for all kinds of ML. Two types of data sources were identified: system-generated or expert-provided data and student-generated data. For example, the mango market problem with image recognition in the teaching unit of [55] is a typical case of expert-provided data: students trained the computer to identify the sweetness of mangoes using pictures provided by the instructor. By contrast, the teaching unit of [45] is an example of using student-generated data, with students designing custom gestures for their toys and constructing gesture-recognition ML models to trigger their own sounds. These two studies suggested that using real-life and student-generated data such as motion capture, gestures, facial expressions, and voice recordings benefits students’ motivation and engagement. However, this type of data set also has more noise and unwanted features than data sets produced by domain experts, potentially degrading the accuracy of ML outcomes.
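To make the last point concrete, the short Python sketch below shows how label noise of the kind found in hastily collected, student-generated data can reduce test accuracy relative to a cleanly labeled data set. The data are synthetic and the 20% noise rate is invented for illustration; this is not an analysis taken from any reviewed study.

```python
# Illustrative sketch (synthetic data): noisy labels, as may occur in
# student-collected data sets, can degrade ML accuracy compared with clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

def train_and_score(train_labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean, "expert-provided" labels
print("Accuracy with clean labels:", train_and_score(y_train))

# Simulate noisy, "student-generated" labels by flipping 20% of them
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.2
noisy[flip] = 1 - noisy[flip]
print("Accuracy with 20% label noise:", train_and_score(noisy))
```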
Third, several teaching units also consider the ethical and societal impacts of AI. Typical examples are [34,56], in which student projects emphasize the ethical concerns and limitations of using AI. Moreover, all of the teaching units involving multiple sessions or long-term studies include AI ethics and societal implications as key learning goals for students.
Many tools have been reported to support AI learning, with Machine Learning for Kids [57], Google Teachable Machine [58], and TensorFlow [59] being commonly mentioned examples. In terms of programming environment, block-based programming environments including Scratch, App Inventor, and Snap! are the most frequently used platforms for constructing ML models. Two of the reviewed teaching units directly employ Python, a text-based programming language [35,53]. In addition to general tools that are well-designed for novices to experience AI and ML, some teaching units include researcher-developed materials and tools. Typical examples are SmileyCluster, a hands-on and collaborative learning environment by [48], and Zhorai, a conversational platform with an online web interface by [60]. Some tangible resources such as robots, programmable toys, and diverse types of real-life sensors have also been used to teach AI and ML [8,38].
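To illustrate the flavor of the text-based Python exercises mentioned above, the minimal sketch below trains a tiny image classifier with TensorFlow’s Keras API on the standard MNIST handwritten-digit dataset. The exercise is an invented illustration of this style of lesson, not material reproduced from the reviewed teaching units.

```python
# Minimal sketch of a text-based (Python/TensorFlow) ML exercise of the kind a
# secondary-level teaching unit might use; illustrative only.
import tensorflow as tf

# Load a small, well-known image dataset bundled with Keras (handwritten digits).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A tiny fully connected network: flatten the image, one hidden layer, ten outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1)  # one pass over the training data
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print("Test accuracy:", accuracy)
```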
A key concern in AI education is whether programming skills and advanced mathematical knowledge are required as a prerequisite for taking the course. For example, as mentioned, many teaching units only have a one-session design, and they recruit participants who either have already taken other information and communications technology courses or have received training on Scratch or Python. Long-term teaching units, such as [53,54], typically include instruction on programming skills; thus, participants are generally not required to have prior knowledge. Moreover, advanced mathematical knowledge is typically not required for most teaching units because the ML concepts are only presented at the surface level, with many of the underlying ML processes being black-boxed, especially for young learners. However, some teaching units, generally those aimed at secondary school students, such as [53], include attention to ML algorithms, which require more advanced mathematical knowledge. To conclude, whether prior knowledge is a prerequisite depends considerably on the duration and the learning content of the teaching units.

3.2.3. Learning Theory, Pedagogical Approach, and T&L Activities of the Teaching Units

The selected papers included comprehensive descriptions of the learning theories that were referenced, the pedagogical approaches that were employed, and the T&L activities that were implemented. However, different terms are commonly used to describe the same thing. Consequently, after the initial coding of the reviewed studies, we first identified four types of learning theories and nine types of pedagogical approaches. The four learning theories are behaviorism, cognitivism, (social) constructivism, and constructionism, and the nine pedagogical approaches are direct instruction, hands-on activity only, interactive learning, collaborative learning, inquiry-based learning, participatory learning, game-based learning, project-based learning, and design-oriented learning. The relationship between learning theories and pedagogical approaches is illustrated in Figure 9. In Table 4 and Table 5, the learning theories and pedagogical approaches, respectively, are briefly introduced.
Acknowledging that these learning theories and pedagogical approaches are not mutually exclusive, we present the frequency of the different pedagogical approaches employed in the teaching units in Figure 10. Grounded in behaviorist learning theory, direct instruction is used to varying extents in all the teaching units, especially in their initial parts, when foundational AI knowledge is presented. Lectures, videos, tutorials, and demonstrations are the most common T&L activities in this initial stage. Notably, the authors of two studies [49,53] specifically highlighted the importance of including direct instruction in AI education. Fifteen of the reviewed studies employed interactive learning or hands-on activity only as the main pedagogical approach, representing the second most common approach. The key feature of these approaches is that they help students gain hands-on experience and practice with technology, usually focusing on a given procedure or target. In this approach, students do not have much freedom to choose their learning sequence or subgoals.
Most of the studies also claimed to guide students through hands-on projects, referring to the learning theory of constructionism, in which the application level is emphasized and artifact construction is the learning objective. Artifact construction involves two types of problems or tasks: (1) well-defined problems with expected solutions and (2) ill-defined problems without a known solution. The second type of problem is designed to encourage students to create their own practical solutions, thus stimulating students’ higher-order thinking. In artifact construction, the typical approaches are project-based learning and design-oriented learning. Despite sharing common features with the artifact creation approach, the design-oriented approach, which emphasizes design elements (e.g., [40,41,49]), merits some discussion. This approach involves open-ended goals, with children being prompted to design their own projects according to their own interests and experience. In contrast to teacher-assigned problems or projects, such student-driven projects typically do not have absolute answers in terms of their designs. With consideration of each student’s knowledge level, teachers provide assistance as facilitators to help the students transform their designs into executable projects. In design-oriented learning, students have more autonomy than in typical project-based learning.
The third pedagogical approach listed in Table 5—interactive learning—concerns social interaction and social constructivism. A typical example of this type of approach is participatory learning, wherein the role of social interaction is emphasized. For example, in study [9], in which a role-playing game was developed, students did not simply learn by playing a game; instead, they played different roles in an AI system, namely the designer, the user, and the AI itself. Another example is observed in study [34]: students played the role of nodes in a neural network formed by students sitting in predefined rows, thereby representing the neural network layers. Thus, students learn by experiencing the different roles of AI. Other frequently used approaches include collaborative learning, which explicitly involves group work or paired work, game-based learning [64,65], and inquiry-based learning [35,66].
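To make the node role-play concrete, the brief Python sketch below shows what each student “node” effectively computes in such an activity: a weighted sum of its inputs followed by a simple activation function. The weights, bias values, and inputs are invented purely for illustration.

```python
# Illustration of what each student "node" computes when role-playing a neural
# network: a weighted sum of inputs followed by an activation function.
# The weights and inputs below are invented purely for illustration.
def node(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: pass on the value only if positive

# A tiny two-layer network: three input values, two hidden "students", one output "student".
inputs = [0.5, 0.2, 0.9]
hidden = [
    node(inputs, weights=[0.4, -0.6, 0.3], bias=0.1),
    node(inputs, weights=[-0.2, 0.8, 0.5], bias=0.0),
]
output = node(hidden, weights=[0.7, 0.6], bias=-0.3)
print("Hidden layer outputs:", hidden)
print("Network output:", output)
```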
We defined T&L activities as detailed and more specific tasks within a pedagogical approach. For example, with project-based learning, one common activity is hands-on practice in constructing AI artifacts. In addition to the commonly used activities that have been mentioned in Table 4 and Table 5 when describing learning theories and pedagogical approaches, some unique T&L activities merit additional discussion. First, in unplugged activities, as adopted in study [64], students learned data management and decision tree algorithms by practicing their computational thinking through group work prior to actual programming; this process involved decomposition, pattern recognition, abstraction, and algorithms, thus cultivating logical thinking and problem-solving skills. Another unplugged example is “Be the machine” [34], a team role-playing game that taught how ML works; in this game, each team member assumed a different role to manually train an ML model. Second, brainstorming involving pictures can be included when a design-oriented approach is used [40,41]. Here, children first visually represented their understanding of AI and what they were going to design. As a post-lesson evaluation, these drawings can be used to analyze children’s development in terms of conceptual understanding. Third, children can also explain the ML process to aid learning. For example, children were asked to explain the AI process to a friend or peer. In this process, the students benefited from both the act of explaining and peer teaching.
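As a bridge between such unplugged decision-tree work and the programming stage that can follow it, a teaching unit might have students train a small tree and read back its rules to compare with the tree they built by hand. The sketch below shows one way this could look; it uses scikit-learn with a toy “play outside?” data set invented for illustration, not an activity taken from the reviewed studies.

```python
# Sketch of the programming stage that could follow an unplugged decision-tree
# activity: train a small tree and print its human-readable rules.
# The toy "play outside?" data set is invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [temperature in Celsius, is_raining (0/1)]
X = [[30, 0], [25, 0], [18, 1], [10, 1], [22, 0], [5, 0], [15, 1], [28, 1]]
y = [1, 1, 0, 0, 1, 0, 0, 0]  # 1 = play outside, 0 = stay in

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Show the learned rules so students can compare them with their own hand-made tree.
print(export_text(tree, feature_names=["temperature", "is_raining"]))
print("Prediction for 20°C and no rain:", tree.predict([[20, 0]]))
```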

3.3. Assessments and Learning Outcomes of the Reported Teaching Units

We identified four types of assessments in the reviewed papers: self-reported evaluation, artifact examination, process observation, and knowledge acquisition quizzes (Figure 11). These assessments were used to evaluate two dimensions of learning outcomes in general: (1) students’ affective-related development, including motivation, engagement, and attitude; and (2) students’ AI knowledge acquisition. For the first dimension, self-reported evaluations and classroom observations were frequently used. Data were collected through Likert-scale surveys, interviews, lesson field notes, and audio or video recordings of the lessons. We noted the use of both quantitative descriptions or comparisons and qualitative content analysis for the analysis of students’ affective-related learning outcomes. However, in terms of more objective measurements of AI knowledge acquisition (the second dimension), only a few studies (5 of 32) developed a rubric or assessment sheet for evaluating students’ understanding of what they had learned. Four studies did not conduct any assessment of students’ learning. Most of the teaching units involve qualitative analysis of students’ learning outcomes through the artifacts created by students and the observation of their learning processes, using content analysis. In addition to artifact analysis and process observation, several teaching units (9 of 32) also employ self-reports to assess students’ perceived AI-related competence; in such assessments, students rate statements such as “I understand what machine learning is”. Notably, although 14 studies (9 + 5) included a quantitative assessment of students’ knowledge acquisition, only four of them analyzed and presented the results using a quantitative methodology, as mentioned in Section 3.1.
As for learning outcomes, most studies have reported a positive effect of the teaching units on AI-related knowledge acquisition to some extent. Several studies have particularly highlighted the positive effect of the teaching units on students’ affective-related perspectives of engagement [41,46,55,60] and interest [33,37]. In addition, two studies mentioned that students’ general abilities, such as higher-order thinking [55] and collaboration skills [43], were improved. Notably, there are also potential barriers to learning AI, such as students being easily distracted [67] or having relatively low motivation [44].

4. Discussion and Future Directions

The current review aimed to (1) map trends in empirical research on AI T&L; (2) characterize the constructs of different pedagogical designs; and (3) provide insights on both the design and implementation of K-12 AI education as well as future research directions in the field. In the following sections, we discuss the main findings of the reviewed studies, some exemplary pedagogical designs, and the limitations of the current review; we also provide suggestions for future research.

4.1. Main Findings

Research trends: Interest in the development of AI education at the K-12 stage has grown tremendously, with a corresponding increase in related empirical studies. Moreover, the increasing number of relevant articles in educational journals suggests that scholars are investigating K-12 AI education critically instead of merely reporting teaching projects. Although most studies have focused on the US, countries worldwide are increasingly serving as the target of related research. Some of these countries have also released national policies to support the development of K-12 AI education. Regarding research methodology, most of the reviewed papers involved purely qualitative or mixed methods, with the principal data sources being classroom observations, interviews, and artifact examination. Because of their rich details and contexts, qualitative data provide an overview of the diverse applications of AI education in K-12, but the conclusions typically apply only to a narrow range of circumstances [68]. Only a few studies have quantitatively assessed student learning outcomes, and most have used self-reported perceived competence as the assessment criterion. Future work should endeavor to not merely design a practical teaching project but to adopt a more research-oriented approach by also developing a quantitative assessment method to evaluate the effectiveness of the designed program.
Teaching unit settings: The teaching units reviewed in this paper targeted participants from preschool to secondary school. Most were situated at the upper primary and lower secondary school levels, with face-to-face or synchronous online extracurricular courses and workshops being predominant. None of the reviewed studies adopted an asynchronous online approach. We suggest that face-to-face activities, or at least real-time online communication, are preferable for teaching AI because adolescents tend to experience difficulty in remaining motivated in an unfamiliar or unsupervised online environment [69]. The length of the teaching units varied considerably, being as short as one hour for a single course or as long as six months or more for a long-term course. Half of the teaching units were taught in a single session lasting no more than three hours. The learning content generally focuses on one specific AI concept, mostly ML, and often targets only one ML method, such as supervised learning or image/sound/motion recognition. Some studies have argued that providing more lessons would not help much to improve students’ understanding of the basic concepts of AI [48]. However, with short-term courses, a key concern is whether learning only certain parts of AI, such as ML, is sufficient for equipping students with AI literacy, especially when compared with the knowledge that can be gained in long-term courses. Future works on AI T&L could (1) compare one-session lessons with courses consisting of numerous sessions in terms of students’ understanding of AI-related concepts and (2) examine whether teaching AI in an online environment is equivalent to doing so in face-to-face settings, a topic increasingly relevant given the continuing COVID-19 pandemic as of September 2022.
Learning content: The selection of learning content for reviewed teaching units is highly related to the cognitive development of the target audience and the prior knowledge required. In preschool, the teaching units focus on the awareness of AI through playful exploration activities [38,39]. In primary and middle school, the focus is on basic principles of AI approaches, conveyed through experimenting and problem-solving activities. In secondary school, the focus is on core AI concepts and the exploration of advanced ML topics through hands-on artifact construction [48,52]. Only a few teaching units support a cross-age design despite using a long-term curriculum. This finding suggests that the learning content of AI is highly age dependent. Moreover, the prerequisite prior knowledge for the teaching units is also highly related to the selection of learning content and impacted by lesson duration. Most short-term courses require students to have experience in programming languages such as Scratch or Python, whereas long-term teaching units have no such requirement because programming-related knowledge is provided in AI classes. Our review suggests that instructors should focus on not only cognitive development but also prerequisite prior knowledge when choosing AI content knowledge for K-12 learners.
One special consideration in AI education is black boxes in learning content. Here, cognitive development and prerequisite prior knowledge must also be taken into account when discussing how to uncover black boxes in AI learning. Black boxes are an unavoidable part of computing education [70]. A major concern is what part of the learning content to black box; that is, determining what students will be exposed to and what they will not learn when exploring specific computing systems [70]. For example, when teaching computational skills such as programming with block-based programming tools such as Scratch, which is very popular among young learners, the various built-in functions such as sensors and motion controls are black boxes to a certain extent. Students can use the functions without knowing how the computer realizes them. By contrast, traditional computer programs are, in general, transparent boxes: the flow of execution, changes in variable values, and all operations performed by the program exhibit a one-to-one correspondence with the code employed. With such programs, the user can trace, visualize, and inspect the program at any point in any execution step [71]. However, most ML models are not transparent. The weights and parameters of their algorithms (in the case of neural networks and regression models, for example) are not set manually but are trained by feeding large amounts of data to the models. Each input sample results in minor adjustments to these parameters; that is, users are not directly involved in this autonomous process. Relative to traditional programming, these weights and parameters can thus be treated as black boxes. With AI education in the K-12 context, black boxes are unavoidable, with only the extent of black box usage differing by education level. For example, in preschool or early primary school, children can explore ML programs and even train ML models with well-developed and predefined platforms (e.g., Machine Learning for Kids), but the actual mechanisms behind the training remain a black box; this information can be accessed at a later stage (e.g., secondary school). The more accessible the systems are for novice learners, the more actual ML mechanisms need to be hidden from users—and the less students learn about the underlying functioning of the mechanisms. Both project-based learning and design-oriented learning attempt to create a context for students to understand the internal structure of AI. However, such pedagogical approaches aiming to reveal black boxes might face two challenges. One challenge is that the effect of a mismatch between students’ cognitive capability and the degree of exposure to AI has not been examined; thus, the effectiveness of such pedagogical approaches merits further examination. Another consideration for uncovering black boxes is the prerequisite prior knowledge that students must have, such as basic programming skills. As more black boxes are uncovered, the requirement for programming skills increases. Block-based programming languages undoubtedly limit students’ insights into the principles and structure of AI systems because they simplify the programming process to hide the complexity of the underlying algorithms and codes [72]. Future work can further investigate to what extent these black boxes should be retained or explicitly explained to reveal the inner processes of AI for the K-12 audience.
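To illustrate the contrast drawn above, the minimal Python sketch below compares a transparent, hand-written rule with a trained model whose decision boundary is encoded in learned weights that the programmer never sets by hand. The tiny “hours studied vs. passed” data set and the threshold of four hours are invented for illustration and are not taken from any reviewed teaching unit.

```python
# Sketch contrasting a transparent, rule-based program with a trained ML model
# whose parameters are learned from data rather than written by the programmer.
# The tiny "hours studied vs. passed" data set is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Transparent program: every step corresponds one-to-one with the code.
def rule_based_pass(hours_studied):
    return hours_studied >= 4  # the threshold is visible and was chosen by a person

# Learned model: the "threshold" is implicit in weights fitted to data.
hours = [[1], [2], [3], [4], [5], [6], [7], [8]]
passed = [0, 0, 0, 1, 1, 1, 1, 1]
model = LogisticRegression(max_iter=1000).fit(hours, passed)

print("Rule-based prediction for 5 h:", rule_based_pass(5))
print("Learned prediction for 5 h:   ", model.predict([[5]])[0])
# The learned coefficients below were produced by the training procedure,
# not set manually -- this is the part that remains a "black box" to novices.
print("Learned weight and bias:", model.coef_, model.intercept_)
```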
Pedagogical design: Various pedagogical approaches have been applied in computing education research and practice [71]. Behaviorism, cognitivism, (social) constructivism, and constructionism learning theories have inspired numerous instructional designs; however, no one design has proven to be ideal for computing education yet [73]. AI education programs in K-12 rely on a broad range of pedagogical approaches owing to their interdisciplinary nature; in the reviewed papers, we observed diverse approaches, most notably direct instruction, interactive learning, collaborative learning, participatory learning, game-based learning, project-based learning, and design-oriented learning. Pedagogical approaches in K-12 AI education tend to be based on constructivism [74], because students are usually guided to construct their knowledge of AI based on their existing experience, thus further exploring the internal structure and principles of AI systems. As a result, although project-based learning is dominant in the introductory sections of many teaching units, the overarching pedagogical approach can be categorized as a hands-on approach with direct instruction. Nevertheless, some of the most recent studies have adopted a more authentic and natural pedagogical approach. With the widespread application of AI, especially ML, in media-related applications including videos, audio, and various sensors, abundant resources are available for immediate and real-world uses in K-12 education. Any complex situation or phenomenon has the potential to be used for learning AI approaches, as long as a large amount of data is readily available. As suggested by [75], for most real-world problems, accomplishing a given task by collecting data through ML is considerably simpler than accomplishing the same task with traditional programming. The use of real-world situations and problems is a key strength of AI T&L. For example, an increasing number of teaching units are employing inquiry-based learning, design-oriented learning, or experiential learning; these approaches not only emphasize hands-on experience but also encourage students to select and design their own learning content based on their real-world experience.
However, with the rapid development of educational technologies, especially in fields related to computer science, learning tools and materials have become crucial factors in the selection and design of pedagogical approaches. For example, the consideration of technological pedagogical knowledge (TPK), which refers to knowledge of how to adapt teaching to different technologies and tools, is emerging [76]. In this review, we observed three types of tools: tangible digital tools (e.g., robots and programmable cars), intangible digital tools (e.g., various programming environments and well-developed apps), and nondigital tools (e.g., textbooks). We noted rapid development in the preparation of AI teaching materials, but a unified curriculum standard on how to employ these tools to facilitate teaching AI knowledge remains absent. Future research can investigate how to optimize the affordances provided by different tools in K-12 AI pedagogical design.
Assessment and evaluation: Many of the teaching units we reviewed included some assessment methods; however, limited information was available on how the assessments were performed. Most of the evaluations were subjective, such as students' self-reported evaluations, qualitative observations of the lesson process, and examination of the artifacts created. Among these, artifact examination appears to be a valid method for evaluating the extent to which learners have mastered AI knowledge. However, in an AI or ML program, the notion of goodness differs somewhat from that in traditional, rule-based computing, where the main epistemological stance is correctness and verifiability. In many cases, an ML solution can at best be "probably approximately correct" [77]. Instead of a bivalent view of correctness (the program outputs are either correct or incorrect), judging an AI artifact is more complicated and involves reliability and efficiency; such judgment also relies on a contextual, relative, and pragmatic view of goodness, which hampers the accurate evaluation of students' learning outcomes based on their products. By contrast, assessing AI knowledge through quizzes, as is commonly done in other academic subjects, is a more objective means of measuring students' knowledge acquisition. Notably, only a few of the reviewed studies mentioned this approach, and none provided detailed information on the content or construction of their quizzes. This raises the question of what AI literacy is and how to evaluate it. Long and Magerko defined AI literacy as "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace" [3]. That is, learning AI involves more than knowledge acquisition, such as understanding the concept of AI; it should also involve other factors such as AI dispositions [78]. Moreover, contrary to our findings regarding the relationship between AI and programming, scholars argue that computational literacy, that is, understanding how to program, is not necessarily a prerequisite for learning AI [3]. Future work can further define the skills required to be considered AI literate, especially in the K-12 context, and develop quantitative and qualitative criteria for evaluating students' learning outcomes in terms of content knowledge.
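As a concrete illustration of this graded notion of goodness, the following minimal sketch (the spam-filter rule and held-out messages are invented for illustration and are not taken from any reviewed study) scores a student-built classifier by its accuracy on held-out examples rather than declaring it simply correct or incorrect:

```python
# Minimal sketch (invented data, not from the reviewed studies) of grading an ML
# artifact by degrees of goodness: the student's classifier is scored on held-out
# examples instead of being judged simply right or wrong.

def accuracy(predict, held_out):
    """Fraction of held-out (features, label) pairs the model gets right."""
    correct = sum(1 for features, label in held_out if predict(features) == label)
    return correct / len(held_out)

# Hypothetical student artifact: a rule-of-thumb classifier for "is this message spam?"
def student_model(message: str) -> bool:
    return "free" in message.lower() or "prize" in message.lower()

held_out_set = [
    ("Win a FREE prize now", True),
    ("Meeting moved to 3pm", False),
    ("Your free trial expires", True),
    ("Lunch tomorrow?", False),
    ("Prize draw results attached", True),
    ("Free pizza in the staff room", False),  # a misleading case the rule gets wrong
]

score = accuracy(student_model, held_out_set)
print(f"Held-out accuracy: {score:.0%}")  # ~83%: approximately correct, not simply right or wrong
```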

4.2. Selected Exemplary Designs

In this review, we identified numerous teaching units with innovative pedagogical designs in particular aspects. We also found some noteworthy K-12 AI education programs that have not yet been empirically investigated. In this section, we describe three exemplary designs, one for each of the major pedagogical design aspects: learning content, pedagogical approaches, and T&L activities. We hope that this discussion inspires similar AI education initiatives in the K-12 context.
Hierarchical content structure: Although most reviewed studies employed a short-term lesson design, longer-term and cross-age curricula are increasingly being developed in some countries. A representative example is the Five Big Ideas curriculum developed in the US, which covers preschool through secondary school. The learning content is structured hierarchically, with specific content knowledge placed and presented within a higher-order framework. Students at all levels receive instruction on the same topics (perception, representation and reasoning, machine learning, natural interaction, and societal impact), with only the depth and breadth of the content adapted to the various age groups. Another program with a hierarchical content structure is Chiu's five modules of increasing depth: awareness, knowledge, interaction, empowerment, and ethics [79]. These modules can be grouped into beginner, intermediate, and advanced levels, allowing flexibility in content selection and providing a path for developing students' AI techniques and skills. The two hierarchical frameworks offer potential criteria and structures for selecting content in such cross-level AI curricula. Students can obtain a comprehensive picture of AI even at the youngest stage, which benefits their constructive understanding of AI early in the learning process. Moreover, their ability to apply AI knowledge to solve problems, founded on this constructive understanding of AI technology, continues to develop as they advance to the next grade level. However, although such long-term curricula have been proposed by different organizations, few empirical studies have targeted them. More empirical work should thus investigate and evaluate the effectiveness of these curricula.
Design-oriented approach and real-life data sources: The design-oriented approach, which emphasizes the role of learners, is gaining increasing attention in AI education. The two projects of Vartiainen and colleagues illustrate the effect of using the design-oriented approach in AI education [40,41]. Instead of learning with a predesigned project, the learners, with the assistance of their instructors, codesign applications that have immediate use in the real world. Codesigning involves an iterative process of generating ideas, making external representations, and redescribing and refining them. Such an iterative process promotes a deeper understanding of content domains [80]. The results of the two studies by Vartiainen and colleagues also indicated that the codesigning process helps students develop their conceptual understanding of ML principles and workflows and of how ML can be applied in everyday practice.
The data sources employed in projects have also been shifting from expert-provided data to learner-generated data. Expert-provided data are typically supplied by instructors and can take the form of established data sets; learners can use these data for various operations but cannot modify them or create their own. By contrast, learner-generated data include various types of data captured in real-life situations through motion capture, gestures, facial expressions, and voice recording. Studies have highlighted the benefits of bodily interaction with ML systems [39,49,81]. Vartiainen and colleagues implemented a project with young children (ages 3–9) in which the children learned about ML by using their own body movements [39]. Using the human body and its movements, such as facial expressions, gestures, and poses, to train computers to recognize certain elements, and even to control the computers by defining the features of motions, is an exciting way to make computing education more attractive. Pedagogical entry points such as embodied learning are extremely popular, especially for young children.
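The following minimal sketch illustrates, under invented data, how learner-generated examples can drive recognition: the pose labels and the simplified "keypoint" vectors are hypothetical stand-ins for what a webcam-based tool captures, and a nearest-centroid rule plays the part of the trained model:

```python
# Minimal sketch of learner-generated data driving a classifier (the pose keypoints and
# labels below are invented; tools such as Teachable Machine hide equivalent steps
# behind a web interface).
from statistics import mean

def centroid(vectors):
    """Average each coordinate across a child's example poses."""
    return [mean(coords) for coords in zip(*vectors)]

def train(examples):
    """examples: {pose_label: [feature_vector, ...]} -> {pose_label: centroid}"""
    return {label: centroid(vectors) for label, vectors in examples.items()}

def classify(model, vector):
    """Predict the pose whose centroid is closest to the new sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], vector))

# Children record a few samples of each pose they invented (simplified 4-number
# "keypoint" vectors standing in for real webcam skeleton data).
poses = {
    "arms_up":   [[0.9, 0.9, 0.1, 0.1], [0.8, 0.95, 0.15, 0.1]],
    "arms_down": [[0.1, 0.1, 0.9, 0.9], [0.2, 0.15, 0.85, 0.9]],
}
model = train(poses)
print(classify(model, [0.85, 0.9, 0.2, 0.1]))  # expected: "arms_up"
```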
These initiatives underscore the importance of positioning children and adolescent learners as active agents in AI learning rather than as objects of instruction in typical classroom settings. However, most reports on ML programs discuss techniques, content knowledge, or curricula rather than the pedagogical principles and elements of these programs, and few report that their pedagogical design had any theoretical grounding. A clear guiding framework that would enable teaching units to effectively combine or use different theories and pedagogies appears to be absent. Future research can, first, examine the expected effectiveness of pedagogical approaches based on different learning theories; second, explore more combinations of pedagogical approaches specific to different learning content; and third, investigate the use of authentic and natural pedagogical designs.
Unplugged activities and role-playing games: As aforementioned, black boxes are inevitable in AI education, especially in the K-12 context. However, some of these black boxes can be explicitly uncovered. For instance, in an image recognition ML process, students can learn that the computer identifies images by encoding each pixel numerically, and, going one step further, that it groups the features of the pixels through a decision tree or neural network. Unplugged activities are one approach to revealing some of these black boxes. For example, Zhang et al. introduced an activity in which students played the role of a machine [34]. In this activity, students form teams and manually simulate the steps of training an ML model through a decision tree or neural network. Unplugged activities, combined with a role-playing game in which learners play the different parts of an AI system, such as the nodes in a neural network or the decision mechanism of a decision tree, can be used to simulate the learning process of AI, thus clearly illustrating to students how AI learns from data. Nevertheless, some black boxes are unavoidable: even with the most advanced engineering, determining exactly how a model transforms its training data into predictions may remain nearly impossible. Future work can investigate the advantages and disadvantages of unplugged activities for students' understanding of AI.
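The following minimal sketch shows the kind of pixel-level decision logic that such an unplugged activity can make visible; the 2 × 2 "images", the brightness feature, and the threshold are invented for illustration rather than taken from the reviewed activities:

```python
# Minimal sketch of the decision logic an unplugged activity can make visible (the 2x2
# "images", feature, and threshold are invented): each image is just numbers per pixel,
# and a tiny decision tree splits on a summary of them.

def brightness(image):
    """Feature extraction: average pixel value of a 2x2 grayscale image (0-255)."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def decision_tree(image):
    """A one-split tree that students could act out with cards and a threshold rule."""
    if brightness(image) > 127:   # the split a 'tree node' student checks
        return "light picture"
    return "dark picture"

sunny = [[250, 240],
         [230, 245]]
night = [[10, 20],
         [15, 5]]
print(decision_tree(sunny))  # light picture
print(decision_tree(night))  # dark picture
```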

4.3. Strengths and Limitations

In the current review, we systematically mapped current empirical studies involving AI T&L in K-12 contexts. Although AI education is an emerging subject area with numerous teaching projects ongoing worldwide, empirical research on these projects is still lacking. The current review provides not only insights into the design and implementation of AI education in K-12 but also guidance for future research on the development and evaluation of AI education in this context.
We also observed numerous resources and nonempirical reports related to teaching AI in K-12. We found 81 papers that included some description of strategies or plans for AI T&L. These papers can potentially contribute to the field by describing or proposing course designs [4,82], developing hardware or software related to AI education [83,84], investigating teachers' perspectives on AI education [85,86], and introducing the development of AI curricula in general without a detailed illustration of AI T&L [79,87]. Future studies could thus also reference these nonempirical reports when designing and implementing AI education.

5. Conclusions

This review aids in the mapping of current AI education research designs, AI pedagogical designs, and the outcomes and assessments of teaching AI on the basis of literature on K-12 AI courses or programs. In the reviewed papers, qualitative research approaches were predominant, followed by mixed-method data analysis. Three measurement instruments (interviews, classroom observations, and artifact materials) were most frequently adopted for data collection. The studies mostly used questionnaires to investigate learners’ motivation and attitudes; only a few studies objectively examined the students’ learning outcomes after learning AI. We answered the question of how AI is taught to K-12 students by analyzing nine core components of pedagogical design presented in the reviewed literature.
The AI teaching initiatives of the 32 studies focused on students from preschool to secondary school, with the largest number of projects targeting upper primary school and lower secondary school students. In the reviewed K-12 AI education studies, the number of participants was typically less than 100, with small-scale research being predominant. The teaching units were typically covered in short-term face-to-face courses. Future research can explore the design of a cross-age AI curriculum for students from childhood to adolescence with a longer duration and a larger sample size.
As aforementioned, the content of AI education can be divided into four dimensions; in the reviewed papers, a wide array of AI content knowledge was covered within these four dimensions. First, instruction on the basic concepts of AI is essential to provide students with a firm foundation in AI technology; second, ML is generally adopted as the main knowledge body for teaching how AI works; third, instruction on the ethical and societal impacts of AI can help students grasp the relationship between humans and technology; fourth, experience and exploration enable young children to play with AI-supported techniques while serving as an introduction for other teaching units. Novel teaching tools such as Machine Learning for Kids [57], Google Teachable Machine [58], and TensorFlow [59] are constantly being developed and are instrumental in teaching AI to children. Prior knowledge might not even be required if the teaching unit is long-term and includes instruction on programming skills. However, some of the mechanisms underlying AI technology tend to remain hidden, representing black boxes. Future research can examine the extent to which the inner mechanisms of AI need to be hidden.
Constructionism, an important learning theory underpinning various contemporary learner-centered pedagogies adopted in K-12 classrooms (e.g., project-based, problem-based, and inquiry-based learning) [88], can have a large influence on instructional practices in AI teaching and learning [89]. Notably, direct instruction was also combined with these constructivist teaching strategies to maximize learning effects. Pedagogical approaches based on social constructivism, such as participatory learning, also proved useful for helping students improve their understanding of AI. One instructional method, design-oriented learning, was noteworthy in the reviewed papers because it gives students greater freedom to design their own projects based on their interests and experience. Researchers applying the design-oriented approach in future K-12 AI courses should endeavor to collect evidence on its effectiveness. Moreover, learner-generated data can be used to maximize students' opportunities to design AI projects. With consideration of these matters, future researchers can establish a comprehensive guiding framework for K-12 AI teaching practice.
Our findings suggest that most AI teaching units have a positive effect on students in terms of their understanding of AI concepts, learning attitude, and interest in AI. However, only a few studies have developed rubrics to investigate students' AI knowledge acquisition. AI literacy is increasingly regarded as a core competency for preparing students to be AI designers rather than merely AI consumers in the future. Further research on this topic should shift from measuring only the affective dimension of students' AI learning to developing a multidimensional framework for assessing students' AI literacy by using quantitative and qualitative methods. We hope that this review contributes valuable knowledge to a rapidly developing and crucial field, from both practical and research-oriented perspectives.

Author Contributions

Conceptualization, methodology, formal analysis, data curation, writing—original draft preparation, M.Y.; writing—review and editing, M.S.-Y.J., M.Y. and Y.D.; validation, resources, supervision, M.S.-Y.J. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

| Authors and Year | Country/Region | Age Level | Lesson Duration | Pedagogical Approach | Assessment |
| --- | --- | --- | --- | --- | --- |
| Henry et al. (2021) [9] | Belgium | Middle school + primary school | 1 session; 2–4 h | Role-playing game: children alternate between the roles of developers, testers, and AI | Open questions: personal definition of AI |
| Van Brummelen et al. (2021) [47] | United Kingdom | Grade 6–12 (middle school) | 5 sessions; 13–15 h | Direct instruction | Questionnaire; students' artifacts |
| Vartiainen et al. (2021) [40] | Finland | Age 12–13 | 3 sessions; 8–9 h | Design-oriented pedagogy; collaborative learning | Products; process observation |
| Bilstrup et al. (2020) [56] | Denmark | Age 16–20 | 1 session; ~3 h | Design as a learning approach | Artifact analysis |
| Lin et al. (2020) [60] | USA | Age 8–10 | 1 session; ~2 h | Interactive learning | Five-item open question assessment; self-evaluation questionnaire |
| Norouzi et al. (2020) [53] | USA | Secondary school students | 1 month; ~80 h | Collaborative learning; instruction transitioned from objectivist (basic knowledge) strategies to constructivist strategies (project) | Questionnaire for knowledge acquisition; questionnaire for self-evaluation |
| Vartiainen et al. (2020) [41] | Finland | Age 12–13 | 3 sessions; 8–9 h | Design-oriented pedagogy emphasizing open-ended, real-life learning tasks | Products; process observation |
| Wan et al. (2020) [48] | USA | Age 15–17 | 1 session; ~3 h | Design space involving data visualization; hands-on exploration; collaborative learning | Questionnaire for knowledge acquisition |
| Toivonen et al. (2019) [42] | Finland | Age 12–13 | 3 sessions; 8–9 h | Meta-design approach: children as designers and creators in the evolving process of learning; project-based learning | Tests; group discussion; artifacts; interviews |
| Hitron et al. (2019) [49] | Israel | Age 10–13 | 1 session | Learning-by-design approach; experience; predefined structured support | Observation; interview |
| Mariescu-Istodor & Jormanainen (2019) [33] | Romania | Age 13–19 | 1 session; ~2 h | Collaborative learning | Questionnaire (motivation); self-assessment (perceived competence) |
| Estevez et al. (2019) [52] | Spain | Age 16–17 | 1 session; ~2 h | Direct instruction; hands-on practice; collaborative learning | N/A |
| Williams et al. (2019a) [38] | USA | Age 4–6 | 1 session; ~2 h (designed); 2–4 days in total | Interactive learning; collaborative learning | Perception of robots questionnaire; theory of mind assessment |
| Williams et al. (2019b) [8] | USA | Age 4–6 | 1 session; ~2 h (designed); 2–4 days in total | Interactive learning; collaborative learning | Multiple-choice questions for AI knowledge |
| Druga et al. (2019) [50] | USA, Germany, Denmark, Sweden | Age 7–9; Age 10–12 | 1 session; ~2 h | Interactive learning | AI perception questionnaire |
| Hitron et al. (2018) [51] | Israel | Age 10–12 | 1 session | Interactive learning | Artifact analysis |
| Sakulkueakulsuk et al. (2018) [55] | Thailand | Grade 7–9 (middle school) | 3 sessions; 9 h | Participatory learning; Four Ps of Creative Learning (Projects, Passion, Play, and Peers); PBL, GBL, CL | AI: product evaluation; other: self-report survey (learning experiences and the adoption of new learning and thinking processes) |
| Woodward et al. (2018) [66] | USA | Age 7–12 | 4 sessions; 6–8 h | Cooperative inquiry; codesign | N/A |
| Srikant & Aggarwal (2017) [37] | India | Age 10–15 | 1 session | Direct instruction; hands-on practice; cognitive-based task design | N/A |
| Burgsteiner et al. (2016) [54] | Europe | Grade 9–11 (secondary school) | 7 sessions; 14 h | Theoretical and hands-on components; group work | Self-assessment questionnaire |
| Vartiainen et al. (2020) [39] | Finland | Age 3–9 | ~1 h | Participatory learning | N/A |
| Druga & Ko (2021) [90] | USA | Age 7–12 | N/M | Project-based learning | Observation; questionnaire (for perception) |
| Tseng et al. (2021) [45] | USA & Japan | Age 8–14 | ~2 h | Direct instruction; project-based learning; design-oriented learning | Survey about knowledge of ML |
| Shamir (2021) [46] | Israel | Age 12 | 6-day course | Participatory learning; interactive learning | Artifact analysis; course questionnaire (multiple choice) |
| Zhang et al. (2022) [34] | N/M | Grade 7–9 (middle school) | >25 h | Interactive learning; collaborative learning; participatory learning | AI concept inventory (good example) |
| Hsu et al. (2022) [67] | N/M | Grade 7 (middle school) | 6-week curriculum | Experiential learning (interactive learning) vs. cycle of doing projects (direct instruction) | Course questionnaire (multiple choice) |
| Lee et al. (2021) [65] | N/M | Age 8–11 | N/M | Game-based learning; problem-based learning; collaborative learning | Pre/post assessment of AI concepts, ethics, life science |
| Kaspersen et al. (2021) [43] | Denmark | Age 17–20 | 6 interventions (sessions); ~10 h | Project-based learning; collaborative learning; participatory learning (social science) | Observation |
| Fernández-Martínez et al. (2021) [44] | Spain | Grade 8/Grade 10 | 2 sessions; 3–4 h | Individual work; direct instruction; interactive learning | Quiz with open and multiple-choice questions |
| Melsión et al. (2021) [36] | N/M | Age 10–14 | <30 min | Direct instruction; interactive learning | Questions evaluating understanding of ML and bias: multiple-choice, open-ended, Likert scale |
| Ng et al. (2022) [35] | Hong Kong | Primary school students | 7 sessions + self-created workshop | Digital story writing; inquiry-based learning: five phases (orientation, conceptualization, investigation, conclusion, and discussion) | Posttest about AI knowledge |
| Hsu et al. (2021) [64] | Taiwan | Grade 5 | 9 sessions (9 weeks) | Game-based learning; learning in making (robots); experiential learning | Learning effectiveness test |

References

  1. Jennings, C. Artificial Intelligence: Rise of the Lightspeed Learners; Rowman & Littlefield: Lanham, MD, USA, 2019. [Google Scholar]
  2. Trustworthy Artificial Intelligence (AI) in Education: Promises and Challenges. Available online: https://www.oecd.org/education/trustworthy-artificial-intelligence-ai-in-education-a6c90fa9-en.htm (accessed on 25 October 2022).
  3. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 21 April 2020; pp. 1–16. [Google Scholar]
  4. Ho, J.W.; Scadding, M.; Kong, S.C.; Andone, D.; Biswas, G.; Hoppe, H.U.; Hsu, T.C. Classroom Activities for Teaching Artificial Intelligence to Primary School Students. In Proceedings of the International Conference on Computational Thinking Education, Hong Kong, China, 12–15 June 2019; pp. 157–159. [Google Scholar]
  5. Kandlhofer, M.; Steinbauer, G.; Hirschmugl-Gaisch, S.; Huber, P. Artificial Intelligence and Computer Science in Education: From Kindergarten to University. In Proceedings of the IEEE Frontiers in Education Conference (FIE), Eire, PA, USA, 12–15 October 2016; pp. 1–9. [Google Scholar]
  6. Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000366994 (accessed on 25 October 2022).
  7. Su, J.; Yang, W. Artificial Intelligence in early childhood education: A scoping review. Comput. Educ. Artif. Intell. 2022, 3, 100049. [Google Scholar] [CrossRef]
  8. Williams, R.; Park, H.W.; Oh, L.; Breazeal, C. Popbots: Designing an Artificial Intelligence Curriculum for Early Childhood Education. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 9729–9736. [Google Scholar]
  9. Henry, J.; Hernalesteen, A.; Collard, A.-S. Teaching artificial intelligence to K-12 through a role-playing game questioning the Intelligence Concept. KI-Künstliche Intell. 2021, 35, 171–179. [Google Scholar] [CrossRef]
  10. Piaget, J. Intellectual evolution from adolescence to adulthood. Hum. Dev. 1972, 15, 1–12. [Google Scholar] [CrossRef]
  11. Vygotsky, L. Interaction between learning and development. Read. Dev. Child. 1978, 23, 34–41. [Google Scholar]
  12. Spector, J.M.; Anderson, T.M. Integrated and Holistic Perspectives on Learning, Instruction and Technology; Kluwer Academic Publishers: Boston, MA, USA, 2000. [Google Scholar]
  13. Bahar, M. Misconceptions in biology education and conceptual change strategies. Educ. Sci. Theory Pract. 2003, 3, 55–64. [Google Scholar]
  14. D’Silva, I. Active learning. J. Educ. Adm. Policy Stud. 2010, 6, 77–82. [Google Scholar]
  15. Makgato, M. Identifying constructivist methodologies and pedagogic content knowledge in the teaching and learning of Technology. Procedia-Soc. Behav. Sci. 2012, 47, 1398–1402. [Google Scholar] [CrossRef] [Green Version]
  16. Papert, S.; Solomon, C.; Soloway, E.; Spohrer, J.C. Twenty Things to do with a Computer. In Studying the Novice Programmer; Psychology Press: Hove, UK, 1971; pp. 3–28. [Google Scholar]
  17. Marques, L.S.; Gresse von Wangenheim, C.; Hauck, J.C. Teaching machine learning in school: A systematic mapping of the state of the art. Inform. Educ. 2020, 19, 283–321. [Google Scholar] [CrossRef]
  18. Touretzky, D.; Gardner-McCune, C.; Martin, F.; Seehorn, D. Envisioning AI for K-12: What Should Every Child Know About AI? In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 17 July 2019; pp. 9795–9799. [Google Scholar]
  19. Zhou, X.; Van Brummelen, J.; Lin, P. Designing AI learning experiences for K-12: Emerging works, future opportunities and a design framework. arXiv 2020, arXiv:2009.10228. [Google Scholar]
  20. Sanusi, I.T.; Oyelere, S.S. Pedagogies of Machine Learning in K-12 Context. In Proceedings of the 2020 IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden, 21–24 October 2020; pp. 1–8. [Google Scholar]
  21. Gresse von Wangenheim, C.; Hauck, J.C.; Pacheco, F.S.; Bertonceli Bueno, M.F. Visual tools for teaching machine learning in K-12: A ten-year systematic mapping. Educ. Inf. Technol. 2021, 26, 5733–5778. [Google Scholar] [CrossRef]
  22. Su, J.; Zhong, Y.; Ng, D.T. A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Comput. Educ. Artif. Intell. 2022, 3, 100065. [Google Scholar] [CrossRef]
  23. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The Prisma statement. PLoS Med. 2009, 151, 264–269. [Google Scholar] [CrossRef] [Green Version]
  24. Hill, H.C.; Rowan, B.; Ball, D.L. Effects of teachers’ mathematical knowledge for teaching on student achievement. Am. Educ. Res. J. 2005, 42, 371–406. [Google Scholar] [CrossRef] [Green Version]
  25. Branch, R.M. Instructional Design: The Addie Approach; Springer Science & Business Media: New York, NY, USA, 2009. [Google Scholar]
  26. Gerlach, V.S.; Ely, D.P. Teaching & Media: A Systematic Approach, 2nd ed.; Prentice-Hall Incorporated: Englewood Cliffs, NJ, USA, 1980. [Google Scholar]
  27. Susilana, R.; Hutagalung, F.; Sutisna, M.R. Students’ perceptions toward online learning in higher education in Indonesia during COVID-19 pandemic. Elem. Educ. Online 2020, 19, 9–19. [Google Scholar]
  28. Yusnita, I.; Maskur, R.; Suherman, S. Modifikasi model Pembelajaran Gerlach Dan Ely Melalui Integrasi Nilai-Nilai Keislaman Sebagai upaya meningkatkan Kemampuan Representasi matematis. Al-Jabar J. Pendidik. Mat. 2016, 7, 29–38. [Google Scholar] [CrossRef]
  29. Setiawati, R.; Netriwati, N.; Nasution, S.P. DESAIN model Pembelajaran Gerlach Dan Ely Yang Berciri nilai-nilai ke-islaman Untuk Meningkatkan Kemampuan Komunikasi Matematis. AKSIOMA J. Program Studi Pendidik. Mat. 2018, 7, 371. [Google Scholar] [CrossRef]
  30. Surur, A.M. Gerlach and Ely’s learning model: How to implement it to online learning for statistics course. Edumatika J. Ris. Pendidik. Mat. 2022, 4, 174–188. [Google Scholar] [CrossRef]
  31. Tuckett, A.G. Applying thematic analysis theory to practice: A researcher’s experience. Contemp. Nurse 2005, 19, 75–87. [Google Scholar] [CrossRef]
  32. Dai, Y.; Lu, S.; Liu, A. Student pathways to understanding the Global Virtual Teams: An Ethnographic Study. Interact. Learn. Environ. 2019, 27, 3–14. [Google Scholar] [CrossRef]
  33. Mariescu-Istodor, R.; Jormanainen, I. Machine Learning for High School Students. In Proceedings of the 19th Koli Calling International Conference on Computing Education Research, New York, NY, USA, 21–24 November 2019; pp. 1–9. [Google Scholar]
  34. Zhang, H.; Lee, I.; Ali, S.; DiPaola, D.; Cheng, Y.; Breazeal, C. Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory study. Int. J. Artif. Intell. Educ. 2022, 1, 1–35. [Google Scholar] [CrossRef]
  35. Ng, D.T.; Luo, W.; Chan, H.M.; Chu, S.K. Using digital story writing as a pedagogy to develop AI literacy among primary students. Comput. Educ. Artif. Intell. 2022, 3, 100054. [Google Scholar] [CrossRef]
  36. Melsión, G.I.; Torre, I.; Vidal, E.; Leite, I. Using explainability to help children understand gender bias in ai. In Proceedings of the Interaction Design and Children, Athens, Greece, 24–30 June 2021; pp. 87–99. [Google Scholar]
  37. Srikant, S.; Aggarwal, V. Introducing Data Science to School Kids. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education, Seattle, WA, USA, 8–11 March 2017; pp. 561–566. [Google Scholar]
  38. Williams, R.; Park, H.W.; Breazeal, C. A is for Artificial Intelligence. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–11. [Google Scholar]
  39. Vartiainen, H.; Tedre, M.; Valtonen, T. Learning machine learning with very young children: Who is teaching whom? Int. J. Child-Comput. Interact. 2020, 25, 100182. [Google Scholar] [CrossRef]
  40. Vartiainen, H.; Toivonen, T.; Jormanainen, I.; Kahila, J.; Tedre, M.; Valtonen, T. Machine learning for middle schoolers: Learning through data-driven design. Int. J. Child-Comput. Interact. 2021, 29, 100281. [Google Scholar] [CrossRef]
  41. Vartiainen, H.; Toivonen, T.; Jormanainen, I.; Kahila, J.; Tedre, M.; Valtonen, T. Machine Learning for Middle-Schoolers: Children as Designers of Machine-Learning Apps. In Proceedings of the 2020 IEEE Frontiers in Education Conference (FIE), Uppsala, Sweden, 21–24 October 2020; pp. 1–9. [Google Scholar]
  42. Toivonen, T.; Jormanainen, I.; Kahila, J.; Tedre, M.; Valtonen, T.; Vartiainen, H. Co-designing Machine Learning Apps in K-12 with Primary School Children. In Proceedings of the IEEE 20th International Conference on Advanced Learning Technologies (ICALT), Tartu, Estonia, 6–9 July 2020; pp. 308–310. [Google Scholar]
  43. Kaspersen, M.H.; Bilstrup, K.-E.K.; Van Mechelen, M.; Hjorth, A.; Bouvin, N.O.; Petersen, M.G. VotestratesML: A High School Learning Tool for Exploring Machine Learning and its Societal Implications. In Proceedings of the FabLearn Europe/MakeEd 2021-An International Conference on Computing, Design and Making in Education, St. Gallen, Switzerland, 2–3 June 2021; pp. 1–10. [Google Scholar]
  44. Fernández-Martínez, C.; Hernán-Losada, I.; Fernández, A. Early introduction of AI in Spanish Middle Schools. A motivational study. KI-Künstliche Intell. 2021, 35, 163–170. [Google Scholar] [CrossRef]
  45. Tseng, T.; Murai, Y.; Freed, N.; Gelosi, D.; Ta, T.D.; Kawahara, Y. PlushPal: Storytelling with Interactive Plush Toys and Machine Learning. In Proceedings of the Interaction Design and Children, Athens, Greece, 24–30 June 2021; pp. 236–245. [Google Scholar]
  46. Shamir, G.; Levin, I. Neural network construction practices in elementary school. KI-Künstliche Intell. 2021, 35, 181–189. [Google Scholar] [CrossRef]
  47. Van Brummelen, J.; Heng, T.; Tabunshchyk, V. Teaching Tech to Talk: K-12 Conversational Artificial Intelligence Literacy Curriculum and Development Tools. In Proceedings of the AAAI Conference on Artificial Intelligence 2021, Online, 2–9 February 2021; Volume 35, pp. 15655–15663. [Google Scholar]
  48. Wan, X.; Zhou, X.; Ye, Z.; Mortensen, C.K.; Bai, Z. Smileycluster: Supporting accessible machine learning in K-12 scientific discovery. In Proceedings of the Interaction Design and Children Conference, London, UK, 21–24 June 2020; pp. 23–35. [Google Scholar]
  49. Hitron, T.; Orlev, Y.; Wald, I.; Shamir, A.; Erel, H.; Zuckerman, O. Can Children Understand Machine Learning Concepts? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 2 May 2019; pp. 1–11. [Google Scholar]
  50. Druga, S.; Vu, S.T.; Likhith, E.; Qiu, T. Inclusive AI Literacy for Kids Around the World. In Proceedings of the FabLearn 2019, New York, NY, USA, 9–10 March 2019; pp. 104–111. [Google Scholar]
  51. Hitron, T.; Wald, I.; Erel, H.; Zuckerman, O. Introducing Children to Machine Learning Concepts through Hands-On Experience. In Proceedings of the 17th ACM Conference on Interaction Design and Children, Trondheim, Norway, 19–22 June 2018; pp. 563–568. [Google Scholar]
  52. Estevez, J.; Garate, G.; Grana, M. Gentle introduction to artificial intelligence for high-school students using scratch. IEEE Access 2019, 7, 179027–179036. [Google Scholar] [CrossRef]
  53. Norouzi, N.; Chaturvedi, S.; Rutledge, M. Lessons Learned from Teaching Machine Learning and Natural Language Processing to High School Students. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 3 April 2020; Volume 34, pp. 13397–13403. [Google Scholar]
  54. Burgsteiner, H.; Kandlhofer, M.; Steinbauer, G. Irobot: Teaching the Basics of Artificial Intelligence in High Schools. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AR, USA, 5 May 2016; Volume 30. [Google Scholar]
  55. Sakulkueakulsuk, B.; Witoon, S.; Ngarmkajornwiwat, P.; Pataranutaporn, P.; Surareungchai, W.; Pataranutaporn, P.; Subsoontorn, P. Kids making AI: Integrating Machine Learning, Gamification, and Social Context in STEM Education. In Proceedings of the 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering (TALE), Wollongong, Australia, 4–7 December 2018; pp. 1005–1010. [Google Scholar]
  56. Bilstrup, K.-E.K.; Kaspersen, M.H.; Petersen, M.G. Staging Reflections on Ethical Dilemmas in Machine Learning. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, Eindhoven, The Netherlands, 3 July 2020; pp. 1211–1222. [Google Scholar]
  57. Lane, D. Machine Learning for Kids: A Project-Based Introduction to Artificial Intelligence; No Starch Press: San Francisco, CA, USA, 2021. [Google Scholar]
  58. Carney, M.; Webster, B.; Alvarado, I.; Phillips, K.; Howell, N.; Griffith, J.; Jongejan, J.; Pitaru, A.; Chen, A. Teachable Machine: Approachable Web-Based Tool for Exploring Machine Learning Classification. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25 April 2020; pp. 1–8. [Google Scholar]
  59. Abadi, M. Tensorflow: Learning Functions at Scale. In Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming, Nara, Japan, 4 September 2016; p. 1. [Google Scholar]
  60. Lin, P.; Van Brummelen, J.; Lukin, G.; Williams, R.; Breazeal, C. Zhorai: Designing a Conversational Agent for Children to Explore Machine Learning Concepts. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 3 April 2020; Volume 34, pp. 13381–13388. [Google Scholar]
  61. Krapfl, J.E. Behaviorism and society. Behav. Anal. 2016, 39, 123–129. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Abramson, C. Problems of teaching the behaviorist perspective in the Cognitive Revolution. Behav. Sci. 2013, 3, 55–71. [Google Scholar] [CrossRef] [PubMed]
  63. Mandler, G. Origins of the cognitive (r)evolution. J. Hist. Behav. Sci. 2002, 38, 339–353. [Google Scholar] [CrossRef] [PubMed]
  64. Hsu, T.-C.; Abelson, H.; Lao, N.; Tseng, Y.-H.; Lin, Y.-T. Behavioral-pattern exploration and development of an instructional tool for young children to learn AI. Comput. Educ. Artif. Intell. 2021, 2, 100012. [Google Scholar] [CrossRef]
  65. Lee, S.; Mott, B.; Ottenbreit-Leftwich, A.; Scribner, A.; Taylor, S.; Park, K.; Rowe, J.; Glazewski, K.; Hmelo-Silver, C.E.; Lester, J. AI-Infused Collaborative Inquiry in Upper Elementary School: A Game-Based Learning Approach. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 18 May 2021; Volume 35, pp. 15591–15599. [Google Scholar]
  66. Woodward, J.; McFadden, Z.; Shiver, N.; Ben-hayon, A.; Yip, J.C.; Anthony, L. Using Co-design to Examine How Children Conceptualize Intelligent Interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar]
  67. Hsu, T.C.; Abelson, H.; Van Brummelen, J. The effects on secondary school students of applying experiential learning to the Conversational AI Learning Curriculum. Int. Rev. Res. Open Distrib. Learn. 2022, 23, 82–103. [Google Scholar] [CrossRef]
  68. Libarkin, J.C.; Kurdziel, J.P. Research methodologies in science education: The Qualitative-Quantitative Debate. J. Geosci. Educ. 2002, 50, 78–86. [Google Scholar] [CrossRef]
  69. Global Education Monitoring Report 2020: Inclusion and Education-All Means All; United Nations Educational, Scientific and Cultural Organization: Paris, France, 2021; p. S.l.
  70. Resnick, M.; Silverman, B. Some Reflections on Designing Construction Kits for Kids. In Proceeding of the 2005 Conference on Interaction Design and Children-IDC, Boulder, CO, USA, 8–10 June 2005; pp. 117–122. [Google Scholar]
  71. Sorva, J. Visual Program Simulation in Introductory Programming Education; Aalto Univ. School of Science: Espoo, Finland, 2012. [Google Scholar]
  72. Steinbauer, G.; Kandlhofer, M.; Chklovski, T.; Heintz, F.; Koenig, S. A differentiated discussion about AI education K-12. KI-Künstliche Intell. 2021, 35, 131–137. [Google Scholar] [CrossRef] [PubMed]
  73. Guzdial, M. Learner-Centered Computing Education for Computer Science Majors. In Synthesis Lectures on Human-Centered Informatics; Springer: New York, NY, USA, 2016; pp. 83–94. [Google Scholar]
  74. Papert, S. Mindstorms: Children, Computers, and Powerful Ideas; Harvester Press: Brighton, UK, 1980. [Google Scholar]
  75. Palesuvaran, P.; Karpathy, A. (Tesla): CVPR 2021 Workshop on Autonomous Vehicles. Available online: https://pharath.github.io/self%20driving/Karpathy-CVPR-2021/ (accessed on 26 October 2022).
  76. Koehler, M.J.; Mishra, P. What Is Technological Pedagogical Content Knowledge? Contemporary Issues in Technology and Teacher Education; Society for Information Technology & Teacher Education: Waynesville, NC USA, 2009; Volume 9. [Google Scholar]
  77. Valiant, L. Probably Approximately Correct Nature’s Algorithms for Learning and Prospering in a Complex World; Basic Books, A Member of the Perseus Books Group: New York, NY, USA, 2014. [Google Scholar]
  78. Jong, M.S.-Y.; Geng, J.; Chai, C.S.; Lin, P.-Y. Development and predictive validity of the Computational Thinking Disposition Questionnaire. Sustainability 2020, 12, 4459. [Google Scholar] [CrossRef]
  79. Chiu, T.K.; Meng, H.; Chai, C.-S.; King, I.; Wong, S.; Yam, Y. Creation and evaluation of a pretertiary Artificial Intelligence (AI) curriculum. IEEE Trans. Educ. 2022, 65, 30–39. [Google Scholar] [CrossRef]
  80. Lehrer, R.; Pritchard, C. Symbolizing Space into Being. Symbolizing, Modeling and Tool Use in Mathematics Education; Springer: New York, NY, USA, 2002; pp. 59–86. [Google Scholar]
  81. Druga, S. Growing Up with AI: Cognimates: From Coding to Teaching Machines. Doctoral Dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 2018. [Google Scholar]
  82. Evangelista, I.; Blesio, G.; Benatti, E. Why Are We Not Teaching Machine Learning at High School? A Proposal. In Proceedings of the 2018 World Engineering Education Forum-Global Engineering Deans Council (WEEF-GEDC), Albuquerque, NM, USA, 12 November 2018; pp. 1–6. [Google Scholar]
  83. Reyes, A.A.; Elkin, C.; Niyaz, Q.; Yang, X.; Paheding, S.; Devabhaktuni, V.K. A Preliminary Work on Visualization-Based Education Tool for High School Machine Learning Education. In Proceedings of the 2020 IEEE Integrated STEM Education Conference (ISEC), Online, 1 August 2020; pp. 1–5. [Google Scholar]
  84. Gong, X.; Wu, Y.; Ye, Z.; Liu, X. Artificial Intelligence Course Design: Istream-based Visual Cognitive Smart Vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26 June 2018; pp. 1731–1735. [Google Scholar]
  85. Chiu, T.K. A holistic approach to the design of Artificial Intelligence (AI) education for K-12 schools. TechTrends 2021, 65, 796–807. [Google Scholar] [CrossRef]
  86. Zhang, X.; Tlili, A.; Shubeck, K.; Hu, X.; Huang, R.; Zhu, L. Teachers’ adoption of an open and interactive e-book for teaching K-12 students artificial intelligence: A mixed methods inquiry. Smart Learn. Environ. 2021, 8, 34. [Google Scholar] [CrossRef]
  87. Sabuncuoglu, A. Designing One Year Curriculum to Teach Artificial Intelligence for Middle School. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, Trondheim, Norway, 15–19 June 2020; pp. 96–102. [Google Scholar]
  88. Jong, M.S.Y.; Shang, J.J.; Lee, F.L.; Lee, J.H.M.; Law, H.Y. Learning online: A comparative study of a game-based situated learning approach and a traditional web-based learning approach. In Lecture Notes in Computer Science: Technologies for e-Learning and Digital Entertainment; Pan, Z., Aylett, R., Diener, H., Jin, X., Gobel, S., Li, L., Eds.; Springer: New York, NY, USA, 2006; Volume 3942, pp. 541–551. [Google Scholar]
  89. Li, X.; Jiang, M.C.Y.; Jong, M.S.Y.; Zhang, X.; Chai, C.S. Understanding medical students’ perceptions of and behavioral intentions toward learning artificial intelligence. Int. J. Environ. Res. Public Health 2022, 17, 8733. [Google Scholar] [CrossRef] [PubMed]
  90. Druga, S.; Ko, A.J. How do children’s perceptions of machine intelligence change when training and coding smart programs? In Proceedings of the IDC ′21: Interaction Design and Children, Athens, Greece, 24–30 June 2021; pp. 49–61. [Google Scholar]
Figure 1. PRISMA flowchart of the study selection process.
Figure 2. Gerlach and Ely Design Model.
Figure 3. Publications and types by year.
Figure 4. Related studies in countries/regions.
Figure 5. Research types in the field of teaching AI to K-12 audience.
Figure 6. School levels of K-12 AI teaching units.
Figure 7. Lesson setting of the analyzed teaching units.
Figure 8. Lesson duration of the analyzed teaching units.
Figure 9. Relationship between learning theories and pedagogical approaches.
Figure 10. Pedagogical approaches adopted in K-12 AI teaching units.
Figure 11. Assessment methods of K-12 AI teaching units.
Table 1. Search terms.

| AI Terms | Education Terms | School Level Terms |
| --- | --- | --- |
| machine learning OR artificial intelligence OR deep learning OR neural network OR AI | teaching OR learning OR education OR curriculum OR curricula OR pedagog OR instruct | K-12 OR kid OR child OR primary OR elementary OR secondary OR middle OR high school OR pretertiary |
Table 2. The pedagogical design and analysis framework for the current study.

| Model Used in Current Study | Gerlach and Ely's Model |
| --- | --- |
| Learning theory; pedagogical approach; special T&L activity | Determination of strategy |
| Learning content | Specification of content |
| Scale: target audience | Organization of groups |
| Scale: course duration | Allocation of time |
| Scale: setting | Allocation of space |
| Teaching resources | Selection of resources |
| Prior knowledge prerequisites | Assessment of entering behaviors |
| Aims and objectives | Specification of objectives |
| Assessment and learning outcome | Evaluation of performance; analysis of feedback |
Table 3. Coding schema of the current study.

| Categories | Code List | Example |
| --- | --- | --- |
| General information | Author, title, year, country | |
| | Publication type | Conference/Journal |
| | Project/course name (if any) | AI4Future |
| Research information | RQs | |
| | Target audience | Secondary school |
| | Sample size | |
| | Type of research | Qualitative |
| | Data source | Survey |
| Pedagogical design | Aims and objectives | |
| | T&L setting | Classroom |
| | Project/course duration | One day (3 h) |
| | Learning content | |
| | Pedagogical approach | PBL |
| | Theories | Constructionism |
| | Prior knowledge prerequisites | Scratch |
| | Special T&L activities | Unplugged |
| | Materials and tools | Robot |
| | Evaluation method | Self-evaluation |
| | Learning outcome | |
Table 4. Brief description and implications of the main learning theories.

| Theory | Description | Implications |
| --- | --- | --- |
| Behaviorism | This theory focuses solely on observable behavior, with the sense that actions are shaped by environmental stimuli [61]. It discounts mental activities such as cognition and emotion, which are regarded as too subjective [62]. | Direct instruction is prioritized. Feedback is provided on answers and quizzes. |
| Cognitivism | This theory emphasizes the processing and storage of information in the human brain [63]. Popular guiding theories in education include information processing theory, cognitive load theory, and metacognitive learning theory. Relevant learning strategies include outcome prediction, research step planning, time management, decision-making, and alternate strategy use when a search fails. | Design of lessons and materials is based on communicative language teaching. The different mental processes of novice and expert problem-solving are discussed. |
| (Social) constructivism | This theory focuses on learners constructing their own understanding, including rules and mental models, of new knowledge or phenomena by activating and reflecting on their prior knowledge. | Active learning is prioritized. Learning is enhanced through social interaction. Authentic and real-world problems are employed. |
| Constructionism | This theory, based on constructivism, holds that learning is most effective when people actively construct tangible objects in the real world. | Project-based learning is prioritized. Students learn by doing (making). Artifacts are constructed. |
Table 5. Descriptions of the pedagogical approaches applied in AI education.

| Pedagogical Approach | Description in the Context of Teaching AI |
| --- | --- |
| Direct instruction | Teachers present the target knowledge through lectures, videos, and demonstrations. |
| Hands-on activity only | Students experience or explore tools and materials but are not involved in their construction. |
| Interactive learning | Students engage in part of the construction of the AI or ML process, but they cannot necessarily define their own projects or problems. |
| Collaborative learning | Students conduct group work or paired work. |
| Inquiry-based learning | Students set their own learning goals, ask their own questions, and attempt to solve problems. However, they do not necessarily construct artifacts or products. |
| Game-based learning | Students learn through educational games. |
| Participatory learning | Students interact with their peers and experiment with different roles. |
| Project-based learning | Students learn by participating in the development of a project, typically involving artifact construction with the objective of solving a real-life problem. |
| Design-oriented learning | Students focus on the design element, with open-ended problems; children design their own projects instead of being assigned problems or projects. |
| Experiential learning | Students experience, reflect, think, and act in the learning process. |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

