1. Introduction
Today’s students face significant challenges as the rapid development of technology creates new bodies of knowledge. Artificial intelligence (AI) is an emerging field that has transformed modern society [1,2] and has begun to redefine the knowledge and skills students require to live well and work productively [3]. As stated in the United Nations 2030 Agenda, quality education is fundamental to providing individuals with equal learning opportunities that support their sustainable development [4]. AI is crucial for sustainable economic growth and is currently applied in fields related to the sustainability of human well-being, such as green building, smart agriculture, medical diagnosis, and the modeling of disease spread [5]. If the educational system is to prepare students to meet the demands of an AI-infused future, it must integrate AI as a topic of the academic curriculum to equip students with essential AI knowledge and skills. However, because AI is a new subject, sustaining students’ interest in learning it requires careful curriculum design that builds their basic knowledge of AI, establishes its relevance, fosters their confidence, and reduces their anxiety [6,7]. Through such educational efforts, a talent pipeline can be built, creating a win-win solution for the sustainability of both individual students and society.
Enhancing students’ readiness for an AI-infused future should be a goal of current and future educational programs [5,8]. As a new and growing technology, AI has been surrounded by rhetoric regarding its complexity and sophistication, which may sound both encouraging and intimidating to primary students [9]. Ideally, AI education should demystify the technology, shorten the distance between technology and daily life, and help students gain basic understanding of and skills related to AI. Studies have indicated that students’ perceived AI-related readiness can considerably influence their learning behavior and choices for future study [10]. Despite the significance of AI education programs, very few instruments or assessment tools have been developed to measure, evaluate, and monitor K-12 students’ readiness for AI after instruction. Limited research has examined the status quo of, or the factors influencing, student readiness for AI.
This study aimed to address the aforementioned issues by developing and validating an instrument that measures students’ readiness to learn about AI, using a research sample drawn from multiple elementary schools. We designed a survey questionnaire comprising a set of factors selected to reflect students’ psychological well-being with reference to AI technologies. In addition, open-ended survey items were formulated for students to express their thoughts. The survey was administered in a school district in Beijing after an AI course had been developed and implemented. The collected data and analytical results provided insights into primary students’ self-reported perceptions of AI readiness and enabled the identification of factors that may influence this readiness. The qualitative analysis of students’ open-ended responses provided further evidence complementing the quantitative analysis. This paper describes the theoretical foundation, implementation, and validation of the designed instrument and presents the research findings.
3. Method
3.1. Background
AI education in the K-12 sector has been promoted through state policies in China since 2017. China’s State Council [47] published a policy document titled New Generation Artificial Intelligence Development Plan, which encourages the K-12 sector to develop AI-relevant curricula and promote AI literacy among students. Another state plan on education modernization, released in 2019, promotes the integration of “smart technology” with K-20 education and supports the corresponding teacher professional development. Despite these calls from the Chinese government, no central curriculum standard has been developed to guide the teaching and learning of AI in schools, and no AI-specific content knowledge is included in the current central curriculum. Thus, AI education is still at an experimental stage in China: teachers and schools have adopted a grassroots approach to explore AI curriculum development, and some AI education programs for elementary and secondary schools have been designed and are currently being tested.
3.2. Participants
Against this background, an AI education project was initiated by a school district in Beijing, China, in 2018. The project is led by a leading researcher in computer science (CS), and 15 elementary schools within the district participate in it. Within the project, 25 CS teachers have worked together to develop an AI curriculum, along with a set of textbooks and materials, and to implement the curriculum in their schools. From the participating schools, we used purposive sampling to target all 17 classes, comprising 707 elementary students engaged in the AI course, as students of this age are able to understand the survey. Invitations were distributed with the aid of the participating CS teachers. The participants were fourth to sixth graders with an average age of 9.95 years (SD = 1.08 years); 57.2% were male students and 42.8% were female students. The participants reported having spent an average of 5.91 h (SD = 5.62 h) on AI learning and projects.
3.3. Instruments
To investigate students’ AI learning readiness and psychological well-being, a 4-point Likert scale was used as the instrument for data collection (1 = strongly disagree to 4 = strongly agree). The survey questionnaire consisted of two parts: students’ self-reported basic information (i.e., grade, gender, age, and AI learning hours) and their responses to the psychological variables (i.e., readiness, confidence, anxiety, AI literacy, and relevance). All items for the five psychological variables were either adopted from previously validated surveys or developed according to the situated research context. The following open-ended item was also included: “Can you tell us your view about AI?”
The AI readiness construct comprised six items measuring students’ views regarding current AI applications (e.g., “AI technology gives me more control over my daily life”). These items were adapted from Parasuraman’s [48] Technology Readiness Index (the original scale has 10 items; α = 0.78).
Confidence in AI was assessed using a modified version of the confidence scale of the Attention, Relevance, Confidence, and Satisfaction (ARCS) model developed by Song and Keller (five items; α = 0.70) [49], which measures students’ confidence in a computer-assisted learning context (e.g., “I feel confident that I can learn basic concepts of AI in this AI class.”).
AI anxiety was assessed using a modified version of the Motivated Strategies for Learning Questionnaire of Pintrich et al. (five items; α = 0.90) [50] to measure students’ anxiety in AI learning (e.g., “When I consider the capabilities of AI, I think about how difficult my future will be.”).
The AI literacy scale was self-constructed to measure students’ basic understanding of AI knowledge and skills. The constructed scale has five items and was designed according to the schools’ AI curriculum content (e.g., “I can use AI-assisted image search tools.”).
The relevance of AI was assessed using a modified version of the relevance scale of the ARCS model developed by Song and Keller (six items in the original scale; α = 0.73) [49]; five items were used to measure students’ perceived connections between AI and their lives (e.g., “The things I am learning in this AI class will be useful to me.”).
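The internal-consistency estimates (α) cited for these scales can be reproduced with a short script. The following sketch (using a toy score matrix of our own, not the study’s data) shows how Cronbach’s alpha is computed for a multi-item construct scored on a Likert scale.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = scores.shape[1]                               # number of items
    item_variances = scores.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy check: four respondents answering three perfectly consistent items
perfect = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]], dtype=float)
alpha = cronbach_alpha(perfect)  # → 1.0 for perfectly consistent items
```

In practice, `scores` would be the response matrix for the five or six items of one construct; alpha values near those reported (0.70–0.90) indicate acceptable internal consistency.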
The developed 26-item instrument was subjected to expert review to assess its content validity. Four scholars, each with more than five research publications in the fields of curriculum evaluation and psychology, were invited to review the draft questionnaire, identify potential sources of error, and provide critical suggestions for minimizing them. The draft questionnaire was revised and improved according to the expert reviewers’ input. The revised questionnaire was then shared with 12 teachers from the AI education project described in Section 3.2 through a focus group interview. The teachers read the survey items one by one, focusing on statement and language, to ensure that the items conveyed their intended meaning to elementary students in understandable language. After the teachers’ feedback was obtained, the survey was further revised and finalized.
3.4. Data Collection and Analysis
The developed survey was administered online. At the conclusion of the AI courses, the teachers introduced the survey to their students and invited them to share their perspectives and experiences through it. In particular, the teachers stressed and explained the anonymity of the survey to alleviate any concerns the students might have. In total, 549 students participated in the survey, yielding a response rate of 77.65%. There were no missing data among the 549 responses.
The 549 responses were randomly divided into two subsamples for exploratory factor analysis (EFA; n = 220; 57.3% male students) and confirmatory factor analysis (CFA; n = 329; 57.1% male students). We followed the relevant sample-size guidelines to ensure that the minimum requirements were met in both analyses [51,52]. For EFA, Gorsuch [51] suggested a ratio of 5 participants per measured variable and a sample size of no less than 100. The first subsample of 220 student responses was analyzed through EFA, conducted in IBM SPSS Statistics version 25, to assess the validity and reliability of the measurement. The second subsample of 329 student responses was used in CFA and SEM to test the proposed hypotheses. Regarding the sample size for CFA, Hair et al. [52] suggested a larger sample, depending on several factors: the number of latent variables, the lowest number of indicators per latent variable, and the communalities. On this basis, an appropriate sample size for this study was >300.

The skewness (−2.232 to −0.357) and kurtosis (−1.590 to 4.649) of the items did not exceed the cutoffs of |3| and |8|, respectively [53], indicating acceptable univariate normality. Mardia’s coefficient was computed to check multivariate normality [54]; the obtained value of 452.643 was less than the recommended cutoff of p(p + 2) = 22(24) = 528, where p is the number of observed variables. Thus, the requirement of multivariate normality was satisfied.

Next, multigroup invariance analyses were conducted in AMOS 20.0 to compare gender differences. Vandenberg and Lance [55] recommended stringent steps for examining the following types of invariance across groups: configural, metric, and scalar invariance. After the invariance tests, latent mean analysis was conducted to compare the latent-variable means of the male and female students. The male student group served as the reference group, with its latent means constrained to 0, so the latent means of the female student group represented the mean differences.

Finally, the students’ open-ended responses were coded using open coding [56,57,58]. Because most responses comprised only one sentence, each response was assigned one code (mostly an in vivo code; see Table 6). The qualitative responses were coded by two authors independently: one author coded all the data, and the other coded 30% of the full data set. The inter-coder reliability was 0.92.
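As a rough sketch of the screening steps described above, the random subsample split and the univariate and multivariate normality checks can be expressed as follows. The data here are random placeholder responses, not the study’s data set; the |3| and |8| cutoffs and the p(p + 2) rule follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 4-point Likert responses (549 respondents x 22 observed variables);
# random placeholder data standing in for the study's response matrix.
responses = rng.integers(1, 5, size=(549, 22)).astype(float)

# Random split into the EFA (n = 220) and CFA (n = 329) subsamples
order = rng.permutation(len(responses))
efa_sample, cfa_sample = responses[order[:220]], responses[order[220:]]

def skewness(x: np.ndarray) -> float:
    """Univariate skewness of a 1-D array."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

def excess_kurtosis(x: np.ndarray) -> float:
    """Univariate excess kurtosis (0 for a normal distribution)."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

# Univariate normality screen: |skewness| < 3 and |kurtosis| < 8 per item
skews = np.array([skewness(col) for col in cfa_sample.T])
kurts = np.array([excess_kurtosis(col) for col in cfa_sample.T])
univariate_ok = bool((np.abs(skews) < 3).all() and (np.abs(kurts) < 8).all())

# Mardia-style cutoff for multivariate kurtosis: p(p + 2), p = no. of variables
p = cfa_sample.shape[1]
mardia_cutoff = p * (p + 2)  # 22 * 24 = 528
```

An observed Mardia coefficient below `mardia_cutoff` (as the study’s 452.643 < 528) is taken to indicate acceptable multivariate normality.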
5. Discussion and Conclusions
The well-being of any individual depends considerably on whether they can adapt to the technological changes reshaping the socio-economic landscape [28,30]. In this context, many educators have voiced the importance of fostering students’ readiness for an AI-infused future [5,6,7]. To contribute to efforts to promote AI literacy, this study formulated and validated a survey that examines elementary school students’ AI readiness and its associated factors. In addition, SEM was used to determine statistical predictive relationships, and an open-ended item was analyzed to corroborate the findings. The research findings are discussed in the following text.
Statistical analyses suggested that the survey instrument designed for assessing student readiness for AI was valid and reliable. The survey was determined to have a five-factor structure with discriminant (divergent) validity; that is, the five factors were correlated with one another but not excessively so. This research thus provides a valid survey for measuring student readiness for AI, as well as AI relevance, confidence, anxiety, and literacy, in the context of an AI education program. The survey may be used by teachers and educators as a formative or summative evaluation tool for assessing important psychological aspects of AI education programs. Such programs must establish relevance, reduce people’s anxiety toward AI, and promote learners’ confidence [10,41] to prepare psychologically well-adapted learners for an AI-infused future. AI readiness can be achieved by fostering AI literacy, and multiple forms of digital literacy are currently gaining importance [30]. In this context, the present study contributes to the literature by investigating the emerging factor of AI literacy. The validated survey can help teachers better understand and monitor students’ learning and reflect on the design of the AI curriculum and the associated teaching effectiveness.
The structural equation model indicated that four of the six proposed hypotheses were supported. AI literacy was not directly predictive of AI readiness; rather, its influence was mediated by the students’ confidence and their perception of AI relevance. These findings agree with the theory of planned behavior [9], in which background factors such as knowledge (here, AI literacy) influence people’s behaviors through control beliefs (confidence) and attitudes toward the behavior (relevance). Thus, students’ perception of AI readiness is shaped by their confidence that they can learn and use AI knowledge and by their assessment that AI knowledge is relevant to their lives. This finding corresponds to that of Amit-Aharon et al. [32], who indicated that literacy-based efficacy can contribute to the readiness to adopt new practices involving new knowledge. The structural model highlights the need to design high-quality AI education programs that help students understand the relevance of AI knowledge and enhance their confidence in AI learning. The importance of relevance and confidence has been repeatedly identified in Keller’s model of motivational design of instruction [10]. In the studied AI education program, examples of how AI is employed in daily life through smartphones and other computing devices, and of how AI can be used to solve problems such as health and traffic congestion, can promote students’ perceived AI relevance and confidence [31]. Surprisingly, the students’ AI readiness was not directly influenced by either their anxiety regarding AI or their AI literacy, possibly because the students are young and do not directly face the threat of job loss due to AI [41]. Nevertheless, the effects of the AI curriculum in building a strong and optimistic outlook should not be discounted. In the future, time-series studies can be performed to ascertain the effects of AI learning.
Gender differences are frequently noted in technology- and engineering-related fields of study [23,24,26]. The present study did not find significant gender differences in the students’ AI anxiety or AI literacy, indicating that neither gender was developing negative views of AI and that both entered the AI education program with a similar literacy base. Nonetheless, the male students reported higher confidence, relevance, and readiness for AI than the female students did; gender differences thus emerged at an early stage of AI education. This finding may be related to the traditional cultural outlook that males are more suited than females to engineering subjects. In particular, in Asian societies, where patriarchal values and social norms keep gender inequalities alive, the stereotype threat exists in STEM education [60,61]. Traditionally, engineering and technology have been male-dominated fields, and some people believe that men are mathematically superior and better suited to engineering jobs than women are [25,27]. Influenced by such implicit bias against women, female students may be less confident than male students in their abilities, even with equal AI literacy. They may also find it difficult to relate to a male-dominated field given the limited number of female role models [62,63,64]. Owing to this gender bias and its effect on self-efficacy, female students might perceive themselves as less prepared, or even not ready, for an AI-infused future. Therefore, enhancing student readiness for AI technology should not be limited to the school curriculum and classroom teaching; society as a whole is obligated to forge a positive culture and send encouraging messages to female students to address gender equity issues in AI education. Ertl et al. [25] recommended using role models to reduce the effects of stereotyping on female students.
The sentiments among the students, as reflected by the open-ended responses, indicated that the students were generally excited to learn about AI and viewed AI as a powerful and useful technology. Few students were fearful and anxious regarding AI. The student sentiments confirmed the quantitative findings. Overall, the questionnaire survey indicated that with an appropriate curriculum design, young students can be encouraged to learn about AI, which can help prepare them for an AI-infused world.
This study has some limitations. First, for each latent variable, there should be at least 10–20 participants to optimize the statistical outputs; although our sample sizes met the minimum requirements stated in the Method section, both subsamples were near the lower boundary. Future research should involve larger samples to address this issue. Second, the participating students were relatively young and may not understand the full implications of AI technology. It is suggested that the study be repeated with older students.