Promoting Students' Well-Being by Developing Their Readiness for the Artificial Intelligence Age

Abstract: This study developed and validated an instrument to measure students' readiness to learn about artificial intelligence (AI). The designed survey questionnaire was administered in a school district in Beijing after an AI course was developed and implemented. The collected data and analytical results provided insights regarding the self-reported perceptions of primary students' AI readiness and enabled the identification of factors that may influence this parameter. The results indicated that AI literacy was not directly predictive of AI readiness; its influence was mediated by the students' confidence and perception of AI relevance. The students' AI readiness was influenced neither by their anxiety regarding AI nor directly by their AI literacy. Male students reported higher confidence, relevance, and readiness for AI than female students did. The sentiments reflected in the students' open-ended responses indicated that the students were generally excited to learn about AI and viewed AI as a powerful and useful technology; these sentiments confirmed the quantitative findings. The validated survey can help teachers better understand and monitor students' learning, as well as reflect on the design of the AI curriculum and the associated teaching effectiveness.


Introduction
Today's students face massive challenges as the rapid development of technology creates new bodies of knowledge. Artificial intelligence (AI) is an emerging field that has transformed modern society [1,2]. It has begun to redefine the knowledge and skills required by students to live well and work productively [3]. As stated by the United Nations in its 2030 agenda, quality education is a fundamental method of providing individuals with equal learning opportunities to support their sustainable development [4]. AI is crucial for sustainable economic growth, and it is currently applied in fields such as green building, smart agriculture, medical diagnosis, and the modeling of disease spread, all of which are related to the sustainability of human well-being [5]. If the educational system is to prepare its students to meet the demands of an AI-infused future, it needs to integrate AI technology as a topic of the academic curriculum to equip students with essential knowledge and skills about AI. However, because AI is a new subject, sustaining students' interest in learning it requires building their basic knowledge about AI, establishing its relevance, building their confidence, and reducing their anxiety, all through careful curriculum design [6,7]. With such educational efforts, a talent pipeline can be built, creating a win-win solution for the sustainability of both individual students and society.
Enhancing students' readiness for an AI-infused future should be a goal of current and future educational programs [5,8]. As a new and growing technology, AI has been contextualized by rhetoric regarding its complexity and advancement, which might sound both encouraging and intimidating to primary students [9]. Ideally, AI education should demystify the technology, shorten the distance between technology and daily life, and help students gain basic understanding and skills related to AI. Studies have indicated that students' perceived AI-related readiness can considerably influence their learning behavior and choices for future study [10]. Despite the significance of AI education programs, very few instruments or assessment tools have been developed to measure, evaluate, and monitor K-12 students' degree of AI readiness after instruction. Limited research has examined the status quo or the factors influencing student readiness for AI.
This study aimed to address the aforementioned issues by developing and validating an instrument to measure students' readiness to learn about AI by using a research sample from multiple elementary schools. We designed a survey questionnaire comprising a set of factors selected to reflect students' psychological well-being with reference to AI technologies. In addition, open-ended survey items were formulated for students to express their thoughts. The survey was administered in a school district in Beijing after an AI course was developed and implemented. The collected data and analytical results provided insights into the self-reported perceptions of primary students' AI readiness and enabled the identification of factors that may influence this parameter. The qualitative analysis results of students' open-ended responses provided further evidence that complemented the quantitative analysis. This paper describes the theoretical foundation, implementation, and validation of the designed instrument and presents the research findings.

Literature Review
To identify appropriate factors for this study, several theoretical frameworks [11][12][13] were combined to form an overarching framework.

Student Readiness
One purpose of education is to prepare students to be ready for the future workplace. The concept of student readiness has been explored in various studies; however, these studies have defined the concept differently, with varying focuses on students' psychological conditions, skill sets, and socio-economic resources [10]. From the psychological perspective, student readiness refers to a student's sense of being equipped with the knowledge and skills required for future events and situations [14]. This notion of psychological readiness has been investigated in various educational contexts [15][16][17]. Studies on online learning have found that students' readiness is considerably influenced by their engagement with technology and their personal traits, such as self-direction and initiative; engagement and personality traits together determine students' success in online learning [18][19][20]. These studies have indicated that student readiness encompasses the self-concept or self-efficacy of students as well as their beliefs and self-judgment toward an event, which are developed according to their past experiences [17].
Achieving student readiness has long been a learning objective of technology-relevant education. In particular, school education in the knowledge economy of the 21st century is expected to provide students with appropriate knowledge, attitudes, and skills for professional life [21,22]. From this perspective, AI-related courses should not only provide students with content knowledge related to AI but also empower them and make them feel comfortable and confident to participate in their professional lives. However, AI education in K-12 schools is still at an early stage of development. Limited empirical evidence exists for a positive relationship between AI-related educational opportunities and student readiness for an AI-infused future [23,24]. Moreover, research on student readiness in other contexts, such as higher education or online education, has provided mixed results regarding the potential effect of gender differences on student readiness. For example, Young [25] found that male students exhibited greater confidence and readiness than female students did in using computers; however, Shivers-Blackwell and Charles [26] found that female students exhibited a significantly higher readiness for technology change than male students did. Current research indicates that gender has a significant effect in science, technology, engineering, and mathematics (STEM) education, and these effects tend to favor male students over female students [27][28][29]. Considering that AI is an emerging form of technology that may have a considerable influence on workplaces and education [30], the effect of gender differences on AI-related student readiness must be investigated.
To advance AI education and AI-related research, valid and reliable instruments or assessment tools must be developed for facilitating relevant investigations. Currently, limited knowledge is available on the feasibility of equipping young students with AI readiness and on variables that might influence the development of AI readiness. Without a valid instrument for examining the construct of student readiness, advancing a new avenue of theoretical investigation that incorporates the aforementioned variables into teaching and learning processes is difficult. Therefore, this study aimed to design a survey questionnaire to measure primary school students' perception of readiness for an AI-infused future.

Constructs for Measuring Student Readiness
The teaching and learning of AI should foster student readiness for an AI-infused future. Student readiness is not an independent factor but is interrelated with many other emotional and psychological factors [31]. To feel ready for an AI-infused future, students must have knowledge regarding AI and what an AI-infused future looks like. With such knowledge, they can understand the relevance and significance of AI technology and position themselves in the AI-infused future. Moreover, the acquisition of AI knowledge should enable students to form a foundation for obtaining confidence in AI learning and reducing their fears for a future full of uncertainty. Through the literature review conducted in this study, we identified four emotional and psychological factors that are theoretically associated with and are likely to affect student readiness: AI literacy, confidence, relevance, and anxiety.

AI Literacy
Ajzen [11] characterized people's behaviors according to the knowledge or beliefs they acquire through formal or informal channels and their interpretations of this knowledge, which form their control beliefs and attitude toward a certain action. Control beliefs are generally referred to as self-efficacy and are closely related to the notion of literacy in the field of technology education. The notion of literacy refers to a user's ability to access, analyze, and use information to achieve an intended purpose [32]. Multiple forms of literacy associated with technological education, such as information, digital, and media literacy, have emerged; sometimes, these terms are used without clear boundaries. Considering the increasing importance of AI, we refer to a student's ability to access and use AI-related knowledge and skills as AI literacy. The knowledge and skills associated with literacy are often included in the learning objectives of curriculum development. The intended knowledge also indicates the intended degree of student development after instruction and is based on the vision and scope of the curriculum, including what type of future society we are preparing the students for and what type of future citizens we want the students to grow into (see [33]). From this perspective, AI literacy can be considered the knowledge foundation that provides students with an efficacious perception of AI technology, which could predict student readiness. A study indicated that information literacy self-efficacy predicts nursing students' future adoption of evidence-based practice [34] (see Hypothesis 1). In addition, AI literacy should be positively related to students' confidence according to the theory of planned behavior [11] (see Hypothesis 2). As students gain more AI-related knowledge and skills, they are likely to become more confident about themselves with regard to the AI-infused future. Such a positive effect of confidence can also mediate the influence of AI literacy on students' readiness.

Confidence
Confidence refers to one's belief that they have the capabilities to successfully execute a desired behavior (i.e., a belief of "I can get this job done") [35]. Numerous scholars have identified confidence as an important factor affecting students' performance and readiness. Major motivation theories, such as self-determination theory [36] and the theory of planned behavior [11], have identified the significant effects of confidence on people's intention to act in a certain manner. Komarraju et al. [37] noted that the greater the confidence regarding one's ability, the more likely one is to perceive being prepared and ready for relevant learning or work opportunities (see Hypothesis 3). Enhancing the level of confidence, which influences the effective learning outcome, is an objective of instruction because confidence enables one to be continuously engaged in learning [12,23]. Differences in confidence levels must therefore be understood in terms of a student's position in learning a specific subject.

Relevance
In the context of learning, relevance refers to the association between the subject matter being learned and a student's internal needs, future goals, or living environments [12,38]. Thus, relevance is investigated by assessing personal intrinsic and instrumental connections with a phenomenon. Consequently, relevance can be viewed as an attitude toward an event [11]. In this study, relevance indicates how students perceive themselves in relation to AI technology and the AI-infused future. For students to form a perception of relevance, they must possess some knowledge regarding AI (see Hypothesis 4).
A sense of relevance can be developed through different methods during the instruction process [39][40][41][42]. AI-related teaching and learning often involve the use of concrete examples that are familiar to students or are from their daily lives. Teachers might also explain the present worth and future usefulness of AI technology. For example, they may explain how image recognition can be used to unlock mobile phones or detect cancer growth. The connection between AI knowledge and students' personal lives can help students to establish relevance, which makes the purpose of learning explicit. Relevance helps students to learn better [12] and thus serves to enhance one's sense of being ready (see Hypothesis 5). Once relevance is developed, students would be more motivated to learn about AI and better prepared to make informed decisions regarding their educational and career pathways in an AI-infused world.

Anxiety
Technology usually triggers both positive and negative affective responses among potential users, ranging from optimism about using the technology to discomfort with it [13]. These experiences may cause users to embrace or reject the technology. Anxiety refers to a conscious, fearful emotional state and is a negative feeling that AI can provoke. Technology-related anxiety can be defined as apprehension or fear about using technology [44]; it includes negative beliefs regarding technology, insecurity, nervousness, fear, intimidation, and hesitation [45]. Wang and Wang [43] validated a four-factor, 21-item AI anxiety scale for the general public. Negative emotions associated with technology can affect overall learning experiences with the technology and its future adoption. Anxiety, confusion, frustration, anger, and similar emotional states can affect not only the learning process but also the productivity, social relationships, and overall well-being of individuals [46] (see Hypothesis 6). Reducing student anxiety toward AI technology is therefore pedagogically sensible for better preparing students for an AI-infused future.
The aforementioned literature review indicates that AI literacy, confidence, relevance, and anxiety are related to student readiness. These factors were included in the survey conducted to measure student readiness and conceptions toward an AI-infused future. To further understand the relationship among the aforementioned five factors, we conducted structural equation modeling (SEM). The hypotheses for the SEM are as follows:

Hypothesis 1. AI literacy significantly and positively influences AI readiness.

Hypothesis 2. AI literacy significantly and positively influences students' confidence in learning AI.

Hypothesis 3. Confidence in learning AI significantly and positively influences AI readiness.

Hypothesis 4. AI literacy significantly and positively influences students' sense of the relevance of AI.

Hypothesis 5. Relevance of AI significantly and positively influences AI readiness.

Hypothesis 6. AI anxiety significantly and negatively influences AI readiness.
The research questions examined in this study are as follows:

Q1: Does the instrument designed to measure students' perception of readiness to learn AI provide an interpretable factor structure?
Q2: What are the structural relationships among students' perception of readiness to learn AI, confidence, anxiety, AI literacy, and the relevance of AI learning?
Q3: Do gender differences exist between male and female students' perceptions of readiness for AI learning?
Q4: What are the students' sentiments toward AI as reflected by the open-ended questions?

Background
AI education in the K-12 sector in China has been initiated and promoted through state policies since 2017, when China's State Council [47] published a policy document titled the New Generation Artificial Intelligence Development Plan, which encourages the K-12 sector to develop an AI-relevant curriculum and promote AI literacy among students. Another state plan on education modernization was released in 2019 to promote the integration of "smart technology" with K-20 education and support the corresponding teacher professional development. Despite these calls from the Chinese government, no central curriculum standard has been developed to guide the teaching and learning of AI in schools. Moreover, no AI-specific content knowledge is included in the current central curriculum. Thus, AI education is still in the experimental stage in China. Teachers and schools have adopted a grassroots approach to explore AI development. Moreover, some AI education programs for elementary and secondary schools have been designed and are currently being tested.

Participants
Against the aforementioned background, an AI education project was initiated by a school district in Beijing, China, in 2018. This project is led by a leading researcher in the field of computer science (CS), and 15 elementary schools within the aforementioned school district are participating in the project. In this project, 25 CS teachers have worked together to develop an AI curriculum, along with a set of textbooks and materials, and to implement the curriculum in their schools. From the participating schools, we used the purposive sampling method to target all 17 classes of 707 elementary students who were engaged in the AI course and were able to understand the survey. The invitations were distributed with the aid of the aforementioned CS teachers. The participants were fourth to sixth graders, and their average age was 9.95 years (SD = 1.08 years). A total of 57.2% of the participants were male students, and 42.8% were female students. The participants reported having spent an average of 5.91 h (SD = 5.62 h) on AI learning and projects.

Instruments
To investigate students' AI learning readiness and psychological well-being, a 4-point Likert scale was used as the instrument for data collection (1 = strongly disagree to 4 = strongly agree). The survey questionnaire consisted of two parts regarding students' self-reported basic information (i.e., grade, gender, age, and AI learning hours) and responses to the psychological variables (i.e., readiness, confidence, anxiety, AI literacy, and relevance). All the items of the five psychological variables were adopted from a previously validated survey or developed according to the situated research context. The following open-ended item was developed: "Can you tell us your view about AI?"
The AI readiness construct comprised six items for measuring students' views regarding current AI applications (e.g., "AI technology gives me more control over my daily life"). These items were adapted from Parasuraman's [48] Technology Readiness Index (the original scale has 10 items; α = 0.78).
The confidence in AI was assessed using a modified version of the confidence scale of the Attention, Relevance, Confidence, and Satisfaction (ARCS) model developed by Song and Keller (five items; α = 0.70) [49] to measure students' confidence when they were in a computer-assisted learning context (e.g., "I feel confident that I can learn basic concepts of AI in this AI class.").
AI anxiety was assessed using a modified version of the Motivated Strategies for Learning Questionnaire (five items; α = 0.90) of Pintrich et al. [50] to measure students' anxiety in AI learning (e.g., "When I consider the capabilities of AI, I think about how difficult my future will be.").
The AI literacy scale was self-constructed to measure students' basic understanding of AI knowledge and skills. The constructed scale has five items and was designed according to the schools' AI curriculum content (e.g., "I can use AI-assisted image search tools.").
The relevance of AI was assessed using a modified version of the relevance scale of the ARCS model developed by Song and Keller (the original scale has six items; α = 0.73) [49] to measure students' perceived connections with AI (five items were used; e.g., "The things I am learning in this AI class will be useful to me.").
The developed 26-item instrument was subjected to expert review to assess its content validity. Four scholars with over five research publications in the fields of curriculum evaluation and psychology were invited to review the draft questionnaire, identify potential sources of error, and provide critical suggestions to minimize them. The draft questionnaire was revised and improved according to the inputs from the expert reviewers. The revised questionnaire was then shared with 12 teachers from the AI education project mentioned in Section 3.2 through a focus group interview. The teachers read the survey items one by one, focusing on statement and language, to ensure that the items delivered their intended messages to elementary students in understandable language. After obtaining the teachers' feedback, the survey was further revised and finalized.

Data Collection and Analysis
The developed survey was administered through the Internet. At the conclusion of the AI courses, the teachers introduced the survey to their students. The teachers then invited the students to share their perspectives and experiences through the survey. In particular, the teachers stressed and explained the anonymity of the survey to eliminate any concerns of the students. In total, 549 students participated in the survey, and the response rate was 77.65%. There was no missing data among the 549 responses.
The 549 samples were randomly divided into two subsamples for exploratory factor analysis (EFA; n = 220; 57.3% male students) and confirmatory factor analysis (CFA; n = 329; 57.1% male students). We followed the relevant guidelines on sample size to ensure that we met the minimum requirements for both analyses [51,52]. The guidelines proposed by Gorsuch [51] for EFA suggest a ratio of five participants per measured variable and a sample size of no less than 100. The first subsample, comprising 220 student responses, was analyzed through EFA to assess the validity and reliability of the measurement. EFA was conducted using IBM Statistical Product and Service Solutions version 25. The other subsample, comprising 329 student responses, was used in CFA and SEM to test the proposed hypotheses. Regarding the sample size for CFA, Hair et al. [52] suggested a larger sample size after considering several influencing factors: the number of latent variables, the lowest number of indicators per latent variable, and the communalities. The appropriate sample size for this study should therefore be >300. The skewness (ranging from −2.232 to −0.357) and kurtosis (ranging from −1.590 to 4.649) of the items did not exceed the cutoffs of |3| and |8|, respectively [53], which indicated that the univariate normality was acceptable. The relationships among the factors were evaluated using various measures. Mardia's coefficient was computed to verify that the multivariate normality was acceptable [54]. The Mardia's coefficient value obtained in this study was 452.643, which is less than the recommended value of p(p + 2) = 22(24) = 528 (where p is the number of observed variables). Thus, the requirement of multivariate normality was satisfied. Next, multigroup invariance analyses were conducted to compare gender differences by using AMOS 20.0.
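The univariate and multivariate normality screening described above can be sketched in code. The snippet below uses synthetic data (not the study's responses) and applies the same cutoffs: |skewness| < 3 and |kurtosis| < 8 per item, and Mardia's multivariate kurtosis compared against p(p + 2).

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
# Synthetic stand-in for the 329-response CFA subsample with p = 22 items
n, p = 329, 22
data = rng.normal(loc=3.0, scale=0.6, size=(n, p))

# Univariate screening: |skewness| < 3 and |kurtosis| < 8 for every item
item_skew = skew(data, axis=0)
item_kurt = kurtosis(data, axis=0)  # excess (Fisher) kurtosis
univariate_ok = (np.abs(item_skew) < 3).all() and (np.abs(item_kurt) < 8).all()

# Mardia's multivariate kurtosis: the mean of squared Mahalanobis distances;
# values near or below p(p + 2) suggest acceptable multivariate normality
centered = data - data.mean(axis=0)
S_inv = np.linalg.inv(np.cov(data, rowvar=False, bias=True))
d2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)  # squared Mahalanobis distances
mardia_kurtosis = (d2 ** 2).mean()
threshold = p * (p + 2)  # 22 * 24 = 528 in the study

print(univariate_ok, round(mardia_kurtosis, 1), threshold)
```

With genuinely multivariate-normal data, Mardia's kurtosis lands near the p(p + 2) reference value, whereas heavily skewed Likert responses would push it away from it.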
Vandenberg and Lance [55] recommended stringent steps for examining the following types of invariance across groups: configural invariance, metric invariance, and scalar invariance. After the invariance tests, latent mean analysis was conducted to compare the means of the latent variables for the male and female students. The male student group served as the reference group, and the latent means of this group were constrained to 0. Thus, the latent means of the female student group represented the mean differences. Finally, the students' open-ended responses were coded using open coding [56][57][58]. Because most of the responses comprised only one sentence, each response was assigned one code (mostly an in vivo code; see Table 6). The qualitative responses were coded by two authors independently: one author coded all the data, and the other author coded 30% of the full data set. The inter-coder reliability was 0.92.
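The double-coding check described above can be illustrated with a simple percent-agreement function. The codes below are hypothetical stand-ins for the study's in vivo codes, not its actual data.

```python
def percent_agreement(codes_a, codes_b):
    """Share of responses to which both coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must label the same set of responses")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical codes from two coders on the same ten responses
coder_1 = ["useful", "fear", "useful", "convenient", "good",
           "useful", "fear", "good", "useful", "intend"]
coder_2 = ["useful", "fear", "useful", "convenient", "good",
           "useful", "mixed", "good", "useful", "intend"]
print(percent_agreement(coder_1, coder_2))  # 9 of 10 codes agree -> 0.9
```

Chance-corrected statistics such as Cohen's kappa are stricter alternatives when the code categories are few and unevenly distributed.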

Factor Analysis of the Survey
To validate the survey, EFA was performed with principal axis factoring and the direct oblimin rotation method to clarify the factor structure of the survey. The results indicated that the measures of sampling adequacy were acceptable: the Kaiser-Meyer-Olkin value was 0.908, Bartlett's test of sphericity was significant (χ² = 4260.094, df = 231, p < 0.001), and 71.98% of the total variance was explained.
As shown in Table 1, a total of 22 items (overall α = 0.90) were grouped into the following factors: AI readiness, AI anxiety, AI literacy, relevance of AI, and confidence in AI. The reliability (α) coefficients of the aforementioned five factors were 0.92, 0.94, 0.90, 0.91, and 0.89, respectively, which suggested that the aforementioned five factors were sufficiently reliable for assessing the students' AI learning. Four items were removed because of insufficient factor loadings or cross loadings.
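The reliability coefficients reported above are Cronbach's alpha values. A minimal sketch of the computation follows, using synthetic 4-point Likert responses rather than the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 4-point Likert responses for one five-item factor:
# a shared latent trait plus item-level noise, rounded and clipped to 1..4
rng = np.random.default_rng(1)
trait = rng.normal(size=200)
scores = np.clip(np.round(2.5 + trait[:, None] + rng.normal(scale=0.7, size=(200, 5))), 1, 4)
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Because the five synthetic items share a strong common trait, the resulting alpha is high, mirroring the 0.89-0.94 range reported for the five factors.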
The average variance extracted (AVE) and composite reliability (CR) were also computed. All the CR values exceeded the recommended threshold of 0.7. Pearson's correlations were computed for the first subsample to examine the relationships among AI readiness, confidence in AI, AI anxiety, AI literacy, and relevance of AI. Table 2 indicates that significant and positive relationships exist among AI readiness, confidence in AI, AI literacy, and relevance of AI (from r = 0.59 to r = 0.70, p < 0.001). However, AI anxiety was not significantly correlated with AI readiness, AI literacy, relevance of AI, or confidence in AI (from r = −0.01 to r = −0.07, p > 0.05). This finding indicates that enhancing students' confidence and AI literacy as well as promoting the relevance of AI may facilitate students' AI readiness. The diagonal elements of the matrix in Table 2 are the square roots of the AVE values. These elements are higher than the correlations in their corresponding rows and columns; thus, the questionnaire has satisfactory discriminant validity.
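The AVE, CR, and square-root-of-AVE discriminant validity checks described above can be sketched as follows. The standardized loadings and inter-factor correlations below are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of each item is 1 - loading^2."""
    loadings = np.asarray(loadings)
    errors = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    loadings = np.asarray(loadings)
    return (loadings ** 2).mean()

# Hypothetical standardized loadings for one five-item factor
lambdas = [0.82, 0.78, 0.85, 0.80, 0.76]
ave = average_variance_extracted(lambdas)
cr = composite_reliability(lambdas)

# Fornell-Larcker criterion: sqrt(AVE) of a factor should exceed its
# correlations with the other factors (hypothetical correlations here)
correlations_with_other_factors = [0.59, 0.64, 0.70]
discriminant_ok = np.sqrt(ave) > max(correlations_with_other_factors)
print(round(ave, 2), round(cr, 2), discriminant_ok)
```

With these loadings, AVE ≈ 0.64 and CR ≈ 0.90, and sqrt(AVE) ≈ 0.80 exceeds the largest inter-factor correlation, matching the kind of pattern reported in Table 2.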

SEM of the Students' AI Learning
After a five-factor structure was identified through EFA, we examined the proposed hypotheses by using AMOS. CFA was conducted to confirm the construct validity and the structure of students' AI learning. The model fitting results were as follows: χ²/df = 2.30, GFI = 0.89, TLI = 0.95, CFI = 0.96, and RMSEA = 0.063 [59].
The following fit indices were obtained through SEM: χ²/df = 2.45, GFI = 0.87, TLI = 0.94, CFI = 0.94, and RMSEA = 0.067 [59]. Path analysis was used to examine the relationships among the aforementioned variables. As displayed in Figure 1, the students' confidence in learning AI (b = 0.67, p < 0.001) and perception of the relevance of AI (b = 0.70, p < 0.001) could be inferred from their AI literacy. However, AI readiness could not be inferred from AI literacy directly (b = 0.02, p > 0.05). The students' AI readiness could be inferred from their AI confidence (b = 0.46, p < 0.001) and from their perception of AI relevance (b = 0.37, p < 0.001). However, the students' AI readiness could not be inferred from their anxiety (b = −0.06, p > 0.05).
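Because AI literacy influenced readiness only through confidence and relevance, its indirect effects can be computed as products of the standardized path coefficients reported above. This is a point-estimate sketch only; formal mediation tests (e.g., bootstrapped confidence intervals) would be needed to establish significance.

```python
# Indirect (mediated) effects of AI literacy on AI readiness, using the
# standardized path coefficients reported in Figure 1
path = {
    ("literacy", "confidence"): 0.67,
    ("literacy", "relevance"): 0.70,
    ("confidence", "readiness"): 0.46,
    ("relevance", "readiness"): 0.37,
}

# Product-of-coefficients estimates for each mediated pathway
via_confidence = path[("literacy", "confidence")] * path[("confidence", "readiness")]
via_relevance = path[("literacy", "relevance")] * path[("relevance", "readiness")]
total_indirect = via_confidence + via_relevance

print(round(via_confidence, 3), round(via_relevance, 3), round(total_indirect, 3))
```

The total indirect effect (≈0.57) dwarfs the non-significant direct path (b = 0.02), which is consistent with the full mediation described in the text.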
The results obtained for the proposed hypotheses are presented in Table 3. The results indicated that the students' perception of AI anxiety and AI literacy did not influence their AI readiness. Building students' AI literacy enhances their AI confidence and perception of AI relevance, which in turn enhances their AI readiness.

Gender Differences
Multigroup SEM was conducted to assess the configural invariance by analyzing the male and female groups without constraining equality across the groups. As presented in Table 4, the results exhibited a suitable goodness of fit (χ² = 876.605, df = 398, χ²/df = 2.20, CFI = 0.931, and RMSEA = 0.051) in the configural invariance test, which indicated that the structural patterns were similar across the two groups. Thus, this model served as the baseline for comparison in the metric invariance test. Next, a metric invariance test was conducted by constraining the factor loadings to be equal across the aforementioned groups. A good model fit (χ² = 908.730, df = 415, χ²/df = 2.19, CFI = 0.929, and RMSEA = 0.051) was obtained in this test. Finally, a scalar invariance test was conducted by constraining the intercepts across the groups to be invariant. A good model fit (χ² = 954.292, df = 437, χ²/df = 2.18, CFI = 0.926, and RMSEA = 0.051) was obtained in this test as well. Because the chi-square value is sensitive to the sample size, Cheung and Rensvold (2002) recommended using a CFI change smaller than 0.01 as the criterion for evaluating the invariance hypotheses.
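The ΔCFI criterion applied above can be expressed as a small check over the reported fit indices.

```python
# Evaluating measurement invariance with the ΔCFI < 0.01 criterion
# (Cheung & Rensvold, 2002), using the fit indices reported in Table 4
cfi = {"configural": 0.931, "metric": 0.929, "scalar": 0.926}

def invariance_holds(cfi_less_constrained, cfi_more_constrained, cutoff=0.01):
    """Invariance is supported when adding constraints lowers CFI by less than the cutoff."""
    return (cfi_less_constrained - cfi_more_constrained) < cutoff

metric_ok = invariance_holds(cfi["configural"], cfi["metric"])  # delta-CFI = 0.002
scalar_ok = invariance_holds(cfi["metric"], cfi["scalar"])      # delta-CFI = 0.003
print(metric_ok, scalar_ok)
```

Both CFI drops are well below 0.01, so metric and scalar invariance hold and the latent mean comparison across gender groups that follows is justified.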


Latent Mean Analysis
The latent mean of the male group was set to 0 because this group served as the reference group for comparisons with the female group. Under this constraint, the latent mean model exhibited a good fit (χ² = 931.099, df = 432, χ²/df = 2.155, CFI = 0.926, and RMSEA = 0.051). Table 5 presents the results for the four latent variables. Significant gender differences were found for the relevance of AI, AI literacy, and confidence in AI, with the male students reporting higher mean values than the female students on all three factors. However, no significant mean difference was found across the gender groups for AI anxiety.

Analyses of Students' Open-Ended Responses
All the students' open-ended responses were coded, as presented in Table 6, by using in vivo codes (i.e., using the students' words) as far as possible. The comments were further categorized as positive, neutral, or negative to clearly indicate the students' broad sentiment regarding AI. Representative codes from Table 6 include the following:

Category         Code                               n   %     Example response
Better           AI is getting better               9   1.6%  "I think AI will be better in the future."
Intend           Intend to learn more               20  3.6%  "AI is very interesting and I will continue to learn AI."
Social benefits  Use AI to promote social benefits  9   1.6%  "Serve others better with artificial intelligence"
Convenience      AI is convenient                   53  9.7%  "Good for us is convenience and simplicity"
Difficulty       Difficulty in understanding        6   1.1%  "I think artificial intelligence is good, but hard to learn"
Fear             Fear of AI                         20  3.6%  "Artificial intelligence will cause many people to lose their jobs and even human extinction, which would be very bad"

Although a quarter of the students did not answer the open-ended question, the majority of the responses were positive. In general, the students believed that AI is useful (20.4%), convenient (9.7%), good (10.9%), interesting (6.2%), and likable (5.6%). Some students perceived AI as an advanced technology (4.6%) that is getting better and stronger each day (1.7%). Moreover, certain students intended to learn more about AI (3.6%) or to use it to help others (social benefits; 1.6%). In total, more than 60% of the students were positive about learning AI. The mixed-response group (3.7%) appreciated the positive aspects of AI but had fears and worries about AI replacing jobs and taking control of the world; the fear group (3.8%) shared these concerns. Only around 1% of the students felt that AI was difficult to learn. Overall, these findings correspond well with the quantitative data: the mean scores for confidence, readiness, AI literacy, and relevance were all above 2.5, the mid-point of the four-point Likert scale, whereas the mean score for anxiety was below the mid-point, indicating that the students were not anxious (see Table 1). Nonetheless, it may be necessary to address the fear among some students that machines may take over the world; this sentiment is likely a popular misconception propagated by Hollywood movies rather than by the science of AI.
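The sentiment aggregation described above can be illustrated with a short sketch. The per-code counts come from the Table 6 rows reproduced in this section; the mapping of codes to sentiment labels is our illustrative assumption based on the positive/neutral/negative categorization described in the text.

```python
from collections import defaultdict

# (code, count, sentiment) triples for the Table 6 rows quoted above.
# The sentiment labels are illustrative assumptions, not from Table 6 itself.
coded_responses = [
    ("AI is getting better",              9,  "positive"),
    ("Intend to learn more",              20, "positive"),
    ("Use AI to promote social benefits", 9,  "positive"),
    ("AI is convenient",                  53, "positive"),
    ("Difficulty in understanding",       6,  "negative"),
    ("Fear of AI",                        20, "negative"),
]

def tally_sentiment(rows):
    """Aggregate per-code counts into overall sentiment counts."""
    totals = defaultdict(int)
    for _code, count, sentiment in rows:
        totals[sentiment] += count
    return dict(totals)

print(tally_sentiment(coded_responses))
# -> {'positive': 91, 'negative': 26}
```

Even on this partial set of codes, the positive responses outnumber the negative ones by more than three to one, consistent with the overall finding that most students were positive about learning AI.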

Discussion and Conclusions
The well-being of any individual depends considerably on whether they can learn to adapt to the technological changes that are reshaping the socio-economic landscape [28,30]. In this context, many educators have voiced the importance of fostering students' readiness for an AI-infused future [5][6][7]. To contribute to efforts in promoting AI literacy, this study formulated and validated a survey that examines elementary school students' AI readiness and its associated factors. In addition, SEM was used to determine statistical predictive relationships, and an open-ended item was analyzed to corroborate the quantitative findings. The research findings are discussed below.
Statistical analyses suggested that the survey instrument designed for assessing student readiness for AI was valid and reliable. The survey was determined to have a five-factor structure with divergent validity; that is, the five factors were correlated with one another but not so highly as to be indistinguishable. This research thus provides a valid survey for measuring student readiness for AI, as well as AI relevance, confidence, anxiety, and literacy, in the context of an AI education program. The survey may be used by teachers and educators as an evaluation tool, for formative or summative purposes, to assess important psychological aspects of AI education programs. To prepare psychologically well-adapted learners for an AI-infused future, AI education programs must establish relevance, reduce learners' anxiety toward AI, and promote their confidence [10,41]. AI readiness can be achieved by fostering AI literacy. Currently, multiple forms of digital literacy are gaining importance [30], and the present study contributes to the literature by investigating the emerging factor of AI literacy. The validated survey can help teachers better understand and monitor students' learning as well as reflect on the design of the AI curriculum and the associated teaching effectiveness. The structural equation model indicated that four of the six proposed hypotheses were supported. AI literacy was not directly predictive of AI readiness; rather, its influence was mediated by the students' confidence and their perception of AI relevance. These findings agree with the theory of planned behavior [9], in which background factors such as knowledge (i.e., AI literacy) influence people's behaviors through control beliefs (i.e., confidence) and attitudes toward the behavior (i.e., relevance).
Thus, students' perception of AI readiness is influenced by their confidence that they can learn and use AI knowledge as well as by their assessment that AI knowledge is relevant to their lives. This finding corresponds to that reported by Amit-Aharon et al. [32], who indicated that literacy-based efficacy can contribute to the readiness to adopt new practices involving the new knowledge. The structural model highlights the need to design a high-quality AI education program that helps students understand the relevance of AI knowledge and enhances their confidence in learning AI. The importance of relevance and confidence has been repeatedly identified in Keller's model of motivational design of instruction [10]. In the studied AI education program, examples of how AI is employed in people's daily lives through smartphones and other computing devices, and of how AI can be used to solve problems such as health issues and traffic congestion, can promote students' perceptions of AI relevance and their confidence [31]. Surprisingly, the students' AI readiness was influenced neither by a reduction in their anxiety regarding AI nor directly by an enhancement in their AI literacy. This result was possibly obtained because the students are young and do not directly face the threat of job loss due to AI [41]. Nevertheless, the effects of the AI curriculum in building a strong and optimistic outlook should not be discounted. In the future, time-series studies can be performed to ascertain the effects of AI learning.
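The mediation pattern described above (AI literacy acting on readiness only through confidence and relevance) can be sketched numerically. The path coefficients below are hypothetical placeholders, not estimates from this study; the sketch only shows how an indirect effect is computed as the product of its component paths.

```python
def indirect_effect(a: float, b: float) -> float:
    """Indirect effect of X on Y through a mediator M:
    the product of the X->M path (a) and the M->Y path (b)."""
    return a * b

# Hypothetical standardized paths (NOT estimates from this study):
lit_to_conf, conf_to_ready = 0.5, 0.4  # literacy -> confidence -> readiness
lit_to_rel, rel_to_ready = 0.6, 0.3    # literacy -> relevance  -> readiness
direct = 0.0  # the direct literacy -> readiness path was non-significant

total = (direct
         + indirect_effect(lit_to_conf, conf_to_ready)
         + indirect_effect(lit_to_rel, rel_to_ready))
print(round(total, 2))  # -> 0.38
```

Under these placeholder values, AI literacy still contributes substantially to readiness (total effect 0.38), but entirely through the two mediators, which is the structure the model above describes.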
Gender differences [23,24,26] are frequently noted in technology- and engineering-related fields of study. The present study did not find significant gender differences in the students' AI anxiety or AI literacy, which indicates that students of both genders were not developing negative views regarding AI and were equipped with a similar literacy base for the AI education program. Nonetheless, the male students reported higher confidence, relevance, and readiness for AI than the female students did; thus, gender differences emerged even at this early stage of AI education. This finding may be related to the traditional cultural outlook that male individuals are more suited than female individuals to engineering subjects. In particular, in Asian societies, where patriarchal values and social norms keep gender inequalities alive, the stereotype threat exists in STEM education [60,61]. Traditionally, engineering and technology have been male-dominated fields, and some people believe that men are mathematically superior and better suited to engineering jobs than women are [25,27]. Influenced by such an implicit bias, female students may be less confident than male students in their abilities, even when they possess equal AI literacy. They might also face difficulties in relating to the male-dominated field owing to the limited number of female role models [62][63][64]. Because of this gender bias and reduced self-efficacy, female students might perceive themselves as less prepared, or even not ready, for an AI-infused future. Therefore, enhancing student readiness for AI technology should not be limited to the school curriculum and classroom teaching. Society as a whole is obligated to forge a positive culture and send encouraging messages to female students to address gender equity issues in AI education. Ertl et al. [25] recommended using role models to reduce the effects of stereotyping on female students.
The sentiments among the students, as reflected by the open-ended responses, indicated that the students were generally excited to learn about AI and viewed AI as a powerful and useful technology. Few students were fearful and anxious regarding AI. The student sentiments confirmed the quantitative findings. Overall, the questionnaire survey indicated that with an appropriate curriculum design, young students can be encouraged to learn about AI, which can help prepare them for an AI-infused world.
This study has some limitations. First, at least 10-20 participants per measured variable are generally recommended to optimize the statistical outputs of SEM. Although our sample sizes met the minimum requirement stated in the Method section, both samples were near the lower boundary of this guideline; future research should therefore involve larger samples. Second, the participating students were relatively young and may not have understood the full implications of AI technology. It is suggested that the study be repeated with older students.