Article

Promoting Students’ Well-Being by Developing Their Readiness for the Artificial Intelligence Age

1. Department of Curriculum and Instruction, The Chinese University of Hong Kong, Hong Kong, China
2. Centre for Learning Sciences and Technologies, The Chinese University of Hong Kong, Hong Kong, China
3. Teacher Training and Development Centre of Chaoyang District, Beijing 100082, China
4. School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100081, China
* Authors to whom correspondence should be addressed.
Sustainability 2020, 12(16), 6597; https://doi.org/10.3390/su12166597
Submission received: 21 July 2020 / Revised: 8 August 2020 / Accepted: 12 August 2020 / Published: 14 August 2020

Abstract

This study developed and validated an instrument to measure students’ readiness to learn about artificial intelligence (AI). The designed survey questionnaire was administered in a school district in Beijing after an AI course was developed and implemented. The collected data and analytical results provided insights into primary students’ self-reported AI readiness and enabled the identification of factors that may influence it. The results indicated that AI literacy was not directly predictive of AI readiness; its influence was mediated by the students’ confidence and perception of AI relevance. Neither a reduction in AI-related anxiety nor an enhancement in AI literacy directly influenced the students’ AI readiness. Male students reported higher confidence, relevance, and readiness for AI than female students did. The sentiments reflected in the students’ open-ended responses indicated that they were generally excited to learn about AI and viewed it as a powerful and useful technology; these sentiments confirmed the quantitative findings. The validated survey can help teachers better understand and monitor students’ learning, as well as reflect on the design of the AI curriculum and the associated teaching effectiveness.

1. Introduction

Today’s students face considerable challenges because new bodies of knowledge have been created by the rapid development of technology. Artificial intelligence (AI) is an emerging field that has transformed modern society [1,2]. It has begun to redefine the knowledge and skills required by students to live well and work productively [3]. As stated by the United Nations in their 2030 agenda, quality education is a fundamental method of providing individuals with equal learning opportunities to support their sustainable development [4]. AI is crucial for sustainable economic growth and is currently applied in fields such as green building, smart agriculture, medical diagnosis, and the modeling of disease spread, all of which are related to the sustainability of human well-being [5]. If the educational system is to prepare its students to meet the demands of an AI-infused future, it needs to integrate AI technology as a topic of the academic curriculum to equip students with essential AI knowledge and skills. However, because AI is a new subject, sustaining students’ interest in learning it requires building their basic knowledge about AI, establishing its relevance, building their confidence, and reducing their anxiety, all through careful curriculum design [6,7]. With such educational efforts, a talent pipeline can be built, creating a win-win solution for the sustainable development of both individual students and society.
Enhancing students’ readiness for an AI-infused future should be a goal of current and future educational programs [5,8]. As a new and growing technology, AI has been contextualized by rhetoric regarding its complexity and advancement, which may sound both encouraging and intimidating to primary students [9]. Ideally, AI education should demystify the technology, shorten the distance between technology and daily life, and help students gain a basic understanding of and skills related to AI. Studies have indicated that students’ perceived AI-related readiness can considerably influence their learning behavior and choices for future study [10]. Despite the significance of AI education programs, very few instruments or assessment tools have been developed to measure, evaluate, and monitor K-12 students’ degree of readiness for AI after instruction. Limited research has examined the status quo of student readiness for AI or the factors influencing it.
This study aimed to address the aforementioned issues by developing and validating an instrument to measure students’ readiness to learn about AI by using a research sample from multiple elementary schools. We designed a survey questionnaire comprising a set of factors selected to reflect students’ psychological well-being with reference to AI technologies. In addition, open-ended survey items were formulated for students to express their thoughts. The survey was administered in a school district in Beijing after an AI course was developed and implemented. The collected data and analytical results provided insights into the self-reported perceptions of primary students’ AI readiness and enabled the identification of factors that may influence their readiness. The qualitative analysis of the students’ open-ended responses provided further evidence that complemented the quantitative analysis. This paper describes the theoretical foundation, implementation, and validation of the designed instrument and presents the research findings.

2. Literature Review

To identify appropriate factors for this study, several theoretical frameworks [11,12,13] were combined to form an overarching framework.

2.1. Student Readiness

One purpose of education is to prepare students to be ready for the future workplace. The concept of student readiness has been explored in various studies; however, these studies have defined the aforementioned concept differently, with varying focuses on students’ psychological conditions, skillsets, and socio-economic resources [10]. From the psychological perspective, student readiness refers to the student sense of being equipped with the knowledge and skills required for future events and situations [14]. This notion of psychological readiness has been investigated in various educational contexts [15,16,17]. Studies on online learning have found that students’ readiness is considerably influenced by their engagement with technology and by personal traits such as self-direction and initiative; engagement and personality traits together determine students’ online learning success [18,19,20]. The aforementioned studies have indicated that student readiness encompasses the self-concept or self-efficacy of students as well as their beliefs and self-judgment toward an event, which are developed according to their past experiences [17].
Achieving student readiness has long been a learning objective of technology-relevant education. In particular, school education based on the knowledge economy of the 21st century is expected to provide students with appropriate knowledge, attitude, and skills for professional life [21,22]. Considering the aforementioned perspective, AI-related courses should not only provide students with content knowledge related to AI but also empower them and make them feel comfortable and confident to participate in their professional lives. However, AI education in K-12 schools is still at an early stage of development. Limited empirical evidence exists for a positive relationship between AI-related educational opportunities and student readiness for an AI-infused future [23,24]. Moreover, research on student readiness in other contexts, such as higher education or online education, has provided mixed results regarding the potential effect of gender differences on student readiness. For example, Young [25] found that male students exhibited greater confidence and readiness than female students did in using computers; however, Shivers-Blackwell and Charles [26] found that female students exhibited a significantly higher readiness for technology change than male students did. Current research indicates that gender has a significant effect in science, technology, engineering, and mathematics (STEM) education, with effects that tend to favor male students over female students [27,28,29]. Considering that AI is an emerging form of technology that may have a considerable influence on workplaces and education [30], the effect of gender differences on AI-related student readiness must be investigated.
To advance AI education and AI-related research, valid and reliable instruments or assessment tools must be developed for facilitating relevant investigations. Currently, limited knowledge is available on the feasibility of equipping young students with AI readiness and on variables that might influence the development of AI readiness. Without a valid instrument for examining the construct of student readiness, advancing a new avenue of theoretical investigation that incorporates the aforementioned variables into teaching and learning processes is difficult. Therefore, this study aimed to design a survey questionnaire to measure primary school students’ perception of readiness for an AI-infused future.

2.2. Constructs for Measuring Student Readiness

The teaching and learning of AI should foster student readiness for an AI-infused future. Student readiness is not an independent factor but is interrelated with many other emotional and psychological factors [31]. To feel ready for an AI-infused future, students must have knowledge regarding AI and what an AI-infused future looks like. With such knowledge, they can understand the relevance and significance of AI technology and position themselves in the AI-infused future. Moreover, the acquisition of AI knowledge should enable students to form a foundation for obtaining confidence in AI learning and reducing their fears for a future full of uncertainty. Through the literature review conducted in this study, we identified four emotional and psychological factors that are theoretically associated with and are likely to affect student readiness: AI literacy, confidence, relevance, and anxiety.

2.2.1. AI Literacy

Ajzen [11] characterized people’s behaviors according to the knowledge or beliefs they acquire through formal or informal channels and their interpretations of that knowledge, which form their control beliefs and attitude toward a certain action. Control beliefs are generally referred to as self-efficacy and are closely related to the notion of literacy in the field of technology education. The notion of literacy refers to a user’s ability to access, analyze, and use information to achieve an intended purpose [32]. Multiple forms of literacy associated with technological education, such as information, digital, and media literacy, have emerged. Sometimes, these terms are used without clear boundaries. Considering the increasing importance of AI, we refer to a student’s ability to access and use AI-related knowledge and skills as AI literacy. The knowledge and skills associated with literacy are often included as the learning objectives of curriculum development. The intended knowledge also indicates the intended degree of student development after the instruction. The intended knowledge is based on the vision and scope of the curriculum, including what type of future society we are preparing the students for and what type of future citizens we want the students to grow into (see [33]). From this perspective, AI literacy can be considered as the knowledge foundation that provides students with an efficacious perception of AI technology, which could predict student readiness. A study indicated that information literacy self-efficacy predicts nursing students’ future adoption of evidence-based practice [34] (see Hypothesis 1). In addition, AI literacy should be positively related to student confidence according to the theory of planned behavior [11] (see Hypothesis 2). As students gain more AI-related knowledge and skills, they are likely to become more confident about themselves with regard to the AI-infused future. Such a positive effect can also mediate students’ readiness.

2.2.2. Confidence

Confidence refers to one’s belief that they have the capabilities to successfully execute a desired behavior (i.e., a belief of “I can get this job done”) [35]. Numerous scholars have identified confidence as an important factor affecting students’ performance and readiness. Major motivation theories, such as self-determination theory [36] and the theory of planned behaviors [11], have identified the significant effects of confidence on people’s intention to act in a certain manner. Komarraju et al. [37] noted that the greater the confidence regarding one’s ability, the more likely one is to perceive being prepared and ready for relevant learning or work opportunities (see Hypothesis 3). Enhancing the level of confidence, which influences the effective learning outcome, is an objective of instruction because confidence enables one to be continuously engaged in learning [12,23]. Differences in confidence levels must be understood in terms of a student’s learning position for a specific subject matter.

2.2.3. Relevance

In the context of learning, relevance refers to the association between the subject matter being learned and a student’s internal needs, future goals, or living environments [12,38]. Thus, relevance is investigated by assessing personal intrinsic and instrumental connections with a phenomenon. Consequently, relevance can be viewed as an attitude toward an event [11]. In this study, relevance indicates how students perceive themselves in relation to AI technology and the AI-infused future. For students to form a perception of relevance, they must possess some knowledge regarding AI (see Hypothesis 4).
A sense of relevance can be developed through different methods during the instruction process [39,40,41,42]. AI-related teaching and learning often involve the use of concrete examples that are familiar to students or are from their daily lives. Teachers might also explain the present worth and future usefulness of AI technology. For example, they may explain how image recognition can be used to unlock mobile phones or detect cancer growth. The connection between AI knowledge and students’ personal lives can help students to establish relevance, which makes the purpose of learning explicit. Relevance helps students to learn better [12] and thus serves to enhance one’s sense of being ready (see Hypothesis 5). Once relevance is developed, students would be more motivated to learn about AI and better prepare themselves for an AI-infused world, making informed decisions regarding their educational and career pathways.

2.2.4. Anxiety

Technology usually triggers both positive and negative affects among potential users, whose reactions range from optimism about using the technology to discomfort with it [13]. These experiences may cause users to embrace or reject the technology. Anxiety is a negative feeling that can be caused by AI; it refers to a conscious, fearful emotional state. Wang and Wang [43] validated a four-factor, 21-item AI anxiety scale for the general public. Technology-related anxiety can be defined as apprehension or fear about using technology [44]. It includes negative beliefs regarding technology, insecurity, nervousness, fear, intimidation, and hesitation [45]. Negative emotions associated with technology can affect overall learning experiences with technology and its future adoption. Anxiety, confusion, frustration, anger, and similar emotional states can affect not only the learning process but also the productivity, social relationships, and overall well-being of individuals [46] (see Hypothesis 6). Reducing student anxiety toward AI technology is therefore pedagogically sensible for better preparing students for an AI-infused future.
The aforementioned literature review indicates that AI literacy, confidence, relevance, and anxiety are related to student readiness. These factors were included in the survey conducted to measure student readiness and conceptions toward an AI-infused future. To further understand the relationship among the aforementioned five factors, we conducted structural equation modeling (SEM). The hypotheses for the SEM are as follows:
Hypothesis 1.
AI literacy significantly and positively influences AI readiness.
Hypothesis 2.
AI literacy significantly and positively influences students’ confidence in learning AI.
Hypothesis 3.
Confidence in learning AI significantly and positively influences AI readiness.
Hypothesis 4.
AI literacy significantly and positively influences students’ sense of relevance of AI.
Hypothesis 5.
Relevance of AI significantly and positively influences AI readiness.
Hypothesis 6.
AI anxiety significantly and negatively influences AI readiness.
The research questions examined in this study are as follows:
Q1:
Does the instrument designed to measure students’ perception of readiness to learn AI provide an interpretable factor structure?
Q2:
What are the structural relationships among students’ perception of readiness to learn AI, confidence, anxiety, AI literacy, and the relevance of AI learning?
Q3:
Do gender differences exist between male and female students’ perceptions of readiness for AI learning?
Q4:
What are the students’ sentiments toward AI as reflected by open-ended questions?

3. Method

3.1. Background

AI education in the K-12 sector has been initiated and introduced with state policies in China since 2017. China’s State Council [47] has published a policy document titled New Generation Artificial Intelligence Development Plan, in which the K-12 sector is encouraged to develop an AI-relevant curriculum and promote AI literacy among students. Another state plan on education modernization was released in 2019 to promote the integration of “smart technology” with K-20 education and support the corresponding teacher professional development. Despite calls from the Chinese government, no central curriculum standard has been developed to guide the teaching and learning of AI in schools. Moreover, no AI-specific content knowledge is included in the current central curriculum. Thus, AI education is still in the experimental stage in China. Teachers and schools have adopted a grassroots approach to explore AI development. Moreover, some AI education programs for elementary and secondary schools have been designed and are currently being tested.

3.2. Participants

Against the aforementioned background, an AI education project was initiated by a school district in Beijing, China, in 2018. This project is led by a leading researcher in the field of computer sciences (CS), and 15 elementary schools within the aforementioned school district are participating in the project. In this project, 25 CS teachers have worked together to develop an AI curriculum and a set of textbooks and materials and to implement the curriculum in their schools. From the participating schools, we used purposive sampling to target all 17 classes, comprising 707 elementary students engaged in the AI course, because these students were able to understand the survey. The invitations were distributed with the aid of the aforementioned CS teachers. The participants were fourth to sixth graders, and the average age of the participants was 9.95 years (SD = 1.08 years). A total of 57.2% of the participants were male students, and 42.8% were female students. The participants reported having spent an average of 5.91 h (SD = 5.62 h) on AI learning and projects.

3.3. Instruments

To investigate students’ AI learning readiness and psychological well-being, a questionnaire with 4-point Likert-type items (1 = strongly disagree to 4 = strongly agree) was used for data collection. The questionnaire consisted of two parts: students’ self-reported basic information (i.e., grade, gender, age, and AI learning hours) and responses to the psychological variables (i.e., readiness, confidence, anxiety, AI literacy, and relevance). All the items for the five psychological variables were adopted from previously validated surveys or developed according to the situated research context. The following open-ended item was also included: “Can you tell us your view about AI?”
The AI readiness construct comprised six items for measuring students’ views regarding current AI applications (e.g., “AI technology gives me more control over my daily life”). These items were adapted from Parasuraman’s [48] Technology Readiness Index (the original scale has 10 items; α = 0.78).
The confidence in AI was assessed using a modified version of the confidence scale of the Attention, Relevance, Confidence, and Satisfaction (ARCS) model developed by Song and Keller (five items; α = 0.70) [49] to measure students’ confidence when they were in a computer-assisted learning context (e.g., “I feel confident that I can learn basic concepts of AI in this AI class.”).
AI anxiety was assessed using a modified version of the Motivated Strategies for Learning Questionnaire (five items; α = 0.90) of Pintrich et al. [50] to measure students’ anxiety in AI learning (e.g., “When I consider the capabilities of AI, I think about how difficult my future will be.”).
The AI literacy scale was self-constructed to measure students’ basic understanding of AI knowledge and skills. The constructed scale has five items and was designed according to the schools’ AI curriculum content (e.g., “I can use AI-assisted image search tools.”).
The relevance of AI was assessed using a modified version of the relevance scale of the ARCS model developed by Song and Keller (six items; α = 0.73) [49] to measure the connections of AI (five items; e.g., “The things I am learning in this AI class will be useful to me.”).
The developed 26-item instrument was subjected to expert review to assess its content validity. Four scholars with over five research publications in the fields of curriculum evaluation and psychology were invited to review the draft questionnaire, identify potential sources of error, and provide critical suggestions to minimize them. The draft questionnaire was revised and improved according to the inputs from the expert reviewers. The revised questionnaire was then shared with 12 teachers from the AI education project mentioned in Section 3.2 through a focus group interview. The teachers read the survey items one by one, focusing on the statements and language, to ensure that the items delivered their intended messages to elementary students in understandable language. After obtaining the teachers’ feedback, the survey was further revised and finalized.

3.4. Data Collection and Analysis

The developed survey was administered through the Internet. At the conclusion of the AI courses, the teachers introduced the survey to their students. The teachers then invited the students to share their perspectives and experiences through the survey. In particular, the teachers stressed and explained the anonymity of the survey to eliminate any concerns of the students. In total, 549 students participated in the survey, and the response rate was 77.65%. There was no missing data among the 549 responses.
The 549 responses were randomly divided into two subsamples for exploratory factor analysis (EFA; n = 220; 57.3% male students) and confirmatory factor analysis (CFA; n = 329; 57.1% male students). We followed the relevant guidelines on sample size to ensure that the minimum requirements were met in both analyses [51,52]. The guidelines proposed by Gorsuch [51] for EFA suggest a ratio of 5 participants per measured variable and a sample size of no less than 100. The first subsample, comprising 220 student responses, was analyzed through EFA to assess the validity and reliability of the measurement. EFA was conducted using IBM Statistical Product and Service Solutions (SPSS) version 25. The second subsample, comprising 329 student responses, was used in CFA and SEM to test the proposed hypotheses. Regarding the sample size for CFA, Hair et al. [52] suggested a larger sample, considering several influencing factors: the number of latent variables, the lowest number of indicators per latent variable, and the communalities. The appropriate sample size for this study was therefore >300. The skewness (ranging from −2.232 to −0.357) and kurtosis (ranging from −1.590 to 4.649) of the items did not exceed the cutoffs of |3| and |8|, respectively [53], which indicated that univariate normality was acceptable. Mardia’s coefficient was computed to verify that multivariate normality was acceptable [54]. The obtained value of 452.643 is less than the recommended threshold of p(p + 2) = 22(24) = 528 (where p is the number of observed variables); thus, the requirement of multivariate normality was satisfied. Next, multigroup invariance analyses were conducted to compare gender differences by using AMOS 20.0.
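The normality screens described above reduce to a few lines of code. Because the item-level survey data are not public, the sketch below uses simulated 4-point responses as a stand-in and applies the same cutoffs (|skewness| < 3, |kurtosis| < 8, and Mardia threshold p(p + 2)):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
# Hypothetical 4-point Likert responses for 22 items from 329 students;
# the actual survey data are not public, so simulated values stand in here.
data = rng.integers(1, 5, size=(329, 22)).astype(float)

# Univariate normality screen: |skewness| < 3 and |kurtosis| < 8 per item.
sk = skew(data, axis=0)
ku = kurtosis(data, axis=0)          # Fisher (excess) kurtosis
univariate_ok = (np.abs(sk) < 3).all() and (np.abs(ku) < 8).all()

# Multivariate normality: Mardia's coefficient should stay below p(p + 2),
# where p is the number of observed variables (22 items -> 528).
p = data.shape[1]
mardia_cutoff = p * (p + 2)          # 22 * 24 = 528
print(univariate_ok, mardia_cutoff)
```

With the reported Mardia’s coefficient of 452.643, this threshold of 528 is satisfied.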
Vandenberg and Lance [55] recommended stringent steps to examine the following types of invariance across groups: configural invariance, metric invariance, and scalar invariance. After the invariance tests, latent mean analysis was conducted to compare the means of the latent variables for the male and female students. The male student group served as the reference group, and its latent means were constrained to 0; thus, the latent means of the female student group represented the mean differences. Finally, the students’ open-ended responses were coded using open coding [56,57,58]. Because most responses comprised only one sentence, each response was assigned one code (mostly an in vivo code; see Table 6). The qualitative responses were coded independently by two authors: one coded all the data, and the other coded 30% of the full data set. The inter-coder reliability was 0.92.
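The paper reports an inter-coder reliability of 0.92 without naming the statistic. One common choice for two coders assigning nominal codes is Cohen’s kappa; the sketch below uses hypothetical codes purely for illustration and is not the authors’ computation:

```python
import numpy as np

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' nominal codes on the same responses."""
    a = np.asarray(codes_a)
    b = np.asarray(codes_b)
    labels = np.union1d(a, b)
    p_o = np.mean(a == b)                              # observed agreement
    # Expected agreement if the coders assigned codes independently.
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for a double-coded subset of responses.
coder1 = ["useful", "useful", "fear", "fear"]
coder2 = ["useful", "useful", "fear", "useful"]
print(round(cohens_kappa(coder1, coder2), 2))  # -> 0.5
```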

4. Results

4.1. Factor Analysis of the Survey

To validate the survey, EFA was performed with principal axis factoring and the direct oblimin rotation method to clarify the factor structure of the survey. The results indicated that the measures of sampling adequacy were acceptable: the Kaiser–Meyer–Olkin value was 0.908, Bartlett’s test of sphericity yielded χ² = 4260.094 (df = 231, p < 0.001), and 71.98% of the total variance was explained.
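The two sampling-adequacy checks (Kaiser–Meyer–Olkin and Bartlett’s test of sphericity) can be computed from a raw data matrix as follows; the data here are simulated around a single common factor purely for illustration, since the study’s item-level data are not public:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test: is the correlation matrix different from identity?"""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, df, chi2.sf(stat, df)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                 # partial correlations given all others
    np.fill_diagonal(partial, 0)
    r2 = R**2
    np.fill_diagonal(r2, 0)
    return r2.sum() / (r2.sum() + (partial**2).sum())

rng = np.random.default_rng(1)
# Hypothetical responses: one common factor driving 6 items, 220 respondents.
f = rng.normal(size=(220, 1))
data = f + 0.7 * rng.normal(size=(220, 6))
stat, df, pval = bartlett_sphericity(data)
print(round(kmo(data), 2), round(pval, 4))
```

A KMO above 0.9, as reported, is conventionally considered “marvelous” sampling adequacy.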
As shown in Table 1, a total of 22 items (overall α = 0.90) were grouped into the following factors: AI readiness, AI anxiety, AI literacy, relevance of AI, and confidence in AI. The reliability (α) coefficients of the aforementioned five factors were 0.92, 0.94, 0.90, 0.91, and 0.89, respectively, which suggested that the aforementioned five factors were sufficiently reliable for assessing the students’ AI learning. Four items were removed because of insufficient factor loadings or cross loadings.
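Cronbach’s alpha, the reliability coefficient reported above, follows directly from the item variances and the total-score variance. The snippet below is a minimal sketch on simulated Likert-type scores, not the study’s data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 4-point responses to a 5-item scale driven by one trait.
rng = np.random.default_rng(2)
trait = rng.normal(size=(220, 1))
scores = np.clip(np.round(2.5 + trait + 0.8 * rng.normal(size=(220, 5))), 1, 4)
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```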
The average variance extracted (AVE) and composite reliability (CR) were also computed. All the CR values exceeded 0.7, and the AVE values exceeded 0.5 [readiness (CR = 0.88, AVE = 0.59), anxiety (CR = 0.94, AVE = 0.81), AI literacy (CR = 0.83, AVE = 0.55), relevance (CR = 0.86, AVE = 0.55), and confidence (CR = 0.82, AVE = 0.54)].
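CR and AVE are simple functions of the standardized factor loadings; the loadings below are hypothetical, since the paper reports only the resulting CR and AVE values:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)) for standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return num / (num + (1 - l**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return (l**2).mean()

# Hypothetical standardized loadings for a three-item factor.
loadings = [0.8, 0.8, 0.8]
print(round(composite_reliability(loadings), 3),
      round(average_variance_extracted(loadings), 3))  # -> 0.842 0.64
```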
Pearson’s correlations were computed for the first subsample to examine the relationships among AI readiness, confidence in AI, AI anxiety, AI literacy, and relevance of AI. Table 2 indicates that significant and positive relationships exist among AI readiness, confidence in AI, AI literacy, and relevance of AI (r = 0.59 to r = 0.70, p < 0.001). However, AI anxiety was not significantly correlated with AI readiness, AI literacy, relevance of AI, or confidence in AI (r = −0.07 to r = −0.01, p > 0.05). This finding indicates that enhancing students’ confidence and AI literacy as well as promoting the relevance of AI may facilitate students’ AI readiness. The diagonal elements in the matrix in Table 2 are the square roots of the AVE values. Each of these elements is higher than the correlations in its corresponding row and column; thus, the questionnaire has satisfactory discriminant validity.
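The discriminant-validity check described here is the Fornell–Larcker criterion: each factor’s √AVE must exceed its correlations with the other factors. In the sketch below, the AVE values are those reported in the paper, while the factor correlations are illustrative values within the reported ranges (the full Table 2 matrix is not reproduced here):

```python
import numpy as np

def fornell_larcker_ok(corr, ave):
    """Each factor's sqrt(AVE) must exceed its correlations with other factors."""
    corr = np.asarray(corr, dtype=float)
    root_ave = np.sqrt(np.asarray(ave, dtype=float))
    off = corr - np.diag(np.diag(corr))     # zero out the diagonal
    return bool(np.all(root_ave > off.max(axis=1)))

# Reported AVEs: readiness, anxiety, AI literacy, relevance, confidence.
ave = [0.59, 0.81, 0.55, 0.55, 0.54]
# Illustrative correlations within the reported ranges
# (0.59-0.70 among four factors; anxiety near zero).
corr = np.array([
    [1.00, -0.05, 0.62, 0.70, 0.65],
    [-0.05, 1.00, -0.03, -0.01, -0.07],
    [0.62, -0.03, 1.00, 0.59, 0.63],
    [0.70, -0.01, 0.59, 1.00, 0.61],
    [0.65, -0.07, 0.63, 0.61, 1.00],
])
print(fornell_larcker_ok(corr, ave))  # -> True
```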

4.2. SEM of the Students’ AI Learning

After a five-factor structure was identified through EFA, we examined the proposed hypotheses by using AMOS. CFA was conducted to confirm the construct validity and the structure of students’ AI learning. The model fitting results were as follows: χ2/df = 2.30, GFI = 0.89, TLI = 0.95, CFI = 0.96, and RMSEA = 0.063 [59].
The following fitness indices were obtained through SEM: χ2/df = 2.45, GFI = 0.87, TLI = 0.94, CFI = 0.94, and RMSEA = 0.067 [59]. Path analysis was used to examine the relationships among the aforementioned variables. As displayed in Figure 1, the students’ confidence in learning AI (b = 0.67, p < 0.001) and perception of the relevance of AI (b = 0.70, p < 0.001) could be inferred from their AI literacy. However, AI readiness could not be inferred from AI literacy directly (b = 0.02, p > 0.05). The students’ AI readiness could be inferred from their AI confidence (b = 0.46, p < 0.001) and from their perception of AI relevance (b = 0.37, p < 0.001) but not from their anxiety (b = −0.06, p > 0.05). The results for the proposed hypotheses are presented in Table 3 and indicate that the students’ AI anxiety and AI literacy did not directly influence their AI readiness. Building students’ AI literacy enhances their AI confidence and perception of AI relevance, which in turn enhance their AI readiness.
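As a consistency check, RMSEA can be recovered from the reported χ²/df ratio and the subsample size (N = 329) via the standard formula RMSEA = √(max(χ² − df, 0) / (df(N − 1))); the small discrepancy from the reported 0.067 reflects rounding of χ²/df:

```python
import math

def rmsea_from_ratio(chi2_over_df, n):
    """RMSEA from the chi-square/df ratio: sqrt((chi2/df - 1) / (N - 1))."""
    return math.sqrt(max(chi2_over_df - 1.0, 0.0) / (n - 1))

# Reported SEM values: chi2/df = 2.45 with the CFA/SEM subsample of N = 329.
print(round(rmsea_from_ratio(2.45, 329), 3))  # -> 0.066, close to the reported 0.067
```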

4.3. Gender Differences

Multigroup SEM was conducted to assess configural invariance by analyzing the male and female groups without constraining equality across the groups. As presented in Table 4, the results exhibited a suitable goodness of fit (χ2 = 876.605, df = 398, χ2/df = 2.20, CFI = 0.931, and RMSEA = 0.051) in the configural invariance test, which indicated that the structural patterns were similar across the two groups; this model therefore served as the baseline for the metric invariance test. Next, a metric invariance test was conducted by constraining the factor loadings to be equal across the groups, and a good model fit (χ2 = 908.730, df = 415, χ2/df = 2.19, CFI = 0.929, and RMSEA = 0.051) was obtained. Finally, a scalar invariance test was conducted by constraining the intercepts across the groups to be invariant, and a good model fit (χ2 = 954.292, df = 437, χ2/df = 2.18, CFI = 0.926, and RMSEA = 0.051) was again obtained. Because the chi-square value is sensitive to sample size, Cheung and Rensvold (2002) recommended using a CFI change smaller than 0.01 as the criterion for evaluating the invariance hypotheses.
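Applying Cheung and Rensvold’s ΔCFI criterion to the reported fit values is simple arithmetic; both drops are well under the 0.01 cutoff, supporting invariance at each step:

```python
# CFI values reported for the three increasingly constrained models.
cfi = {"configural": 0.931, "metric": 0.929, "scalar": 0.926}

# Change in CFI at each step relative to the previous (less constrained) model.
steps = list(cfi.values())
delta = [round(a - b, 3) for a, b in zip(steps, steps[1:])]

# Invariance holds at each step if CFI drops by less than 0.01.
invariant = all(d < 0.01 for d in delta)
print(delta, invariant)  # -> [0.002, 0.003] True
```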

Latent Mean Analysis

The latent mean of the male group was fixed to 0 because this group served as the reference group for comparisons with the female group. Under this specification, the model represented a good fit (χ2 = 931.099, df = 432, χ2/df = 2.155, CFI = 0.926, and RMSEA = 0.051). Table 5 presents the latent mean differences for the five variables. Significant gender differences were found for three factors: the relevance of AI, confidence in AI, and AI readiness, with the male students reporting higher means than the female students on each. No significant mean differences were found across the gender groups for AI anxiety or AI literacy.

4.4. Analyses of Students’ Open-Ended Responses

All the students' open-ended responses were coded as presented in Table 6, using in vivo codes (i.e., the students' own words) as far as possible. The comments were further categorized as positive, neutral, or negative to clearly indicate the students' broad sentiment regarding AI.
Although a quarter of the students did not answer the open-ended question, the majority of the responses were positive. In general, the students believed that AI is useful (20.4%), convenient (9.7%), good (10.9%), interesting (6.2%), and likable (5.6%). Some students perceived AI as an advanced technology (4.6%) that is getting better and stronger each day (1.6%). Moreover, certain students intended to learn more about AI (3.6%) and to use it to help others (social benefits; 1.6%). More than 60% of the students were positive about learning AI. The mixed-response group (3.6%) appreciated the positive aspects of AI but feared AI replacing jobs and taking control of the world; the fear group (3.6%) shared these concerns. Only around 1% of the students felt that AI was difficult to learn. Overall, the findings correspond well with the quantitative data: the mean scores of factors such as confidence, readiness, AI literacy, and relevance were positive (i.e., above 2.5, the mid-point of the four-point Likert scale), and the mean score of anxiety indicated that the students were not anxious (i.e., <2.5; see Table 1). It may nonetheless be necessary to address the common fear among the students that machines may take over the world; this sentiment is likely a popular misconception propagated by Hollywood movies rather than by the science of AI.
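The sentiment shares can be recomputed from the code frequencies reported in Table 6. This tally (an illustrative check, not part of the original coding procedure) confirms that more than 60% of the responses were positive:

```python
# Recomputing sentiment shares from the code frequencies reported in Table 6
# (549 responses in total, including 145 blank ones coded NIL).
from collections import Counter

frequencies = Counter({
    "NIL": 145,  # no comment
    # positive codes
    "useful": 112, "good": 60, "convenient": 53, "interesting": 34,
    "like": 31, "advanced": 25, "intend": 20, "better": 9, "social": 9,
    # neutral codes
    "mixed": 20, "change": 5,
    # negative codes
    "difficult": 6, "fear": 20,
})

positive = {"useful", "good", "convenient", "interesting",
            "like", "advanced", "intend", "better", "social"}

total = sum(frequencies.values())
pos_share = sum(frequencies[c] for c in positive) / total

print(f"total coded responses: {total}")
print(f"positive share: {pos_share:.1%}")
```

The positive codes alone account for 353 of the 549 responses, i.e., roughly 64%.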

5. Discussion and Conclusions

The well-being of any individual depends considerably on whether they can learn to adapt to the technological changes that are reshaping the socio-economic landscape [28,30]. In the current context, the importance of fostering students' readiness for an AI-infused future has been voiced by many educators [5,6,7]. To contribute to efforts in promoting AI literacy, this study formulated and validated a survey that examines students' AI readiness and its associated factors among elementary school children. In addition, SEM was used to determine statistical predictive relationships, and an open-ended item was analyzed to corroborate the research findings. The findings are discussed in the following text.
Statistical analyses suggested that the survey instrument designed for assessing student readiness for AI was valid and reliable. The survey was determined to have a five-factor structure with discriminant validity; that is, the five factors were correlated with but clearly distinguishable from each other. This research thus provides a valid survey for measuring and examining the level of student readiness for AI as well as AI relevance, confidence, anxiety, and literacy in the context of an AI education program. The constructed survey may be used as an evaluation tool by teachers and educators for formative or summative purposes in assessing important psychological aspects of AI education programs. AI education programs must achieve relevance, reduce people's anxiety toward AI, and promote learners' confidence [10,41] to prepare psychologically well-adapted learners for an AI-infused future. AI readiness can be achieved by fostering AI literacy. Currently, multiple forms of digital literacy are gaining importance [30]; in this context, the present study contributes to the literature by investigating the emerging factor of AI literacy. The validated survey can help teachers better understand and monitor students' learning as well as reflect on the design of the AI curriculum and the associated teaching effectiveness.
The structural equation model indicated that four out of the six proposed hypotheses were supported. AI literacy was not predictive of AI readiness. Rather, the influence of AI literacy was mediated by the students' confidence and perception of AI relevance. These findings are in agreement with the general theory of planned behavior [9] in that background factors such as knowledge (i.e., AI literacy) influence people's behaviors through control beliefs (i.e., confidence) and their attitude toward the behavior (i.e., relevance). Thus, students' perception of AI readiness is influenced by their confidence that they can learn and use AI knowledge as well as their assessment that AI knowledge is relevant to their lives. This finding corresponds to that reported by Amit-Aharon et al. [32], who indicated that literacy-based efficacy can contribute to the readiness to adopt new practices that involve using the new knowledge. The structural model highlights the need to design a high-quality AI education program that helps students understand the relevance of AI knowledge and enhances their confidence in AI learning. The importance of relevance and confidence has been repeatedly identified in Keller's model of motivational design of instruction [10]. In the studied AI education program, examples of how AI is employed in people's daily lives through smartphones and other computing devices, and of the possibilities of using AI to solve problems in areas such as health and traffic congestion, can promote students' perceived AI relevance and confidence [31]. Surprisingly, the students' AI readiness was predicted neither by their anxiety regarding AI nor directly by their AI literacy. This result was obtained possibly because the students are young and are not directly facing the threat of job loss due to AI [41]. The effects of the AI curriculum in building a strong and optimistic outlook should not be discounted.
In the future, longitudinal studies can be performed to ascertain the effects of AI learning over time.
Gender differences [23,24,26] are frequently noted in technology- and engineering-related fields of study. The present study did not find significant gender differences in the students' AI anxiety and AI literacy. This finding indicates that both genders are not developing negative views regarding AI and are equipped with a similar literacy base for the AI education program. Nonetheless, the male students reported a higher confidence, relevance, and readiness for AI than the female students did. Gender differences thus emerged even at this early stage of AI education. This finding may be related to the traditional cultural outlook that male individuals are more suited than female individuals to engineering subjects. In particular, in Asian societies, where patriarchal values and social norms keep gender inequalities alive, the stereotype threat exists in STEM education [60,61]. Traditionally, engineering and technology have been male-dominated fields, and some people believe that men are mathematically superior and better suited to engineering jobs than women are [25,27]. Influenced by such an implicit bias against women, female students may be less confident than male students regarding their abilities, even with equal AI literacy. They might also face difficulties in relating to the male-dominated field because of the limited number of female role models [62,63,64]. Due to this gender bias and its effect on self-efficacy, female students might perceive themselves as being less prepared, or even not ready, for an AI-infused future. Therefore, enhancing student readiness for AI technology should not be limited to the school curriculum and classroom teaching. Society as a whole should take steps to forge a positive culture and send encouraging messages to female students to address gender equity issues in AI education. Ertl et al. [25] recommended using role models to reduce the effects of stereotyping on female students.
The sentiments among the students, as reflected by the open-ended responses, indicated that the students were generally excited to learn about AI and viewed AI as a powerful and useful technology. Few students were fearful and anxious regarding AI. The student sentiments confirmed the quantitative findings. Overall, the questionnaire survey indicated that with an appropriate curriculum design, young students can be encouraged to learn about AI, which can help prepare them for an AI-infused world.
There are some limitations in this study. First, for each latent variable, there should be at least 10–20 participants to optimize the statistical outputs. Although our sample size met the minimum requirement stated in the Method section, both samples were near the lower boundary of the required number of participants; future research should involve larger samples. Second, the participating students were relatively young and may not understand the full implications of AI technology. The study should therefore be repeated with older students.

Author Contributions

Conceptualization, Y.D. and C.-S.C.; methodology, Y.D. and C.-S.C.; software, C.-S.C. and P.-Y.L.; validation, Y.D., C.-S.C. and P.-Y.L.; formal analysis, C.-S.C. and P.-Y.L.; investigation, Y.D. and C.-S.C.; resources, Y.G. and J.Q.; data curation, Y.D., C.-S.C. and P.-Y.L.; writing—original draft preparation, Y.D. and P.-Y.L.; writing—review and editing, Y.D., C.-S.C., P.-Y.L. and M.S.-Y.J.; visualization, C.-S.C. and P.-Y.L.; supervision, Y.D.; project administration, Y.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The items for technology readiness in this study were adapted from the Technology Readiness Index, which has been copyrighted by A. Parasuraman and Rockbridge Associates, Inc. 2000. The Technology Readiness Index may be duplicated only with written permission from the original authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cope, B.; Kalantzis, M.; Searsmith, D. Artificial Intelligence for Education: Knowledge and Its Assessment in AI-Enabled Learning Ecologies. Educ. Philos. Theory 2020, 1–17.
  2. Makridakis, S. The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms. Futures 2017, 90, 46–60.
  3. Woolf, B.; Lane, H.; Chaudhri, V.; Kolodner, J. AI Grand Challenges for Education. AI Mag. 2013, 34, 66–84.
  4. UN General Assembly. Transforming Our World: The 2030 Agenda for Sustainable Development; United Nations Resolution A/RES/70/1; UN General Assembly: New York, NY, USA, 2015.
  5. Gherhes, V.; Obrad, C. Technical and Humanities Students' Perspectives on the Development and Sustainability of Artificial Intelligence (AI). Sustainability 2018, 10, 3066.
  6. Chai, C.S.; Lin, P.-Y.; Jong, M.S.Y.; Dai, Y.; Chiu, T.K.F.; Qin, J.J. Perceptions of and Behavioral Intentions towards Learning Artificial Intelligence in Primary School Students. Educ. Technol. Soc. 2020, in press.
  7. Chiu, T.K.F.; Chai, C.S. Sustainable Curriculum Planning for Artificial Intelligence Education: A Self-Determination Theory Perspective. Sustainability 2020, 12, 5568.
  8. Knox, J. Artificial Intelligence and Education in China. Learn. Media Technol. 2020, 1–14.
  9. Elliott, A. The Culture of AI; Routledge: London, UK, 2019.
  10. Tomlinson, C.; Brighton, C.; Hertberg, H.; Callahan, C.; Moon, T.; Brimijoin, K.; Conover, L.; Reynolds, T. Differentiating Instruction in Response to Student Readiness, Interest, and Learning Profile in Academically Diverse Classrooms: A Review of Literature. J. Educ. Gift. 2003, 27, 119–145.
  11. Ajzen, I. The Theory of Planned Behavior. In Handbook of Theories of Social Psychology; Van Lange, P., Kruglanski, A., Higgins, E., Eds.; Sage: London, UK, 2012; pp. 438–459.
  12. Keller, J. Motivational Design for Learning and Performance: The ARCS Model Approach; Springer: Boston, MA, USA, 2010.
  13. Parasuraman, A.; Colby, C. An Updated and Streamlined Technology Readiness Index. J. Serv. Res. 2015, 18, 59–74.
  14. Smith, P. Learning Preferences and Readiness for Online Learning. Educ. Psychol. 2005, 25, 3–12.
  15. Hussin, S.; Radzi Manap, M.; Amir, Z.; Krish, P. Mobile Learning Readiness among Malaysian Students at Higher Learning Institutes. Asian Soc. Sci. 2012, 8, 276–283.
  16. Peterson, C.; Casillas, A.; Robbins, S. The Student Readiness Inventory and the Big Five: Examining Social Desirability and College Academic Performance. Personal. Individ. Differ. 2006, 41, 663–673.
  17. Xiong, Y.; So, H.; Toh, Y. Assessing Learners' Perceived Readiness for Computer-Supported Collaborative Learning (CSCL): A Study on Initial Development and Validation. J. Comput. High. Educ. 2015, 27, 215–239.
  18. Dray, B.; Lowenthal, P.; Miszkiewicz, M.; Ruiz-Primo, M.; Marczynski, K. Developing an Instrument to Assess Student Readiness for Online Learning: A Validation Study. Distance Educ. 2011, 32, 29–47.
  19. Pillay, H.; Irving, K.; Tones, M. Validation of the Diagnostic Tool for Assessing Tertiary Students' Readiness for Online Learning. High. Educ. Res. Dev. 2007, 26, 217–234.
  20. Jong, M.S.Y.; Lee, J.H.M.; Shang, J.J. Educational Use of Computer Games: Where We Are and What's Next. In Reshaping Learning: Frontiers of Learning Technology in a Global Context; Huang, R., Kinshuk, Spector, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 299–320.
  21. Chai, C.S.; Liang, J.-C.; Tsai, C.-C.; Dong, Y. Surveying and Modelling China High School Students' Experience of and Preferences for Twenty-First-Century Learning and Their Academic and Knowledge Creation Efficacy. Educ. Stud. 2019, 1–18.
  22. Soulé, H.; Warrick, T. Defining 21st Century Readiness for All Students: What We Know and How to Get There. Psychol. Aesthet. Creat. Arts 2015, 9, 178–186.
  23. Chai, C.S.; Lin, P.-Y.; Jong, M.S.; Dai, Y.; Chiu, T.K.; Qin, J. Primary School Students' Perceptions and Behavioral Intentions of Learning Artificial Intelligence. Educ. Technol. Soc. 2020, in press.
  24. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39.
  25. Young, B.J. Gender Differences in Student Attitudes toward Computers. J. Res. Comput. Educ. 2000, 33, 204–216.
  26. Shivers-Blackwell, S.L.; Charles, A.C. Ready, Set, Go: Examining Student Readiness to Use ERP Technology. J. Manag. Dev. 2006, 25, 795–805.
  27. Ertl, B.; Luttenberger, S.; Paechter, M. The Impact of Gender Stereotypes on the Self-Concept of Female Students in STEM Subjects with an Under-Representation of Females. Front. Psychol. 2017, 8, 703.
  28. Lee, M.-H.; Chai, C.S.; Hong, H.-Y. STEM Education in Asia Pacific: Challenges and Development. Asia-Pac. Educ. Res. 2019, 28, 1–4.
  29. Leslie, S.-J.; Cimpian, A.; Meyer, M.; Freeland, E. Expectations of Brilliance Underlie Gender Distributions across Academic Disciplines. Science 2015, 347, 262–265.
  30. Seldon, A.; Abidoye, O. The Fourth Education Revolution: Will Artificial Intelligence Liberate or Infantilise Humanity; The University of Buckingham Press: Angleterre, UK, 2018.
  31. Hung, M.-L.; Chou, C.; Chen, C.-H.; Own, Z.-Y. Learner Readiness for Online Learning: Scale Development and Student Perceptions. Comput. Educ. 2010, 55, 1080–1090.
  32. Choi, J.R.; Straubhaar, J.; Skouras, M.; Park, S.; Santillana, M.; Strover, S. Techno-Capital: Theorizing Media and Information Literacy through Information Technology Capabilities. New Media Soc. 2020.
  33. Qin, J.J.; Ma, F.G.; Guo, Y.M. Foundations of Artificial Intelligence for Primary School; Popular Science Press: Beijing, China, 2019.
  34. Amit-Aharon, A.; Melnikov, S.; Warshawski, S. The Effect of Evidence-Based Practice Perception, Information Literacy Self-Efficacy, and Academic Motivation on Nursing Students' Future Implementation of Evidence-Based Practice. J. Prof. Nurs. 2020.
  35. Smith, P.J.; Murphy, K.L.; Mahoney, S.E. Towards Identifying Factors Underlying Readiness for Online Learning: An Exploratory Study. Distance Educ. 2003, 24, 57–67.
  36. Ryan, R.M.; Deci, E.L. Promoting Self-Determined School Engagement. In Handbook of Motivation at School; Wentzel, K.R., Wigfield, A., Eds.; Routledge: New York, NY, USA, 2009; pp. 171–195.
  37. Komarraju, M.; Ramsey, A.; Rinella, V. Cognitive and Non-Cognitive Predictors of College Readiness and Performance: Role of Academic Discipline. Learn. Individ. Differ. 2013, 24, 103–109.
  38. Kember, D.; Ho, A.; Hong, C. The Importance of Establishing Relevance in Motivating Student Learning. Act. Learn. High. Educ. 2008, 9, 249–263.
  39. Keller, J.M. Motivational Design of Instruction. In Instructional-Design Theories and Models: An Overview of Their Current Status; Reigeluth, C.M., Ed.; Erlbaum: Hillsdale, NJ, USA, 1983; pp. 383–434.
  40. Keller, J.M. Motivational Design and Multimedia: Beyond the Novelty Effect. Strateg. Hum. Resour. Dev. Rev. 1997, 1, 188–203.
  41. Keller, J.M. Motivation in Cyber Learning Environments. Int. J. Educ. Technol. 1999, 1, 7–30.
  42. Jong, M.S.Y. Promoting Elementary Pupils' Learning Motivation in Environmental Education with Mobile Inquiry-Oriented Ambience-Aware Fieldwork. Int. J. Environ. Res. Public Health 2020, 17, 2504.
  43. Wang, Y.-Y.; Wang, Y.-S. Development and Validation of an Artificial Intelligence Anxiety Scale: An Initial Application in Predicting Motivated Learning Behavior. Interact. Learn. Environ. 2019, 1–16.
  44. Igbaria, M.; Parasuraman, S. A Path Analytic Study of Individual Characteristics, Computer Anxiety and Attitudes toward Microcomputers. J. Manag. 1989, 15, 373–388.
  45. Beckers, J.; Schmidt, H. The Structure of Computer Anxiety: A Six-Factor Model. Comput. Hum. Behav. 2001, 17, 35–49.
  46. Saadé, R.G.; Kira, D. Mediating the Impact of Technology Usage on Perceived Ease of Use by Anxiety. Comput. Educ. 2007, 49, 1189–1204.
  47. New Generation Artificial Intelligence Development Plan. Available online: http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm (accessed on 6 July 2020).
  48. Parasuraman, A. Technology Readiness Index (TRI). J. Serv. Res. 2000, 2, 307–320.
  49. Song, S.H.; Keller, J.M. The ARCS Model for Developing Motivationally-Adaptive Computer-Assisted Instruction. In Proceedings of the Selected Research and Development Papers Presented at the National Convention of the Association for Educational Communications and Technology, Houston, TX, USA, 10–14 February 1999.
  50. Pintrich, P.R.; Smith, D.; Garcia, T.; McKeachie, W. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ); The University of Michigan: Ann Arbor, MI, USA, 1991.
  51. Gorsuch, R.L. Factor Analysis, 2nd ed.; Erlbaum: Hillsdale, NJ, USA, 1983.
  52. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R.L. Multivariate Data Analysis, 6th ed.; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2009.
  53. Kline, R.B. Principles and Practice of Structural Equation Modeling, 3rd ed.; Guilford Press: New York, NY, USA, 2011.
  54. Raykov, T.; Marcoulides, G.A. An Introduction to Applied Multivariate Analysis; Taylor and Francis: Hoboken, NJ, USA, 2008.
  55. Vandenberg, R.J.; Lance, C.E. A Review and Synthesis of the Measurement Invariance Literature: Suggestions, Practices, and Recommendations for Organizational Research. Organ. Res. Methods 2000, 3, 4–70.
  56. Strauss, A.L.; Corbin, J.M. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 1998.
  57. Jong, M.S.Y.; Shang, J.J.; Lee, F.L.; Lee, J.H.M.; Law, H.Y. An Exploratory Study on Teachers' Perceptions of Game-Based Situated Learning. In Learning by Effective Utilization of Technologies: Facilitating Intercultural; Mizoguchi, R., Dillenbourg, P., Zhu, Z., Eds.; IOS Press: Amsterdam, The Netherlands, 2006; pp. 525–532.
  58. Jong, M.S.Y.; Shang, J.J.; Lee, F.L.; Lee, J.H.M. An Evaluative Study on VISOLE—Virtual Interactive Student-Oriented Learning Environment. IEEE Trans. Learn. Technol. 2010, 3, 307–318.
  59. Hair, J.F., Jr.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R.L. Multivariate Data Analysis: A Global Perspective; Pearson Education: Upper Saddle River, NJ, USA, 2010.
  60. Charles, M.; Bradley, K. Indulging Our Gendered Selves? Sex Segregation by Field of Study in 44 Countries. Am. J. Sociol. 2009, 114, 924–976.
  61. Sechiyama, K. Patriarchy in East Asia: A Comparative Sociology of Gender; Global Oriental: Leiden, The Netherlands, 2013.
  62. Weber, K. Gender Differences in Interest, Perceived Personal Capacity, and Participation in STEM-Related Activities. J. Technol. Educ. 2012, 24, 18–33.
  63. Wu, L.; Jing, W. Asian Women in STEM Careers: An Invisible Minority in a Double Bind. Issues Sci. Technol. 2011, 28, 82–87.
  64. So, J.J.; Jong, M.S.Y.; Liu, C.C. Computational Thinking Education in the Asian Pacific Region. Asia Pac. Educ. Res. 2020, 29, 1–8.
Figure 1. Structural model of the measured variables.
Table 1. Factor loadings and Cronbach’s α values for the five factors (n = 220).
Item            Factor Loading

AI Readiness, M = 3.55, SD = 0.64, α = 0.92
RE2             0.87
RE1             0.87
RE3             0.77
RE4             0.70
RE6             0.61

AI Anxiety, M = 2.27, SD = 1.12, α = 0.94
A1              0.94
A3              0.91
A2              0.90
A5              0.84

AI Literacy, M = 3.54, SD = 0.64, α = 0.90
AL1             0.81
AL2             0.73
AL3             0.72
AL5             0.71

Relevance of AI, M = 3.49, SD = 0.66, α = 0.91
R5              0.82
R4              0.79
R2              0.75
R1              0.67
R3              0.66

Confidence in AI, M = 3.48, SD = 0.64, α = 0.89
C1              0.86
C3              0.74
C2              0.68
C4              0.66
Table 2. Correlations among the measured variables (n = 220).
                 1         2         3         4         5
1. Readiness     (0.77)
2. Anxiety       −0.06     (0.90)
3. AI literacy   0.59 ***  −0.01     (0.74)
4. Relevance     0.70 ***  −0.06     0.67 ***  (0.74)
5. Confidence    0.69 ***  −0.07     0.62 ***  0.68 ***  (0.73)
1. *** p < 0.001. 2. The diagonal elements represent the square roots of the average variance extracted (AVE) values, and the off-diagonal elements represent the correlation estimates.
Table 3. Path coefficients of the measurement model.
Hypothesis  Path                       Estimate  Standardized Weight  Critical Ratio  Supported?
H1          AI literacy → Readiness    0.02      −0.03                −0.29           No
H2          AI literacy → Confidence   0.67      0.70                 11.92 ***       Yes
H3          Confidence → Readiness     0.35      0.46                 6.63 ***        Yes
H4          AI literacy → Relevance    0.70      0.83                 14.12 ***       Yes
H5          Relevance → Readiness      0.37      0.42                 4.34 ***        Yes
H6          Anxiety → Readiness        −0.02     −0.06                −1.45           No
1. *** p < 0.001.
Table 4. Model fit indices obtained for the invariance test.
Model                   χ2       df   χ2/df  p-Value  CFI    ΔCFI   RMSEA
Configural invariance   876.605  398  2.20   —        0.931  —      0.051
Metric invariance       908.730  415  2.19   0.015    0.929  0.002  0.051
Scalar invariance       954.292  437  2.18   0.000    0.926  0.003  0.051
Table 5. Results of difference comparison.
Factor       Difference of Latent Means  C.R.
Relevance    −0.15                       −2.73 **
Anxiety      −0.04                       −0.39
AI literacy  −0.06                       −1.12
Confidence   −0.22                       −3.51 ***
Readiness    −0.21                       −3.99 ***
1. *** p < 0.001; ** p < 0.01.
Table 6. Codes and frequency distribution.
Category        Code             Meaning of Code (example of students' written responses)                                                                          Frequency  Percentage
Not applicable  NIL              No comments                                                                                                                       145        26.4%
Positive        Advanced         AI is advanced technology ("Artificial intelligence is powerful")                                                                 25         4.6%
Positive        Better           AI is getting better ("I think AI will be better in the future.")                                                                 9          1.6%
Positive        Intend           Intend to learn more ("AI is very interesting and I will continue to learn AI.")                                                  20         3.6%
Positive        Social benefits  Use AI to promote social benefits ("Serve others better with artificial intelligence")                                            9          1.6%
Positive        Convenience      AI is convenient ("Good for us is convenience and simplicity")                                                                    53         9.7%
Positive        Useful           AI is useful ("AI can help us to solve difficulties encountered in life")                                                         112        20.4%
Positive        Good             AI is good/great ("AI is really great")                                                                                           60         10.9%
Positive        Interesting      AI is interesting ("Artificial intelligence is fun")                                                                              34         6.2%
Positive        Like             I like AI ("I love artificial intelligence")                                                                                      31         5.6%
Neutral         Mixed            Mixed responses, like and fear ("Artificial intelligence can help humans, but in the future, artificial intelligence may dominate humans.")  20  3.6%
Neutral         Change           AI is changing the world ("Artificial intelligence will change the world")                                                        5          0.9%
Negative        Difficult        Difficulty in understanding ("I think artificial intelligence is good, but hard to learn")                                        6          1.1%
Negative        Fear             Fear of AI ("Artificial intelligence will cause many people to lose their jobs and even human extinction, which will be very bad")  20       3.6%
