Article

An Extended Theory of Planned Behavior for the Modelling of Chinese Secondary School Students’ Intention to Learn Artificial Intelligence

1 Department of Curriculum and Instruction, The Chinese University of Hong Kong, Hong Kong, China
2 College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
3 Faculty of Education, Beijing Normal University, Beijing 100875, China
* Authors to whom correspondence should be addressed.
Mathematics 2020, 8(11), 2089; https://doi.org/10.3390/math8112089
Submission received: 14 October 2020 / Revised: 12 November 2020 / Accepted: 20 November 2020 / Published: 23 November 2020
(This article belongs to the Special Issue Artificial Intelligence in Education)

Abstract:
Artificial Intelligence (AI) is currently changing how people live and work. Its importance has prompted educators to begin teaching AI in secondary schools. This study examined how Chinese secondary school students’ intention to learn AI was associated with eight other relevant psychological factors. Five hundred and forty-five secondary school students who had completed at least one cycle of an AI course were recruited for this study. Based on the theory of planned behavior, the students’ AI literacy, subjective norms, and anxiety were identified as background factors. These background factors were hypothesized to influence the students’ attitudes towards AI, their perceived behavioral control, and their intention to learn AI. To provide a more nuanced understanding, the students’ attitude towards AI was further delineated as constituted by their perception of the usefulness of AI, the potential of AI technology to promote social good, and their attitude towards using AI technology. Similarly, perceived behavioral control was operationalized as students’ confidence in learning AI knowledge and their optimistic outlook on an AI-infused world. The relationships between the factors were theoretically illustrated as a model that depicts how students’ intention to learn AI is constituted, and two research questions were then formulated. Confirmatory factor analysis was employed to validate the multi-factor survey, followed by structural equation modelling to ascertain the significant associations between the factors. The confirmatory factor analysis supports the construct validity of the questionnaire. Twenty-five out of the thirty-three hypotheses were supported through structural equation modelling. The model helps researchers and educators to understand the factors that shape students’ intention to learn AI, and these factors should be considered in the design of AI curricula.

1. Introduction

The rapid development of Artificial Intelligence (AI) is changing the world into one in which people need to work and live with AI. This trend has prompted educational responses that consider issues of teaching by/with AI and learning with/about AI. A current review indicates that studies of AI in education focus more on system developments such as intelligent tutoring systems, and that the majority of current studies concern higher education [1]. Nonetheless, learning about AI technology (e.g., the concept of machine learning) and learning with the support of AI technology (e.g., using an intelligent tutoring system) among K-12 learners has started to receive attention, especially in China [2,3]. Given that AI is currently touted as the key element of the fourth industrial revolution, which is purportedly also driving the fourth revolution in education [4,5], learning AI as a subject matter becomes an important topic that K-12 educators need to consider. Within this context, fostering students’ intention to learn AI would be an important curriculum goal. A well-designed curriculum usually fosters students’ intention to pursue the subject matter further. Thus, investigating students’ intention to learn as an indicator of the effect of the curriculum may help ensure that students will continue to learn more about AI. This has become a goal of some contemporary education systems, as AI technology will impact today’s students’ personal and professional lives. Earlier studies have explored several factors associated with elementary students’ intention to learn AI, such as their perception of using AI to promote social good and their self-efficacy in learning AI [2]. This study extends that work by including more variables, undergirded by the theory of planned behavior [6,7] and the associated technology acceptance model [8], among a group of secondary students in China who had received extensive training in AI and completed AI projects.
It contributes to the current effort to integrate AI into the K-12 curriculum and offers a refined theoretical model for examining secondary students’ intention to learn AI.

2. Literature Review

Fishbein and Ajzen [7] (p. 20) viewed human actions as reasoned behavior determined largely by behavioral intention formed through “the information or beliefs people possess about the behavior under consideration”. According to the theory of planned behavior, the categories of reasons include people’s attitudes towards the behavior, subjective norms, and their perceived behavioral control [6]. These factors are in turn influenced by people’s backgrounds, such as personal characteristics (e.g., gender and age), sociocultural backgrounds (e.g., cultural values), and educational experiences (e.g., pedagogical approaches) [7,9]. The general findings reported in numerous studies employing the theory of planned behavior indicate that when an action is perceived to (a) produce positive results, (b) be socially well accepted (i.e., conforming to subjective norms), and (c) be within one’s locus of control, strong behavioral intention is likely to be formed, and this is predictive of actual behavior [10]. Nonetheless, depending on the targeted behavior or action being examined, the specific background and attitudinal factors may vary. The factors that could influence students’ behavioral intention to learn AI are reviewed in the following sections.

2.1. Background Factors

It seems obvious that whether one intends or decides to continue to study a subject matter would require one to have basic knowledge about it. In the case of technology-related subject matter, basic knowledge would constitute knowing, at least conceptually, how that technology works and what problems it can solve [11,12]. In general, self-reported technological literacy has been demonstrated to predict learners’ effort to engage in technology-supported learning [8,13,14], which implies positive attitudes towards using the technology. Earlier studies on student learning about AI have thus formulated a factor such as AI literacy [2,15] as a necessary background factor for studying students’ behavioral intention. The basic knowledge that a person has is also foundational to his/her understanding of how useful the technology is (i.e., perceived usefulness) and whether the technology could be used to promote social good [2]. AI literacy has thus been reported to predict students’ confidence, perceived relevance of learning AI, and readiness towards AI [15].
Other than the knowledge factor, social influence, such as how the significant others in one’s life see things, also exerts much influence on one’s attitudes towards a behavior and behavioral intention [7]. Social influence could be treated, from an instructional perspective, as a background factor because it is in general not within anyone’s control. It has been shown to be an antecedent factor that shapes learners’ attitudes towards the behavior and behavioral intention to learn in the context of technology-enhanced learning [16,17]. Social influence is usually referred to as subjective norms in the theory of planned behavior, and it denotes how one perceives that socially important others, such as parents, friends, and teachers, may expect one to act. Sohn and Kwon [18] reported that social influence, represented in theoretical models such as the theory of planned behavior and the Unified Theory of Acceptance and Use of Technology (UTAUT), has significant effects on participants’ intention to use AI products. In the education context, while AI-enabled applications such as voice and image recognition are widely used today, it is not clear whether this has spurred teachers or parents to encourage or even demand that students learn about AI [2]. Although China has formulated initiatives for schools to start teaching AI as a curriculum topic [3], large-scale teacher professional development for the teaching of AI, for educating students about AI, is apparently lacking [1], and parent education about AI is also unheard of. It is thus timely to survey how social influence is manifested among secondary students with respect to the learning of AI.
Anxiety is a common feeling associated with the rapid development of technology. Towards AI specifically, which aims to be human-like, a general sense of fear could easily form [15,19]. Technological advancements have always been disruptive, and they are usually associated with the demise of some jobs even as they give rise to new careers. Previous research has indicated that people–technology interactions are marked by discomfort and uncertainty [20]. Johnson and Verdicchio [21] identified “AI anxiety” as feelings of fear towards AI that have begun to spread socially via science fiction, films, and public talks by influential technology tycoons such as Bill Gates and Elon Musk. This form of anxiety can be traced back to technophobia and computer anxiety [19], and it is mostly unfounded fear that denotes a social climate [21]. Earlier research indicated that anxiety may not influence children’s sense of readiness, and that they self-reportedly are not anxious about but optimistic towards AI [15]. However, whether the perceptions of primary school children can be generalized to older students needs verification. It seems logical to hypothesize that when anxiety is sufficiently strong, it will be negatively associated with one’s attitudes towards the behavior, perceived behavioral control, and behavioral intention.
Based on the general directions of influence proposed in the theory of planned behavior [7] and the findings of previous research among primary school students learning AI [2,15], hypotheses H1–H18 were formulated. Generally, the hypotheses state that the background factors will significantly influence all downstream factors, which can be categorized as attitudes towards the behavior (i.e., social good, perceived usefulness, attitude towards use), perceived behavioral control (i.e., confidence and optimism), and subsequently behavioral intention. In addition, the influences of anxiety are hypothesized to be negatively significant.

2.2. Attitude Towards Behavior

Attitude towards behavior refers to an individual’s expectations and feelings about the possible consequences of undertaking an action [22]. If one expects to gain from the action, the attitude toward the action is logically positive. In dealing with an emerging technology, namely electronic mail thirty years ago, Davis [23] constructed the technology acceptance model based on the theory of planned behavior. Perceived usefulness is a factor that research related to the technology acceptance model commonly employs as an antecedent of the general attitude towards use [23,24]. Perceived usefulness reflects a positive attitude towards technology by ascertaining that the use of technology increases one’s productivity and performance. In addition, with reference to AI, promoting social good has been emphasized by AI technologists [25,26]. Understanding that AI could be used to promote social good, such as facilitating cancer detection and helping visually impaired people to see, is likely to influence students’ attitude towards use. Promoting social good is congruent with the universal and everlasting aim of education, which emphasizes the importance of cultivating moral values [27]. Pedagogically, promoting social good has been incorporated into recent AI textbooks available in China [28,29]. Applying learned knowledge to help less fortunate people is a value that is constantly promoted in the Confucian-influenced culture [30] in which this study is situated. More importantly, a recent study revealed that Chinese students’ AI literacy does not predict their intention to learn directly but rather through the factor of social good [2]. In other words, perceived usefulness should be studied together with social good, and perceived usefulness could influence social good through students’ notions of how to build AI applications for social good based on their perception of the usefulness of AI.
Both perceived usefulness and social good could influence students’ attitude towards their use of AI technology.
Attitude towards use refers to students’ positive experiences of using the technology [31]. Users’ positive attitude could include enjoying interaction with the technology and experiencing fun, which many AI application designers strive to provide. These factors (perceived usefulness, social good, attitude towards use) account for the different aspects of attitude towards AI that learners could form after attending the AI curriculum. In the technology acceptance model literature, perceived usefulness often predicts attitude towards use and user intention to use technology [32,33,34]. A previous study has reported that social good predicts behavioral intention [2]. The literature thus provides general support for the six hypothesized relationships between these factors (perceived usefulness, social good, and attitude towards use) and behavioral intention, listed as H19, H20, H22, H24, H27, and H29.

2.3. Perceived Behavioral Control

Perceived behavioral control refers to one’s assessment of whether the behavior under consideration is within one’s capability to execute and to what extent it can be carried out [35]. Ajzen [35] acknowledged that perceived behavioral control is closely associated with the concept of self-efficacy. In practice, perceived behavioral control is commonly measured as self-efficacy or confidence [7]. Between self-efficacy and confidence, Zhang and colleagues [36] observed that researchers commonly choose either one for examination. Repeated findings point to confidence as a positive influence on students’ continuous intention to learn [37,38] (i.e., H32).
While self-efficacy or confidence usually indicates one’s assessment of being able to perform an action, it is not conceptually equivalent to having the control to execute the action [35,39]. A learner could be confident in learning some topics, but s/he may lack control over the environment to create conducive conditions for learning. Confidence in learning AI has been reported to positively influence students’ readiness [2]. Readiness in the Technology Readiness scale [20] is partly constituted by the optimistic outlook that one can control and use technology flexibly and effectively to improve one’s life. Psychologically, optimism denotes the positive expectancy that one will be successful in the future [40]. Optimism towards AI would be an incompatible position if one saw AI as a technology that would usher in an uncontrollable future. Thus, optimism could be used as a proxy variable in the theory of planned behavior to represent control beliefs in the context of learning AI. Logically, confidence should positively influence optimism (i.e., H31), and optimism should predict behavioral intention (i.e., H33).
Within the technology acceptance model literature, perceived behavioral control is usually denoted as perceived ease of use, and it commonly predicts the attitude towards use [32]. Conversely, technologies that are perceived as very difficult to use are usually associated with negative user attitudes towards their use. This direction of influence may apply to human–AI relations. While it has been reported that a student’s AI knowledge informs her confidence and shapes her attitude [15,41], the direction of influence between perceived behavioral control and attitude towards use could be contextual. In the context of learning, it seems plausible that the relationships between the attitudes towards the behavior and the perceived behavioral control could be reversed before and after one acquires substantial knowledge associated with the action. At the initial stage of knowledge acquisition, students’ confidence may influence their attitudes towards learning [15]. As they gain strong knowledge and better understand how to make the knowledge useful, the associated attitudes towards the behavior may enhance their confidence and optimism (i.e., H21, H25, H27, H28, H30). In the theory of planned behavior literature, these are less explored relationships that call for attention [2]. Therefore, we hypothesized relationships between the attitude-towards-behavior factors (perceived usefulness, social good, and attitude towards use) and the perceived behavioral control factors (confidence and optimism), listed as H21, H23, H25, H26, H28, and H30.
Summarizing the above, an extended theory of planned behavior model of secondary students’ behavioral intention to learn AI, surveyed after they had substantial experience in learning AI, is formulated and depicted in Figure 1 below. The research questions formulated were the following: (1) Can the nine-factor survey of secondary school students’ intention to learn AI be validated through confirmatory factor analysis? (2) Are the hypothesized relationships (H1–H33) supported?

3. Method

3.1. Contexts and Participants

Purposeful sampling was adopted to enroll secondary school students (N = 545, 56.33% male) who had experience in learning about AI in China. The criterion employed was to choose secondary school students who had attended at least one AI course encompassing key concepts such as machine learning, deep learning, and visual recognition. This sampling strategy was adopted to ensure that the participants had the relevant knowledge to respond to the items. Of these students, 37.43% were from Zhengzhou, 33.76% were from Qingdao, and 28.80% were from Shanghai. The age of these students ranged from 13 to 18 years (M = 14, SD = 1.30).
All the students had enrolled in an afterschool program provided by one of the researchers, which aimed to cultivate students’ understanding of AI knowledge and skills for applying AI to solve real-world problems. The course was delivered once a week (1 h) and lasted 32 classes over two semesters. All the students had more than 30 h of AI learning experience, and 47.89% had more than 100 h. Students learned basic AI concepts, the history of AI development, and the future prospects of AI. In addition, students learned how to assemble pre-coded algorithms into useful programs to execute actions and how to operate various controllers and sensors in conjunction with the algorithms for different actions. The main platform includes an alpha dog robot, and the algorithms are mainly for the recognition of physical properties such as temperature, voice, and images. Students were also required to participate in inquiry activities to explore the use of six to eight kinds of sensors, for applications such as indoor temperature and humidity monitoring, face recognition, and break-in alarms. As the algorithms for machine learning involve calculus and other mathematical operations that are beyond junior secondary students’ mathematical knowledge, students were not required to use statistical knowledge or mathematical equations in the course. The mathematical knowledge students used was more about applying what they had already learned in the context of solving the design challenge, such as specifying positional coordinates for start and/or end points, employing 0 or 1 decisions to switch devices on and off depending on the task, and simple coding and debugging for the tasks. To provide more detail about the teaching, an example of how students’ knowledge that speed = distance/time is used in part of a project task is provided in Appendix A.
At the end of the course, students were invited by their teachers and volunteered to answer an online survey. It took approximately 10 to 12 min for students to complete the questionnaire. Students were asked to respond to each item by choosing the option that most aligned with their level of agreement.

3.2. Instruments

This study’s survey was based on nine well-established constructs (41 items) which were all adopted or adapted from previous studies. Answers were scored on a six-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). The first part of the survey collected background data (grade, gender, age, and hours spent on AI learning). The second part of the survey measured student AI literacy, subjective norms, AI anxiety, perceived usefulness of AI, perceptions of using AI for social good, attitude toward using AI, confidence in AI, AI optimism, and behavioral intention to engage in AI learning. The finalized items are presented in Table 1. The following is a brief description of the nine constructs of the survey.
AI literacy (six items, α = 0.91), AI anxiety (five items, α = 0.94), and confidence in learning AI (five items, α = 0.89) were measured based on our previous work [15]. Among these, AI literacy measured students’ perception of their knowledge and skills in using AI technology; AI anxiety measured students’ fears and sense of insecurity due to AI technology; confidence in learning AI measured students’ belief in their ability to do well in AI class. Subjective norms (four items), perceived usefulness of AI (four items), and attitude toward using AI (four items) were adapted from the studies of Teo [42] and Teo et al. [31]; the adaptations contextualized the items towards AI technology. Among these, subjective norms measured the perceived pressure or motivation on students to learn AI due to social relationships; perceived usefulness of AI measured students’ belief that using AI could improve their work productivity and outcomes; attitude toward using AI measured students’ disposition toward using AI technology. Though Cronbach’s alpha values were not reported in the studies of Teo [42] and Teo et al. [31], the reliability and validity of the items were assessed using composite reliability (CR) and the average variance extracted (AVE), and all items met the recommended guidelines (i.e., CR > 0.7; AVE > 0.5). AI for social good (five items, α = 0.92) and behavioral intention (four items, α = 0.90) were measured according to our previous work [2]. Among these, AI for social good measured students’ beliefs regarding the use of AI knowledge to solve problems and improve people’s lives; behavioral intention measured students’ intention to learn AI knowledge. The AI optimism scale (four items, α = 0.77) was adapted from King and Caleon [40]. It was originally developed to measure students’ positive expectancy of performing well in school; in the present study, we contextualized it to the AI learning setting.
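As a concrete illustration of the internal-consistency statistic (Cronbach’s α) reported for these scales, the sketch below computes α from a respondents-by-items score matrix using the standard formula α = k/(k − 1) · (1 − Σ item variances / total-score variance). The data here are simulated toy responses, not the study’s data, and the function name is ours.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 200 simulated respondents on six correlated Likert-style items,
# all loading on one shared latent trait (hence a high alpha is expected)
rng = np.random.default_rng(42)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(scale=0.6, size=(200, 6))
alpha = cronbach_alpha(scores)
```

Because the six simulated items share one latent trait with modest noise, the resulting α is well above the conventional 0.7 threshold, mirroring the values reported for the study’s sub-scales.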

3.3. Data Analysis

The data in this study were analyzed in two phases: confirmatory factor analysis (CFA) and structural equation modelling (SEM). Because our measurement model was developed according to an established theory that describes the relationships between the observed indicators and the unobserved constructs, CFA was performed in the first phase to examine the reliability and validity of all the constructs in the questionnaire. After that, SEM was performed to test the significance and strength of the hypothesized relationships between these variables. SEM is a well-established statistical method commonly employed to reveal the correlations among variables, with the directions of influence being theoretically specified and tested. This multivariate approach is more powerful and less susceptible to bias than traditional statistical methods because it can handle a set of relationships between endogenous variables and one or more exogenous variables while taking measurement error into consideration [43]. In addition, various indices are adopted to evaluate the quality of model fit. The fit of the model was evaluated by the following indices: p < 0.001, chi-square/degrees of freedom (χ2/df) < 3, root mean square error of approximation (RMSEA) < 0.08, standardized root mean square residual (SRMR) < 0.05, Goodness of Fit Index (GFI) > 0.90, Tucker–Lewis Index (TLI) > 0.90, and comparative fit index (CFI) > 0.90 [43].
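These cut-offs can be expressed as a simple check. The helper below is an illustrative sketch (the function name is ours), applied to the measurement-model fit values reported in Section 4.1 of this study.

```python
def fit_acceptable(chi2_over_df, rmsea, srmr, gfi, tli, cfi):
    """Apply the model-fit cut-off criteria used in this study (Hair et al. [43])."""
    return (chi2_over_df < 3.0 and rmsea < 0.08 and srmr < 0.05
            and gfi > 0.90 and tli > 0.90 and cfi > 0.90)

# Measurement-model fit reported in the Results section
ok = fit_acceptable(chi2_over_df=1.448, rmsea=0.029, srmr=0.027,
                    gfi=0.92, tli=0.97, cfi=0.98)  # → True
```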
Based on Bollen and Noble’s [44] explication, the fundamental equation of the structural model is shown as Equation (1):
η_i = α_η + Bη_i + Γξ_i + ζ_i
where η_i is a vector of latent endogenous variables for unit i, α_η is a vector of intercept terms for the equations, B is the matrix of coefficients giving the expected effects of the latent endogenous variables (η) on each other, ξ_i is the vector of latent exogenous variables, Γ is the coefficient matrix giving the expected effects of the latent exogenous variables (ξ) on the latent endogenous variables (η), and ζ_i is the vector of disturbances. The subscript i indexes the ith case in the sample. SEM incorporates a series of modelling steps based on multiple equations, including model specification, model-implied moments, identification, estimation, model fit, and re-specification. For more information about the detailed modelling steps and the equations for each step, please refer to Bollen and Noble [44] and Hair et al. [43]. Bollen and Noble [44], who gave a brief overview of the equations and modelling steps of SEM, pointed out that SEM usually formulates multiple equations to handle the complex relationships among a set of variables. Because there are a number of equations with several variables in each, they are far too complicated for researchers to compute by hand, and the availability and simplicity of software designed for SEM has contributed to its popularity in quantitative studies. In this study, the CFA and SEM were performed with the AMOS 22 package.
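To make Equation (1) concrete: provided (I − B) is invertible, the endogenous vector has the reduced-form solution η_i = (I − B)⁻¹(α_η + Γξ_i + ζ_i). The sketch below evaluates this for a made-up toy model with two endogenous latents and one exogenous latent; the coefficient values are illustrative assumptions, not estimates from this study.

```python
import numpy as np

def solve_structural(alpha, B, Gamma, xi, zeta):
    """Reduced-form solution of eta = alpha + B @ eta + Gamma @ xi + zeta."""
    I = np.eye(B.shape[0])
    return np.linalg.solve(I - B, alpha + Gamma @ xi + zeta)

# Toy model: eta = [attitude, intention], xi = [AI literacy]
alpha = np.zeros(2)
B = np.array([[0.0, 0.0],    # attitude does not depend on intention
              [0.5, 0.0]])   # intention <- 0.5 * attitude
Gamma = np.array([[0.6],     # attitude  <- 0.6 * literacy
                  [0.2]])    # intention <- 0.2 * literacy (direct effect)
xi = np.array([1.0])
zeta = np.zeros(2)           # disturbances set to zero for the illustration

eta = solve_structural(alpha, B, Gamma, xi, zeta)
# attitude = 0.6; intention = 0.2 + 0.5 * 0.6 = 0.5
```

The second component illustrates how a direct effect (0.2) and an indirect effect mediated by another endogenous variable (0.5 × 0.6) combine, which is exactly the kind of pathway the hypotheses H1–H33 decompose.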
Before performing the analyses, univariate and multivariate normality tests were conducted to check the normal distribution of the data. The skewness (range: −0.911 to 0.282) and kurtosis (range: −0.962 to 0.147) were within the recommended bounds of |3| and |8|, respectively [45]. For multivariate normality, Mardia’s coefficient in this study was 294.28, below the suggested maximum of k(k + 2) = 37 × 39 = 1443, where k is the number of observed variables [46].
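The univariate checks and the k(k + 2) threshold can be reproduced with standard tools. The snippet below operates on simulated responses (not the study’s data) with the same dimensions as this study’s item set, applying the |3| skewness and |8| kurtosis bounds cited above.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
data = rng.normal(size=(545, 37))      # simulated: 545 respondents x 37 items

item_skew = skew(data, axis=0)         # univariate skewness per item
item_kurt = kurtosis(data, axis=0)     # excess kurtosis (Fisher) per item

skew_ok = bool(np.all(np.abs(item_skew) < 3))
kurt_ok = bool(np.all(np.abs(item_kurt) < 8))

k = data.shape[1]
mardia_threshold = k * (k + 2)         # 37 * 39 = 1443
```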

4. Results

4.1. Analysis of the Measurement Model

CFA was adopted to confirm the construct validity and the structure of the measurement model. A total of 37 items (factor loadings between 0.674 and 0.873) were kept in the measurement model. As detailed in Table 1, the composite reliability (CR) of each sub-scale was greater than 0.70, and the average variance extracted (AVE) exceeded 0.50: AI literacy (CR = 0.90, AVE = 0.60), subjective norms (CR = 0.81, AVE = 0.51), AI anxiety (CR = 0.84, AVE = 0.58), perceived usefulness of AI (CR = 0.83, AVE = 0.56), AI for social good (CR = 0.82, AVE = 0.53), attitude toward using AI (CR = 0.85, AVE = 0.65), confidence in learning AI (CR = 0.89, AVE = 0.62), AI optimism (CR = 0.86, AVE = 0.67), and behavioral intention (CR = 0.92, AVE = 0.73), indicating satisfactory reliability and convergent validity for each sub-scale [43]. The Cronbach’s alpha coefficient was 0.90 for AI literacy, 0.81 for subjective norms, 0.84 for AI anxiety, 0.83 for perceived usefulness of AI, 0.82 for AI for social good, 0.82 for attitude toward using AI, 0.89 for confidence in learning AI, 0.86 for AI optimism, and 0.91 for behavioral intention. The overall Cronbach’s alpha coefficient was 0.92, indicating sufficient reliability of the scales. Discriminant validity indexes were computed based on the AVEs: the square root of the AVE of each construct was greater than the Pearson correlation for each pair of constructs (see Table 2).
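The CR and AVE statistics follow standard formulas based on standardized factor loadings: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = (Σλ²)/n. The sketch below applies them to hypothetical loadings chosen within the reported range (0.674 to 0.873); these are not the study’s actual loadings.

```python
import numpy as np

def composite_reliability(loadings):
    """CR from standardized loadings; error variance per item is 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Hypothetical four-item sub-scale
lam = [0.70, 0.75, 0.80, 0.72]
cr = composite_reliability(lam)        # ≈ 0.83
ave = average_variance_extracted(lam)  # ≈ 0.55
# Both clear the guidelines cited in the text (CR > 0.70, AVE > 0.50)
```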
In addition, the model fit was confirmed with p = 0.000 (<0.001), χ2/df = 1.448 (<3.0), RMSEA = 0.029 (<0.08), SRMR = 0.027 (<0.05), GFI = 0.92 (>0.90), TLI = 0.97 (>0.90), and CFI = 0.98 (>0.90) [43]. Taken together, these results indicated that the survey items had good construct validity.
Figure 2 below provides boxplots of the median, minimum, maximum, 25th percentile, and 75th percentile scores for all nine factors measured. The boxplots reveal that the students’ responses for all the factors were inclined towards the agreeable side. The students recognized the social expectation on them to learn about AI, perceived AI as useful for promoting social good, and were slightly positive about the use of AI. They were optimistic and not too anxious about the use of AI, and they wished to learn more about it. Nonetheless, they were almost neutral about their confidence in learning AI and their knowledge about AI. Overall, the results seem to fit a novice’s perspective, and they seem to indicate that the learning experiences were positive, although there is apparently room to foster stronger intention to learn.

4.2. Analysis of the Structural Model

SEM was used for hypothesis testing. The indices indicated that the SEM model had a good fit: p = 0.000 (<0.001), χ2/df = 1.451 (<3.0), RMSEA = 0.029 (<0.08), SRMR = 0.044 (<0.05), GFI = 0.92 (>0.90), TLI = 0.97 (>0.90), CFI = 0.98 (>0.90) [43].
The estimated standardized path coefficients are presented in Table 3. AI literacy and subjective norms significantly predicted perceived usefulness of AI, AI for social good, attitude toward using AI, confidence in learning AI, and AI optimism; H1–H5 and H7–H11 were supported. However, AI literacy and subjective norms did not directly predict behavioral intention to learn AI; H6 and H12 were unsupported. AI anxiety significantly predicted AI for social good, attitude toward using AI, AI optimism, and behavioral intention to learn AI; H14, H15, H17, and H18 were supported. However, AI anxiety did not directly predict perceived usefulness of AI or confidence in learning AI; H13 and H16 were unsupported. Perceived usefulness of AI significantly predicted attitude toward using AI, confidence in learning AI, AI optimism, and behavioral intention to learn AI; H20–H23 were supported. Perceived usefulness of AI did not directly predict AI for social good; H19 was unsupported. AI for social good significantly predicted attitude toward using AI, confidence in learning AI, AI optimism, and behavioral intention to learn AI; H24–H27 were supported. Attitude toward using AI significantly predicted behavioral intention to learn AI; H29 was supported. However, attitude toward using AI did not directly predict confidence in learning AI or AI optimism; H28 and H30 were unsupported. Confidence in learning AI significantly predicted behavioral intention to learn AI; H32 was supported. Confidence in learning AI did not directly predict AI optimism; H31 was unsupported. AI optimism significantly predicted behavioral intention to learn AI; H33 was supported. As shown in Figure 3, most hypothesized relationships among the sub-scales were supported; the eight rejected hypotheses are indicated by dotted lines.

5. Discussion and Conclusions

AI technology is changing how work is conducted, and the widespread infusion of AI is changing how we live, teach, and learn [4,5]. This has prompted new consideration of teaching AI in K-12 settings, a new area of research with few published reports [1,15]. This study draws on the theory of planned behavior and the associated technology acceptance model literature to identify factors that may shape secondary students’ intention to learn AI [7,23]. Students’ knowledge about AI, general AI anxiety, and subjective norms were treated as background factors that influence their attitudes towards the behavior (i.e., perceived usefulness, social good, attitude towards use), their perceived behavioral control (confidence and optimism), and their behavioral intention towards learning AI. Overall, the reported findings support the general portrayal of the model as depicted in Figure 1. The findings are generally congruent with studies related to TPB and the technology acceptance model in educational contexts [8,14,15,36]. Further discussion and implications of the study pertaining to the research aims are elaborated in the next few paragraphs.
First, nine factors based on the theory of planned behavior were identified to provide psychometric measures for variables related to the learning of AI. As the survey items were adopted or adapted with minor contextualized modifications, CFA was employed to assess construct validity along with other analyses. Overall, the findings support that the constructed survey is valid and reliable and may be used to study secondary students’ learning of AI. Of the 5683 articles on the topic “theory of planned behavior” indexed in the Web of Science, only one relevant to the use of AI products has been published [18]. This study thus extends the possible use of TPB to the study of the effects of secondary AI curricula. More importantly, it provides a theory-driven, validated survey for research on AI education.
Second, twenty-five of the thirty-three hypotheses are statistically significant. The model constructed and depicted in Figure 1 is thus empirically supported. Secondary students’ intention to learn AI is correlated with all eight other factors (see Table 2); it is influenced directly by AI anxiety, perceived usefulness, social good, attitude towards use, confidence, and optimism, and indirectly by AI literacy and subjective norms. This finding points to the need for AI curriculum designers to consider these factors.
Among the background factors, AI literacy is positively associated with all the attitude-towards-the-behavior and perceived-behavioral-control factors but does not directly influence behavioral intention. This is generally congruent with the theory of planned behavior, in which a person’s education is a major background factor that shapes attitudes towards the behavior and perceived behavioral control [7]. A previous study among primary school students reported similar findings [2]. It thus reinforces the earlier conclusion that the teaching of knowledge forms the basis of students’ attitudes towards the behavior and perceived behavioral control, but knowledge of AI alone may not be sufficiently motivating for students to sustain the intention to learn AI.
Subjective norms are a new factor in the study of students’ intention to learn AI that has not been previously reported. Similar to AI literacy, they do not directly predict students’ intention to learn AI. Nonetheless, they significantly predict all the attitude-towards-the-behavior and perceived-behavioral-control factors. While Fishbein and Ajzen [7] placed subjective norms at the same level as attitudes towards the behavior and perceived behavioral control, this study found that the factors denoting attitudes towards the behavior and perceived behavioral control are predicted by subjective norms. As argued, subjective norms could be interpreted as a general social expectation that forms the background against which one views AI as an emerging technological phenomenon. More importantly, it seems that the students in this study perceive that their socially significant others expect them to learn AI. This is a plausible interpretation, as the students are attending additional classes: their parents allow them to spend the extra time and are willing to pay the course fees; the school teachers support their learning of AI with coaching; and the schools provide the hardware and software needed. In other words, China’s secondary schools are starting to experiment with the teaching of AI in response to government policy [3].
A previous study on AI anxiety indicated that it is not an issue among primary school students and that the factor is not correlated with other factors such as students’ sense of readiness, the relevance of learning AI, and their confidence in learning AI [15]. This study, however, shows a different picture among secondary school students who have acquired substantial knowledge about AI. Their AI anxiety predicts their perception of social good, attitude towards use, optimism, and intention to learn AI. AI anxiety is negatively associated with students’ desire to design AI for social good, with their optimism, and with their attitude towards use, although it shows a small positive association with their intention to learn AI (see Table 3). The power of AI is likely to challenge many and cause anxiety once its implications are understood [19]. The way forward, it seems, is to educate people so that they are prepared and able to rise above the anxiety. Technology, once created, is rarely discarded because of its possible negative effects.
The factors associated with attitudes towards the behavior were significant predictors of behavioral intention. This implies that for learners who are new to AI, the usefulness of AI, especially in areas related to the promotion of the common good, should be highlighted. The fun and enjoyment of using AI applications, such as co-creating poems with AI (see, for example, https://duilian.msra.cn/), could also be introduced. As indicated by our findings, illustrating usefulness and enjoyment could also promote students’ perceived behavioral control (i.e., confidence and optimism), which reinforces their intention to learn. As an extension of computer engineering, there are many possibilities for learners to design AI applications that promote social good. One such topic that the participants in this study experienced was coding robots to extinguish fires. It is thus important to structure opportunities for students to find social problems that AI may help to alleviate. In addition, the design of the curriculum should build students’ confidence in learning AI [15,47] and promote optimism in using AI. The findings support that strengthening students’ perceived behavioral control fosters their intention to learn AI. With a well-designed curriculum that emphasizes usefulness for the common good [25,26], secondary students can become confident and optimistic, with their intention to learn AI enhanced accordingly.
In conclusion, this study investigated a number of interrelated factors and depicted a possible scenario for how they shape secondary students’ intention to learn AI. AI will change today’s society, and it seems necessary for ministries of education to consider the reforms needed in response to the impending changes [5]. Providing today’s learners with basic knowledge of AI, cultivating an orientation towards the ethical use of AI to promote the common good, and building positive experiences that enhance students’ confidence in learning AI are emerging issues for educators in general. Based on Fishbein and Ajzen’s [7] theory of planned behavior, this study provides an initial framework for future work. There are, of course, many other theoretical frameworks that could yield useful insights. For example, it is possible to investigate how AI applications could promote self-directed learning, as intelligent tutoring systems for mathematics are becoming more adaptive. In addition, applications such as AI-enabled text generators, summarizers, translators, and grammar checkers, in conjunction with voice recognizers, chatbots, and voice input, have many implications for independent language learning support. Self-determination theory [48] may be employed to examine how AI technology enables self-directed learning.

6. Limitations

The limitations of this study are as follows. First, the students surveyed went through a similar AI curriculum that shaped their perspectives, so the findings may not be generalizable to students with other AI learning experiences. It should be noted that while different AI curricula are being designed for elementary and secondary schools [28,29], and each curriculum is unique in ways that shape students’ responses, the core content of AI does not deviate from the key intent of machine learning to be human-like, with visual and voice recognition, statistical prediction, and so on as the main content. In addition, as the theory of planned behavior is a well-established theory for studying people’s behavioral intention, the outcomes of this study could be helpful to future researchers. Nonetheless, more studies of secondary school students’ intention to learn AI, with larger samples and different AI training programs, should be conducted. Second, this study included only students’ psychological factors. Facilitating conditions that are important for the learning of AI, such as a properly equipped learning space, have not been accounted for. Future research may consider including information about facilitating conditions from the students’ perspectives. Third, this study uses self-reported data. Students’ actual learning outcomes, such as objective tests of their declarative knowledge of AI and assessments of their coding, should be included in future studies. Finally, 33 hypotheses were tested in this study, and 25 were accepted. While the multi-factor survey allowed educators to derive a more holistic picture of the connections between the nine factors, and only two correlations were non-significant with three more significant only at levels weaker than p < 0.001 (see Table 2), just 15 of the hypothesized paths were supported at p < 0.001.
The remaining supported hypotheses may suffer from type I error; those supported at levels between p < 0.05 and p < 0.01 must be interpreted with caution.
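For illustration, a Bonferroni-style adjustment, which the study itself does not apply, shows why the paths supported at p < 0.001 are the most robust of the 33 simultaneous tests:

```python
alpha = 0.05
m = 33  # number of hypotheses tested in the model
bonferroni = alpha / m  # family-wise corrected per-test threshold
print(round(bonferroni, 5))  # 0.00152

# Under this stricter threshold, only the 15 paths supported at
# p < 0.001 would clearly survive; paths supported only at
# 0.001 < p < 0.05 warrant the cautious reading given above.
```

Other corrections (e.g., Holm or false-discovery-rate procedures) would be less conservative, but the qualitative conclusion is the same.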

Author Contributions

Conceptualization, C.S.C.; Investigation, X.W.; Methodology, C.S.C., X.W. and C.X.; Writing—original draft, C.S.C.; Writing—review & editing, C.S.C., X.W. and C.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

An example of an AI project integrated with mathematical knowledge.
Project Topic: Intelligent Fire-fighting Robot.
Learning Objectives: Students should learn about mathematical logic, the concept of speed, the rule of the positive proportional function, and how this mathematical knowledge guides the design of intelligent machines.
Learning Activities: Many intelligent robots can navigate an environment using a self-exploration function. In the Intelligent Fire-fighting Robot project, students are required to control the robots to search for and navigate to the location of a fire.
The class teacher sets up the programming environment in advance, employing a Convolutional Neural Network to improve the accuracy of the robots’ fire detection and localization. The robots’ temperature sensors are used to detect fire and return a value of 0 or 1 (no/yes). When no fire is detected, the robot repeatedly executes the search procedure to determine its location relative to the fire, which is represented mainly by the equation
$$ Z(k) = \begin{bmatrix} \chi \\ \theta \end{bmatrix} = \begin{bmatrix} \sqrt{\left(x_i - x_{vx}(k)\right)^2 + \left(y_i - x_{vy}(k)\right)^2} \\[4pt] \arctan\dfrac{y_i - x_{vy}(k)}{x_i - x_{vx}(k)} - x_{v\theta}(k) \end{bmatrix} $$
x_i and y_i are the coordinates of the fire position, measured by infrared obstacle-avoidance sensors. x_{vy}(k) and x_{vx}(k) are the position coordinates of the intelligent fire-fighting robot, measured by the Global Positioning System. The index k, measured in seconds, denotes the time at which the robot is at a given position. The equations are pre-coded by the class teacher, and a simplified version is presented to students (see next paragraph). When fire is detected, the fire-fighting robot moves to the fire spot as quickly as possible.
Students are guided to understand and learn the mathematical logic, concepts, and equation by completing two steps in this project. First, students are supported in generating the logic flow of the program based on the task. This helps them use a conditional loop structure to realize the programming logic. The logic flow with which the instructor guides the students is represented in Figure A1.
Figure A1. The chart of program logic flow.
Second, students are asked to change the parameters x_i and y_i and observe how changes in these parameters alter the robot’s movement speed. Students are guided to explore the relationship between distance and speed based on their observations and to understand the equation that speed equals distance divided by time. When x_i and y_i represent a greater distance, the algorithm instructs the robot to travel faster. Using this discovery, it was explained that the fire-fighting robots are designed to speed up as distance increases so as to arrive at the fire spot in the shortest possible time. This real-life proportional-function problem triggers students’ intention to integrate their mathematical knowledge to deal with real-life challenges.
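The search-then-approach behavior described above can be sketched as a conditional loop. The snippet below is an illustrative reconstruction rather than the actual classroom code: the proportional gain K, the function names, and the use of atan2 in place of the arctan ratio are all assumptions.

```python
import math

K = 0.5  # assumed proportional gain: speed grows with distance


def measurement(x_i, y_i, x_vx, x_vy, x_vtheta):
    """Range chi and bearing theta from the robot pose to the fire at
    (x_i, y_i), following the appendix equation (atan2 avoids the
    division-by-zero case of the raw arctan ratio)."""
    chi = math.hypot(x_i - x_vx, y_i - x_vy)
    theta = math.atan2(y_i - x_vy, x_i - x_vx) - x_vtheta
    return chi, theta


def step(fire_detected, robot_pose, fire_pos):
    """One pass of the conditional loop: search while the temperature
    sensor returns 0, otherwise approach the fire with a speed
    proportional to the remaining distance."""
    if not fire_detected:          # sensor returned 0 -> keep searching
        return ("search", 0.0)
    chi, theta = measurement(*fire_pos, *robot_pose)
    return ("approach", K * chi)   # farther fire -> faster robot


# Robot at the origin facing +x, fire at (3, 4): the range is 5.
mode, speed = step(True, (0.0, 0.0, 0.0), (3.0, 4.0))
print(mode, speed)  # approach 2.5
```

The last line demonstrates the proportional rule the students discover: doubling the distance to the fire doubles the commanded speed.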

References

  1. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education—Where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef] [Green Version]
  2. Chai, C.S.; Lin, P.Y.; Jong, M.S.Y.; Dai, Y.; Chiu, T.K.F.; Qin, J.J. Primary school students’ perceptions and behavioral intentions of learning artificial intelligence. Educ. Technol. Soc. 2020, in press. [Google Scholar]
  3. Knox, J. Artificial intelligence and education in China. Learn. Media Technol. 2020. [Google Scholar] [CrossRef]
  4. Roll, I.; Wylie, R. Evolution and revolution in artificial intelligence in education. Int. J. Artif. Intell. Educ. 2016, 26, 582–599. [Google Scholar] [CrossRef] [Green Version]
  5. Seldon, A.; Abidoye, O. The Fourth Education Revolution: Will Artificial Intelligence Liberate or Infantilise Humanity; University of Buckingham Press: Buckingham, UK, 2018; ISBN 978-1908684950. [Google Scholar]
  6. Ajzen, I. The theory of planned behavior. In Handbook of Theories of Social Psychology; Van Lange, P.A., Kruglanski, A.W., Higgins, E.T., Eds.; SAGE: London, UK, 2012; pp. 438–459. ISBN 9780857029614. [Google Scholar]
  7. Fishbein, M.; Ajzen, I. Predicting and Changing Behavior: The Reasoned Action Approach; Psychology Press: London, UK, 2010; ISBN 978-1138995215. [Google Scholar]
  8. Mei, B.; Brown, G.T.L.; Teo, T. Toward an understanding of preservice English as a foreign language teachers’ acceptance of computer-assisted language learning 2.0 in the People’s Republic of China. J. Educ. Comput. Res. 2018, 56, 74–104. [Google Scholar] [CrossRef]
  9. Rubio, M.A.; Romero-Zaliz, R.; Mañoso, C.; de Madrid, A.P. Closing the gender gap in an introductory programming course. Comput. Educ. 2015, 82, 409–420. [Google Scholar] [CrossRef]
  10. Liao, C.; Chen, J.L.; Yen, D.C. Theory of planning behavior (TPB) and customer satisfaction in the continued use of e-service: An integrated model. Comput. Hum. Behav. 2007, 23, 2804–2822. [Google Scholar] [CrossRef]
  11. Davies, R.S. Understanding technology literacy: A framework for evaluating educational technology integration. TechTrends 2011, 55, 45. [Google Scholar] [CrossRef]
  12. Moore, D.R. Technology literacy: The extension of cognition. Int. J. Technol. Des. Educ. 2011, 21, 185–193. [Google Scholar] [CrossRef]
  13. Jong, M.S.Y.; Chan, T.; Hue, M.T.; Tam, V.W. Gamifying and mobilising social enquiry-based learning in authentic outdoor environments. J. Educ. Technol. Soc. 2018, 21, 277–292. [Google Scholar]
  14. Mohammadyari, S.; Singh, H. Understanding the effect of e-learning on individual performance: The role of digital literacy. Comput. Educ. 2015, 82, 11–25. [Google Scholar] [CrossRef]
  15. Dai, Y.; Chai, C.S.; Lin, P.Y.; Jong, S.Y.; Guo, Y.; Qin, J. Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability 2020, 12, 6597. [Google Scholar] [CrossRef]
  16. Botero, G.G.; Questier, F.; Cincinnato, S.; He, T.; Zhu, C. Acceptance and usage of mobile assisted language learning by higher education students. J. Comput. High. Educ. 2018, 30, 426–451. [Google Scholar] [CrossRef]
  17. Hoi, V.N. Understanding higher education learners’ acceptance and use of mobile devices for language learning: A Rasch-based path modeling approach. Comput. Educ. 2020, 146, 103761. [Google Scholar] [CrossRef]
  18. Sohn, K.; Kwon, O. Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telemat. Inform. 2020, 47, 101324. [Google Scholar] [CrossRef]
  19. Wang, Y.Y.; Wang, Y.S. Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interact. Learn. Environ. 2019, 1–16. [Google Scholar] [CrossRef] [Green Version]
  20. Parasuraman, A.; Colby, C.L. An updated and streamlined technology readiness index: TRI 2.0. J. Serv. Res. 2015, 18, 59–74. [Google Scholar] [CrossRef]
  21. Johnson, D.G.; Verdicchio, M. AI anxiety. J. Assoc. Inf. Sci. Technol. 2017, 68, 2267–2270. [Google Scholar] [CrossRef]
  22. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  23. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef] [Green Version]
  24. Dai, H.M.; Teo, T.; Rappa, N.A.; Huang, F. Explaining chinese university students’ continuance learning intention in the mooc setting: A modified expectation confirmation model perspective. Comput. Educ. 2020, 150, 103850. [Google Scholar] [CrossRef]
  25. Bryson, J.; Winfield, A. Standardizing ethical design for artificial intelligence and autonomous systems. Computer 2017, 50, 116–119. [Google Scholar] [CrossRef]
  26. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a Good AI Society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [Green Version]
  27. Duncan, C.; Sankey, D. Two conflicting visions of education and their consilience. Educ. Philos. Theory 2019, 51, 1454–1464. [Google Scholar] [CrossRef]
  28. Qin, J.J.; Ma, F.G.; Guo, Y.M. Foundations of Artificial Intelligence for Primary School; Popular Science Press: Beijing, China, 2019; ISBN 9787110099759. [Google Scholar]
  29. Tang, X.; Chen, Y. Fundamentals of Artificial Intelligence; East China Normal University: Shanghai, China, 2018; ISBN 9787567575615. [Google Scholar]
  30. Basharat, T.; Iqbal, H.M.; Bibi, F. The Confucius philosophy and Islamic teachings of lifelong learning: Implications for professional development of teachers. Bull. Educ. Res. 2011, 33, 31–46. Available online: http://www.pu.edu.pk/images/journal/ier/PDF-FILES/3-Lifelong%20Learning.pdf (accessed on 19 November 2020).
  31. Teo, T.; Zhou, M.; Noyes, J. Teachers and technology: Development of an extended theory of planned behavior. Educ. Technol. Res. Dev. 2016, 64, 1033–1052. [Google Scholar] [CrossRef]
  32. Dutot, V.; Bhatiasevi, V.; Bellallahom, N. Applying the technology acceptance model in a three-countries study of smartwatch adoption. J. High Technol. Manag. Res. 2019, 30, 1–14. [Google Scholar] [CrossRef]
  33. Liaw, S.-S. Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of e-learning: A case study of the Blackboard system. Comput. Educ. 2008, 51, 864–873. [Google Scholar] [CrossRef]
  34. Rafique, H.; Almagrabi, A.O.; Shamim, A.; Anwar, F.; Bashir, A.K. Investigating the acceptance of mobile library applications with an extended technology acceptance model (TAM). Comput. Educ. 2020, 145, 103732. [Google Scholar] [CrossRef]
  35. Ajzen, I. Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior 1. J. Appl. Soc. Psychol. 2002, 32, 665–683. [Google Scholar] [CrossRef]
  36. Zhang, F.; Wei, L.; Sun, H.; Tung, L.C. How entrepreneurial learning impacts one’s intention towards entrepreneurship: A planned behavior approach. Chin. Manag. Stud. 2019, 13, 146–170. [Google Scholar] [CrossRef]
  37. Garland, K.; Noyes, J. Attitude and confidence towards computers and books as learning tools: A cross-sectional study of student cohorts. Br. J. Educ. Technol. 2005, 36, 85–91. [Google Scholar] [CrossRef]
  38. Lee, M.C. Explaining and predicting users’ continuance intention toward e-learning: An extension of the expectation–confirmation model. Comput. Educ. 2010, 54, 506–516. [Google Scholar] [CrossRef]
  39. Rhodes, R.E.; Courneya, K.S. Differentiating motivation and control in the theory of planned behavior. Psychol. Health Med. 2004, 9, 205–215. [Google Scholar] [CrossRef]
  40. King, R.B.; Caleon, I.S. School psychological capital: Instrument development, validation, and prediction. Child Indic. Res. 2020, 1–27. [Google Scholar] [CrossRef]
  41. Yildiz, H.D. Flipped learning readiness in teaching programming in middle schools: Modelling its relation to various variables. J. Comput. Assist. Learn. 2018, 34, 939–959. [Google Scholar] [CrossRef]
  42. Teo, T. Modelling Facebook usage among university students in Thailand: The role of emotional attachment in an extended technology acceptance model. Interact. Learn. Environ. 2016, 24, 745–757. [Google Scholar] [CrossRef]
  43. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010; ISBN 978-8131776483. [Google Scholar]
  44. Bollen, K.A.; Noble, M.D. Structural equation models and the quantification of behavior. Proc. Natl. Acad. Sci. USA 2011, 108 (Suppl. 3), 15639–15646. [Google Scholar] [CrossRef] [Green Version]
  45. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Press: New York, NY, USA, 2015; ISBN 978-1462523344. [Google Scholar]
  46. Raykov, T.; Marcoulides, G.A. An Introduction to Applied Multivariate Analysis; Routledge: New York, NY, USA, 2008; ISBN 9780805863758. [Google Scholar]
  47. Song, S.H.; Keller, J.M. The ARCS Model for the Design of Motivationally Adaptive Computer-Mediated Instruction. J. Educ. Technol. 1999, 14, 119–134. [Google Scholar] [CrossRef]
  48. Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemp. Educ. Psychol. 2020, 61, 101860. [Google Scholar] [CrossRef]
Figure 1. Hypothesized model. Note: Lit: AI literacy; SN: subjective norms; Anx: AI anxiety; PU: perceived usefulness; SG: AI for social good; ATU: attitude towards using AI; Confi: confidence in learning AI; OP: AI optimism; BI: behavioral intention.
Figure 2. Boxplots for the nine factor scores. Note: Lit: AI literacy; SN: subjective norms; Anx: AI anxiety; PU: perceived usefulness; SG: AI for social good; ATU: attitude towards using AI; Confi: confidence in learning AI; OP: AI optimism; BI: behavioral intention.
Figure 3. Results of the hypotheses testing. Lit: AI literacy; SN: subjective norms; Anx: AI anxiety; PU: perceived usefulness; SG: AI for social good; ATU: attitude towards using AI; Confi: confidence in learning AI; OP: AI optimism; BI: behavioral intention.
Table 1. Construct convergent validity.
Items | Standardized Loadings | CR | AVE
AI Literacy (Lit): M = 3.62, SD = 1.01, Cronbach’s Alpha = 0.897
Lit2 I know the processes through which deep learning enables AI to perform voice recognition tasks. | 0.803 | 0.899 | 0.596
Lit1 I understand why AI technology needs big data. | 0.793
Lit6 I understand how computers process images to produce visual recognition. | 0.786
Lit3 I understand how AI technology optimizes the translation output for online translation. | 0.779
Lit5 I know how AI can be used to predict possible outcomes through statistics. | 0.750
Lit4 I understand how AI assistants such as SIRI or Hello Google handle human-computer interaction. | 0.719
Subjective Norms (SN): M = 4.22, SD = 0.89, Cronbach’s Alpha = 0.806
SN2 My parents support me to learn about AI technology. | 0.759 | 0.808 | 0.513
SN4 Most people I know think that I should learn about AI technology. | 0.713
SN4 Most people I know think that I should learn about AI technology. | 0.701
SN3 My classmates feel that it is necessary to learn about AI technology. | 0.691
AI Anxiety (Anx): M = 3.26, SD = 0.95, Cronbach’s Alpha = 0.840
Anx5 I feel my heart sinking when I hear about AI advancement. | 0.815 | 0.844 | 0.577
Anx1 When I think about AI, I cannot answer many questions about my future. | 0.811
Anx2 When I consider the capabilities of AI, I think about how difficult my future will be. | 0.704
Anx4 I have an uneasy, upset feeling when I think about AI. | 0.700
Perceived Usefulness of AI (PU): M = 4.21, SD = 1.00, Cronbach’s Alpha = 0.829
PU1 Using AI technology enables me to accomplish tasks more quickly. | 0.812 | 0.833 | 0.555
PU4 Using AI technology enhances my effectiveness. | 0.757
PU2 Using AI technology improves my performance. | 0.731
PU3 Using AI technology increases my productivity. | 0.674
AI for Social Good (SG): M = 4.11, SD = 0.88, Cronbach’s Alpha = 0.816
SG2 AI can be used to help disadvantaged people. | 0.805 | 0.819 | 0.532
SG3 AI can promote human well-being. | 0.742
SG1 I wish to use my AI knowledge to serve others. | 0.713
SG5 The use of AI should aim to achieve common good. | 0.650
Attitude toward using AI (ATU): M = 4.05, SD = 1.14, Cronbach’s Alpha = 0.844
ATU2 Using AI technology is pleasant. | 0.846 | 0.847 | 0.648
ATU4 I find using AI technology to be enjoyable. | 0.790
ATU3 I have fun using AI technology. | 0.778
Confidence in learning AI (Confi): M = 3.52, SD = 1.03, Cronbach’s Alpha = 0.890
Confi3 I am certain I can understand the most difficult material presented in the AI classes. | 0.826 | 0.892 | 0.623
Confi5 I am confident I can understand the most complex material presented by the instructor in the AI classes. | 0.798
Confi2 As I am taking the AI classes, I believe that I can succeed if I try hard enough. | 0.793
Confi1 I feel confident that I will do well in the AI classes. | 0.772
Confi4 I am confident I can learn the basic concepts about AI taught in the lessons. | 0.754
AI Optimism (OP): M = 4.26, SD = 1.11, Cronbach’s Alpha = 0.859
OP1 I am hopeful about my future in a world where AI is commonly used. | 0.818 | 0.859 | 0.670
OP2 I always look on the positive side of things in the emerging AI world. | 0.817
OP4 Overall, I expect more good things than bad things to happen to me in the AI enabled world. | 0.821
Behavioral Intention (BI): M = 4.09, SD = 1.06, Cronbach’s Alpha = 0.913
BI1 I will continue to learn AI technology in the future. | 0.873 | 0.915 | 0.730
BI3 I will keep myself updated with the latest AI applications. | 0.861
BI4 I plan to spend time in learning AI technology in the future. | 0.848
BI2 I will pay attention to emerging AI applications. | 0.831
Note: CR: Composite Reliability; AVE: Average variance extracted. Lit: AI literacy; SN: subjective norms; Anx: AI anxiety; PU: perceived usefulness; SG: AI for social good; ATU: attitude towards using AI; Confi: confidence in learning AI; OP: AI optimism; BI: behavioral intention.
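The CR and AVE columns of Table 1 follow the usual formulas and can be reproduced from the standardized loadings. The sketch below checks the AI literacy scale; small discrepancies are expected because the published loadings are rounded to three decimals.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), with error variance 1 - loading^2 per item."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)


def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)


lit = [0.803, 0.793, 0.786, 0.779, 0.750, 0.719]  # Lit1-Lit6, Table 1
print(round(composite_reliability(lit), 3))       # 0.898 (reported: 0.899)
print(round(average_variance_extracted(lit), 3))  # 0.596, as reported
```

The AVE reproduces exactly; the CR differs by about 0.001, consistent with rounding in the published loadings.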
Table 2. Discriminant validity of constructs.
 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1. Lit | (0.772)
2. SN | 0.251 *** | (0.716)
3. Anx | −0.074 | −0.124 ** | (0.760)
4. PU | 0.173 *** | 0.141 ** | −0.064 | (0.745)
5. SG | 0.249 *** | 0.254 *** | −0.170 *** | 0.150 *** | (0.729)
6. ATU | 0.542 *** | 0.302 *** | −0.388 *** | 0.221 *** | 0.376 *** | (0.805)
7. Confi | 0.514 *** | 0.361 *** | −0.096 * | 0.415 *** | 0.320 *** | 0.425 *** | (0.795)
8. OP | 0.418 *** | 0.452 *** | −0.383 *** | 0.268 *** | 0.527 *** | 0.507 *** | 0.458 *** | (0.819)
9. BI | 0.511 *** | 0.417 *** | −0.276 *** | 0.318 *** | 0.533 *** | 0.649 *** | 0.602 *** | 0.692 *** | (0.854)
Note 1. * p < 0.05, ** p < 0.01, *** p < 0.001. 2. Items on the diagonal are the square roots of the average variance extracted; off-diagonal elements are the correlation estimates. 3. Lit: AI literacy; SN: subjective norms; Anx: AI anxiety; PU: perceived usefulness; SG: AI for social good; ATU: attitude towards using AI; Confi: confidence in learning AI; OP: AI optimism; BI: behavioral intention.
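The diagonal entries of Table 2 are the square roots of the AVE values in Table 1, and discriminant validity requires each to exceed the corresponding off-diagonal correlations (the Fornell–Larcker criterion). A minimal check for the Lit construct:

```python
import math

ave_lit = 0.596  # AVE for AI literacy, from Table 1
# Correlations of Lit with the other eight constructs (Table 2).
lit_correlations = [0.251, -0.074, 0.173, 0.249, 0.542, 0.514, 0.418, 0.511]

sqrt_ave = math.sqrt(ave_lit)
print(round(sqrt_ave, 3))  # 0.772, the diagonal entry for Lit
assert all(sqrt_ave > abs(r) for r in lit_correlations)
print("discriminant validity holds for Lit")
```

The same comparison can be repeated for each of the other eight constructs against its row and column of Table 2.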
Table 3. Hypotheses testing results from SEM.
Hypothesis | B Values | t-Values | Standardized Estimate | Status
H1: Lit→PU | 0.139 ** | 3.125 | 0.162 | Accepted
H2: Lit→SG | 0.156 *** | 3.854 | 0.199 | Accepted
H3: Lit→ATU | 0.569 *** | 11.157 | 0.510 | Accepted
H4: Lit→Confi | 0.406 *** | 6.829 | 0.395 | Accepted
H5: Lit→OP | 0.213 *** | 3.577 | 0.206 | Accepted
H6: Lit→BI | −0.030 | −0.527 | −0.026 | Not Accepted
H7: SN→PU | 0.113 * | 1.963 | 0.108 | Accepted
H8: SN→SG | 0.204 *** | 3.876 | 0.213 | Accepted
H9: SN→ATU | 0.127 * | 2.189 | 0.093 | Accepted
H10: SN→Confi | 0.266 *** | 4.751 | 0.211 | Accepted
H11: SN→OP | 0.343 *** | 6.070 | 0.272 | Accepted
H12: SN→BI | 0.027 | 0.475 | 0.018 | Not Accepted
H13: Anx→PU | −0.027 | −0.743 | −0.037 | Not Accepted
H14: Anx→SG | −0.102 ** | −3.073 | −0.152 | Accepted
H15: Anx→ATU | −0.352 *** | −8.890 | −0.370 | Accepted
H16: Anx→Confi | 0.014 | 0.333 | 0.016 | Not Accepted
H17: Anx→OP | −0.270 *** | −6.562 | −0.306 | Accepted
H18: Anx→BI | 0.089 * | 2.081 | 0.087 | Accepted
H19: PU→SG | 0.083 | 1.813 | 0.091 | Not Accepted
H20: PU→ATU | 0.119 * | 2.341 | 0.091 | Accepted
H21: PU→OP | 0.132 ** | 2.562 | 0.110 | Accepted
H22: PU→BI | 0.143 ** | 2.951 | 0.103 | Accepted
H23: PU→Confi | 0.404 *** | 7.850 | 0.336 | Accepted
H24: SG→ATU | 0.275 *** | 4.410 | 0.193 | Accepted
H25: SG→Confi | 0.161 ** | 2.698 | 0.122 | Accepted
H26: SG→OP | 0.510 *** | 7.979 | 0.386 | Accepted
H27: SG→BI | 0.205 ** | 3.081 | 0.134 | Accepted
H28: ATU→OP | −0.008 | −0.124 | −0.008 | Not Accepted
H29: ATU→BI | 0.405 *** | 6.860 | 0.379 | Accepted
H30: ATU→Confi | 0.027 | 0.422 | 0.030 | Not Accepted
H31: Confi→OP | 0.075 | 1.358 | 0.075 | Not Accepted
H32: Confi→BI | 0.237 *** | 4.598 | 0.205 | Accepted
H33: OP→BI | 0.429 *** | 6.007 | 0.372 | Accepted
Note: Lit: AI literacy; SN: subjective norms; Anx: AI anxiety; PU: perceived usefulness; SG: AI for social good; ATU: attitude towards using AI; Confi: confidence in learning AI; OP: AI optimism; BI: behavioral intention. * p < 0.05; ** p < 0.01; *** p < 0.001.
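The Accepted/Not Accepted column in Table 3 can be cross-checked against the t-values under the conventional two-tailed cutoff (|t| ≥ 1.96 for p < 0.05). The sketch below is an illustration of that convention, not the authors’ code, and verifies a few representative rows:

```python
# A few rows from Table 3: (hypothesis, t-value, reported status).
rows = [
    ("H6: Lit→BI",   -0.527, "Not Accepted"),
    ("H7: SN→PU",     1.963, "Accepted"),      # just past the cutoff
    ("H19: PU→SG",    1.813, "Not Accepted"),  # just short of it
    ("H32: Confi→BI", 4.598, "Accepted"),
]


def status(t: float, cutoff: float = 1.96) -> str:
    """Two-tailed z cutoff for p < 0.05; assumed to be the convention
    behind the paper's accept/reject decisions."""
    return "Accepted" if abs(t) >= cutoff else "Not Accepted"


for name, t, reported in rows:
    assert status(t) == reported, name
print("all rows consistent")
```

Borderline cases such as H7 (t = 1.963) and H19 (t = 1.813) illustrate why the limitation section advises caution about paths supported only at p < 0.05.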

Share and Cite

MDPI and ACS Style

Chai, C.S.; Wang, X.; Xu, C. An Extended Theory of Planned Behavior for the Modelling of Chinese Secondary School Students’ Intention to Learn Artificial Intelligence. Mathematics 2020, 8, 2089. https://doi.org/10.3390/math8112089

