Introduction

Artificial Intelligence (AI) refers to technologies that mimic intelligent human behavior by perceiving, reasoning, acting, and interacting with the environment (Dirican, 2015). It is redrawing the boundaries of what humans and machines can do and hence redefining the relationship between human and machine (Jarrahi, 2018). Given its growing affordances, AI is being incorporated into many current organizations and technologies to optimize effectiveness and efficiency, thus requiring workers to form new relationships with the intelligence now provided by machines. This has triggered profound reforms in the education sector (Seldon & Abidoye, 2018). For example, AI advancements have prompted the creation of intelligent teaching and learning machines that cater to adaptive personalization, such as intelligent tutoring systems (Fu et al., 2020; Zawacki-Richter et al., 2019) and pedagogical agents that stimulate students’ learning motivation (Veletsianos & Russell, 2014). Teaching computers to learn by providing students with teachable agents could also be an important way to foster learning (Matsuda et al., 2020). From these developments, it seems obvious that key players in today’s education have to consider the need for students to learn AI. Equipping students with basic AI literacy will provide them with a framework to understand AI as part of their epistemic resources for lifelong learning and working with AI. However, Zawacki-Richter et al. (2019) pointed out that research on AI education in K-12 settings is generally lacking. In addition, current research on AI in education focuses on exploring the creation and use of AI systems to facilitate learning from multiple angles rather than on understanding students’ engagement with AI (Hwang et al., 2020).

Recognizing the potential of AI for education, authorities and organizations have begun formulating policies and curricula to prepare students for the AI-enhanced world. A prominent example is the Chinese Ministry of Education’s effort to educate its citizens about AI (Knox, 2020), which has resulted in the publication of primary and secondary textbooks on AI (e.g., Qin et al., 2019; Tang & Chen, 2018). Given the significance of AI education, Hwang et al. (2020) advocated the need to establish a research database for this nascent area of research, including understanding students’ learning experiences with AI. Unlike traditional technologies, AI is emerging and more disruptive, and its impact is drastic and real (Seldon & Abidoye, 2018).

Future developments of AI provide new ways of approaching problems and improving people’s lives; it is also crucial for us to understand how AI could affect human beings and society in disastrous ways (e.g., by changing the future of work or getting out of control). From an educational perspective, being cognizant of the potentials of AI allows stakeholders to campaign for the need to foster students’ readiness and strengthen their intentions to learn AI. Accordingly, this study aims to understand students’ behavioral intention to learn AI through an extended Theory of Planned Behavior (Fishbein & Ajzen, 2010), which specifies the interrelationships among students’ perceived usefulness (PU), attitude towards using AI (ATU), subjective norms (SN) to learn AI, and basic literacy about AI (AIL). Given the importance of users’ previous knowledge and experience to the acquisition of new abilities and the learning of new concepts (Holzinger et al., 2011), this study selected secondary school students who had studied AI to examine the structural relationships among their learning experiences. The model established can provide useful references for future AI curriculum design. In addition, this study also aimed to provide a nuanced understanding of students’ learning experiences through the moderation effects of students’ AI readiness (RD), social good (SG), and optimism for AI (OP). The study contributes to current concerns about AI education by providing insights on the interplay of the various psychological factors that shape students’ continuous intention to learn AI.

Literature Review

Theory of Planned Behavior and Artificial Intelligence

In this study, the theory of planned behavior (TPB) is employed together with the technology acceptance model (TAM) derived from it (Davis, 1989) to form the theoretical underpinnings for the exploration of factors influencing students’ continuous intention to learn AI. The TPB has been widely used to account for factors that influence users’ behavioral intention to engage with technologies (e.g., Chai et al., 2020; Cheon et al., 2012; Guerin et al., 2018; Teo et al., 2016). However, its application to students’ learning of AI has been limited. Although researchers (Chai et al., 2020, 2021) have identified some factors associated with students’ BI to learn AI using the TPB as a framework, more studies are needed to develop a comprehensive understanding of learners’ planned behavior for learning AI. Specifically, little or no moderation effects were investigated in the previous studies, creating a gap in the literature that the insights generated by this study aim to fill.

Fishbein and Ajzen (2010) theorized human action as an outcome of reasoning that forms behavioral intention (BI). Such reasoning is based on “the information or beliefs people possess about the behavior under consideration” (p. 20). The information people hold shapes their understanding of the context, possible actions, and consequences. However, one’s understanding is not dictated solely by the information received. Individual differences arising from personality traits, beliefs, and experiences also shape the formation of BI. Fishbein and Ajzen identified attitude towards the behavior, subjective norms (SN), and perceived behavioral control as the three main factors accounting for an individual’s BI. Past studies have extensively identified these factors as significant predictors of BI, with substantial variance accounted for (Ajzen, 2012). In this study, BI is operationalized as students’ intention to learn AI. Based on the foregoing review and recent studies (Chai et al., 2020, 2021), it is reasonable to hypothesize that students’ attitude towards using AI (ATU) and their subjective norms (SN) are positively associated with their BI. The hypotheses were formulated as follows:

Hypothesis 1

ATU will be a significant influence on BI.

Hypothesis 2

SN will be a significant influence on BI.

Technology Acceptance Model and AI

Davis (1989) further contextualized the TPB as the TAM, in which attitude towards behavior was formulated as perceived usefulness (PU) and perceived behavioral control was related to perceived ease of use (Bhatti, 2007). Perceived usefulness refers to the extent to which a user believes that using a certain technology would facilitate their performance, while perceived ease of use refers to the belief about the effort needed to use the system (Huang et al., 2020a). Both factors were found to be significant contributors to BI for the use of specific technologies in many subsequent studies (e.g., Kumar & Mantri, 2021; Weng et al., 2018). Studies of the TAM in the contexts of e-learning, mobile learning, and learning about AI provided further support for the significant influence of PU on BI (Buabeng-Andoh, 2021; Chai et al., 2020; Park, 2009). To date, two studies have employed the TAM to explore the use of AI for educational purposes. Chocarro et al. (2021) reported that PU and perceived ease of use could predict teachers’ intention to use chatbots, and Lin et al. (2021) found that PU, perceived ease of use, subjective norms, and attitude towards using AI were significant in explaining medical professionals’ intention to learn to use AI applications for precision medicine. Following the above research findings, the following hypothesis was formulated:

Hypothesis 3

PU will be a significant influence on BI.

Perceived usefulness has also been established to be positively associated with attitude towards using specific technologies (Huang et al., 2020b; Kumar & Mantri, 2021; Teo et al., 2018; Weng et al., 2018). For example, Weng and his colleagues reported that teachers’ intention to use multimedia in the classes they teach was predicted by their perceived usefulness of the multimedia teaching materials. Hence, Hypothesis 4 was formulated.

Hypothesis 4

PU will be a significant influence on ATU.

Literacy and Subjective Norms as Factors Influencing Behavioral Intention

Behavioral intention is influenced by background factors arising from personal beliefs and social influences (Fishbein & Ajzen, 2010; Huang & Teo, 2020; Huang et al., 2019, 2020a). In general, beliefs about literacy refer to the knowledge and competence an individual thinks he or she possesses for a specific action. Recent research on multiple technological literacies, such as computer literacy, information literacy, and digital literacy, can be found in the literature (Nichols & Stornaiuolo, 2019; Stopar & Bartol, 2019). Being technologically literate implies that one knows how a specific type of technology works and can use it to solve problems (Davies, 2011; Moore, 2011). AI literacy (AIL) is an emerging key concept in designing AI curricula (Watkins, 2020), and knowledge and basic understanding of AI would form the basis upon which users’ beliefs, such as their PU of AI, are shaped. In other words, literacy may be an antecedent to PU (Mei et al., 2018). These relationships are captured in the hypotheses below:

Hypothesis 5

AIL will be a significant influence on PU.

Hypothesis 6

AIL will be a significant influence on BI.

Social influence on students’ intention to learn is commonly denoted by subjective norms (SN), which refer to the extent to which people close to the user regard using a technology as important. In an education context, these people include parents, peers, and teachers, whose opinions about pedagogical matters have been found to have a significant influence on students’ career choice and PU (Law & Arthur, 2003; Huang & Teo, 2021). Cheng et al. (2016) investigated students’ attitudes towards e-collaboration and found that SN significantly predicted students’ BI to collaborate electronically. In the context of e-learning, SN frequently exerts a significant influence on students’ PU (Abdullah & Wards, 2016). Thus, Hypothesis 7 was formulated.

Hypothesis 7

SN will be a significant influence on PU.

Moderators: Readiness, Social Good and Optimism

Major technological advancements bring about disruptive changes and have triggered industrial revolutions. While the third industrial revolution was initiated by the personal computer and the Internet, which transformed the world over the last two decades, the fourth industrial revolution is driven by digitization and automation (Hirschi, 2018; Seldon & Abidoye, 2018). AI is one of the enablers of the fourth industrial revolution. In the literature, an instrument known as the Technology Readiness Index was developed to measure the extent to which one is ready to engage with technology (Parasuraman, 2000; Parasuraman & Colby, 2015). Readiness was operationalized by these researchers as an individual’s tendency to use technology as a means to fulfill selected goals. The scale of change forecast for the AI age has similarly prompted discussion about AI readiness. For instance, Holmstrom (2021) proposed a framework for business organizations to assess the readiness of their personnel to be engaged in digital transformation with AI. In the education context, Chai et al. (2021a) adapted a sub-scale from the technology readiness scale to measure primary school students’ sense of readiness. Consequently, based on the previous findings among primary school students, it is reasonable to hypothesize that AI readiness could moderate the interrelationships among the five factors (PU, ATU, SN, AIL, BI) in the proposed extended TPB model.

Hypothesis 8

Readiness will be a significant moderator of the paths stated in Hypotheses 1–7.

Given the potential of AI, designing AI for social good (SG) has met with strong support from AI researchers (Bryson & Winfield, 2017; Tucker, 2019). Floridi et al. (2020) described AI for SG as “the design, development, and deployment of AI systems in ways that (i) prevent, mitigate or resolve problems adversely affecting human life and/or the wellbeing of the natural world, and/or (ii) enable socially preferable and/or environmentally sustainable development” (pp. 1773–1774). This notion resonates with current textbooks and with organizations involved in developing AI education (e.g., https://ai.google/education/social-good-guide/) that emphasize the importance of SG (Qin et al., 2019; Tang & Chen, 2018). Such emphasis is congruent with the commonly accepted aim of education to cultivate caring citizens and with the curriculum focus on computing for the common good in engineering schools (Goldweber et al., 2011). Psychologically, learning that enhances one’s ability to serve society has been found to be a strong motivation for individuals to engage in such activity (Yeager & Bundick, 2009). In addition, research has found that social good may moderate the relationships between factors that positively influence students’ BI to learn AI, such as subjective norms and literacy (Chai et al., 2020, 2021). Hence, Hypothesis 9 is formulated.

Hypothesis 9

Social good will be a significant moderator of the paths stated in Hypotheses 1–7.

Engagement with AI can kindle fears and anxiety among people (Wang & Wang, 2019). This may be attributed to the perception that AI would replace many jobs and has the ability to collect and analyze abundant personal data that authorities could use to manipulate individuals and society (Johnson & Verdicchio, 2017). The negative effects arising from the misuse of AI may cause anxiety that could distort students’ views about AI and hence prevent meaningful learning from taking place. On the other hand, optimism (OP) towards AI may alleviate students’ fears and anxiety and enhance their BI to learn AI (Chai et al., 2020). Optimism is a psychological trait that encourages positive expectancies about future success (King & Caleon, 2021). Optimism is a factor in the Technology Readiness Index (Parasuraman, 2000) and has been reported to be associated with perceived ease of use, PU, and BI (Walczuch et al., 2007). Optimism has also been found to have an influence on AIL, SN, ATU, and BI to learn AI (Chai et al., 2020). From the above discussion, it is reasonable to expect optimism to moderate the relationships in the research model (Fig. 2).

Hypothesis 10

Optimism will be a significant moderator of the paths stated in Hypotheses 1–7.

Based on the above review, two research questions were formulated to guide this study:

Research question 1

To what extent does the proposed research model (Fig. 1) explain users’ behavioral intentions to learn AI?

Research question 2

Do readiness, social good, and optimism moderate all the relationships (Fig. 2) in the proposed research model?

Fig. 1

Research model. (Note: PU = perceived usefulness, BI = behavioral intention, ATU = attitude towards using, SN = subjective norm, AIL = AI literacy, RD = readiness, SG = social good, OP = Optimism)

Fig. 2

Research model indicating moderation effects. (Note: PU = perceived usefulness, BI = behavioral intention, ATU = attitude towards using, SN = subjective norm, AIL = AI literacy, RD = readiness, SG = social good, OP = Optimism)

Method

Participants

Purposive sampling was adopted for this study. The participants were 511 students who had completed an AI enrichment program in secondary schools in northeastern Chinese cities such as Qingdao and Tianjin. The participants were aged between 14 and 18 (mean = 14.3, SD = 1.52; grades 8–12; female = 44.6%). They were invited via WeChat, informed about the purpose of the research, and provided with the link to the online survey questionnaire. Participation was voluntary and, upon completion of the online questionnaire, participants received a small token of appreciation provided over WeChat.

The AI course context

Participants attended an AI enrichment program prior to completing the survey questionnaire. In this course, students were taught to write a program to create an intelligent fire-fighting robot that could be launched to autonomously search for and extinguish a fire in a building. In the process, students learnt about computer vision, heat sensors, the working of a robot car, and coding the robot using a Python-based programming language. The success criterion was finding and extinguishing the fire in the shortest time. To do so, the robot had to be programmed to identify the sources of fire and compute the fastest route based on environmental data input. From these activities, students learnt the core concepts of data representation, visual recognition, machine learning, and the coding of algorithms, which are commonly identified as core concepts for an AI curriculum in schools (Qin et al., 2019; Tang & Chen, 2018). The 32 one-hour sessions were taught after school hours as an enrichment program.
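To illustrate the kind of route-planning logic involved, the sketch below uses breadth-first search over a simple grid map of the building. It is a minimal illustration only: the grid encoding, coordinates, and function name are hypothetical and are not taken from the actual course materials.

# Illustrative sketch only: breadth-first search for the shortest route from the
# robot's start cell to a detected fire on a grid map (0 = open floor, 1 = wall).
from collections import deque

def shortest_route(grid, start, fire):
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}                 # cell -> previous cell on the path
    while queue:
        cell = queue.popleft()
        if cell == fire:                      # fire reached: rebuild the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                               # fire not reachable

floor_plan = [[0, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 0]]
print(shortest_route(floor_plan, start=(0, 0), fire=(0, 3)))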

Instrument

The survey questionnaire in this study measured eight factors with a total of 34 items adapted from published sources (Chai et al., 2020, 2021). Table 1 shows the details of the survey instrument. Each item was rated on a 6-point Likert scale (1 = strongly disagree to 6 = strongly agree). The internal consistency (Cronbach’s alpha) of each factor ranged from 0.82 to 0.91.

Table 1 Details of the Survey Instrument
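As a brief illustration of how the internal consistency values above can be computed from raw item responses, the snippet below implements the standard Cronbach’s alpha formula. It is a minimal sketch on simulated data; the construct name, item labels, and responses are invented, not the study’s data.

# Illustrative only: Cronbach's alpha for one factor from simulated 6-point
# Likert responses. Item names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = pd.DataFrame(
    rng.integers(1, 7, size=(511, 4)),        # 511 respondents x 4 items, scored 1-6
    columns=["PU1", "PU2", "PU3", "PU4"])

def cronbach_alpha(df):
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)        # variance of each item
    total_var = df.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(round(cronbach_alpha(items), 3))        # near 0 here because the items are random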

Data Analysis

The skewness and kurtosis were examined to ensure that the data did not violate the univariate normality assumption. Results showed that they were within the recommended values (skewness = -0.831 to 0.188; kurtosis = -1.01 to 0.058). To test the measurement model, a confirmatory factor analysis (CFA) on a congeneric model with uncorrelated errors using maximum likelihood estimation was performed. To test the multivariate normality of the observed variables, Mardia’s (1970) normalized multivariate kurtosis value was computed. The multivariate value in this study was 110.887, less than the recommended value of p(p + 2) (Raykov & Marcoulides, 2008), where p stands for the number of observed items (21 × 23 = 483), supporting the assumption of multivariate normality of the data in this research. In addition, the goodness-of-fit indices, composite reliability (CR), average variance extracted (AVE), and discriminant validity were computed and examined. Next, the structural model and the seven hypotheses were tested. Lastly, the moderation effects of readiness, social good, and optimism on the relationships in the proposed model were tested and analyzed.
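The sketch below shows one common way such a screening could be carried out in Python: univariate skewness and kurtosis per item, followed by Mardia’s multivariate kurtosis compared against the p(p + 2) reference value. It is illustrative only and runs on randomly generated data; the item matrix and dimensions are assumptions, not the study’s data.

# Illustrative normality screening on a hypothetical 511 x 21 item matrix.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(size=(511, 21))

print("skewness:", np.round(stats.skew(X, axis=0), 2))
print("kurtosis:", np.round(stats.kurtosis(X, axis=0), 2))     # excess kurtosis per item

def mardia_kurtosis(X):
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, ddof=1))
    d2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)   # squared Mahalanobis distances
    return (d2 ** 2).mean(), p * (p + 2)                       # statistic and reference value

b2p, reference = mardia_kurtosis(X)
print(f"Mardia's multivariate kurtosis = {b2p:.1f}; reference p(p + 2) = {reference}")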

Results

Testing the measurement model

In determining the fit of the measurement model, several fit indices were used: the minimum fit function chi-square (χ2) and the ratio of the χ2 to its degrees of freedom (χ2/df), with a value lower than 3.0 regarded as acceptable (Carmines & McIver, 1981). The Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI) should be above 0.90 to indicate an acceptable fit (Hair et al., 2010). In addition, the Root Mean Square Error of Approximation (RMSEA), with values in the range of 0.08 to 0.10, was regarded as indicating an acceptable fit (Browne & Cudeck, 1993). Finally, the Standardized Root Mean Square Residual (SRMR), with a value less than 0.08, was used to indicate an acceptable fit (Hu & Bentler, 1999). The CFA yielded an acceptable model fit (χ2/df = 2.343, CFI = 0.955, GFI = 0.929, TLI = 0.948, RMSEA = 0.051 [0.045, 0.057], SRMR = 0.0738). The composite reliability (CR) and average variance extracted (AVE) were used to test the reliability and convergent validity of each variable. Fornell and Larcker (1981) stated that CR and AVE values of 0.50 or above can indicate adequate reliability of the measurement model. Furthermore, the standardized estimate (SE) of each item was tested, with a value higher than 0.50 indicating that an item contributes significantly to explaining its underlying construct (Hair et al., 2010). Based on the information presented in Table 2, we find support for an acceptable measurement model fit.

Table 2 Factor loadings of constructs
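For reference, CR and AVE can be derived directly from standardized factor loadings, as in the short sketch below. The loading values here are invented for illustration and are not those reported in Table 2.

# Illustrative only: composite reliability (CR) and average variance extracted
# (AVE) computed from hypothetical standardized loadings of one construct.
import numpy as np

loadings = np.array([0.78, 0.82, 0.75, 0.80])

def composite_reliability(lam):
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(lam):
    return (lam ** 2).mean()

print("CR  =", round(composite_reliability(loadings), 3))
print("AVE =", round(average_variance_extracted(loadings), 3))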

Discriminant validity, which reflects the extent to which a variable is unique and not just a reflection of other variables (Peter & Churchill, 1986), was assessed using the Fornell–Larcker criterion (Fornell & Larcker, 1981). To satisfy this criterion, the square roots of the AVEs for two latent variables must each be greater than the correlation between those two variables. In Table 3, the square roots of the AVEs are highlighted in bold along the diagonal, showing that the Fornell–Larcker criterion is met; that is, all the diagonal values are greater than the off-diagonal numbers in the corresponding rows and columns.

Table 3 Discriminant validity of constructs
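A compact way to verify the Fornell–Larcker criterion is sketched below: the square root of each construct’s AVE (the diagonal) is compared with that construct’s correlations with all other constructs. The AVE and correlation values are invented for illustration and are not those in Table 3.

# Illustrative Fornell-Larcker check with hypothetical AVEs and correlations.
import numpy as np
import pandas as pd

constructs = ["PU", "ATU", "SN", "AIL", "BI"]
ave = pd.Series([0.62, 0.58, 0.66, 0.55, 0.70], index=constructs)
corr = pd.DataFrame(
    [[1.00, 0.45, 0.38, 0.30, 0.52],
     [0.45, 1.00, 0.41, 0.28, 0.49],
     [0.38, 0.41, 1.00, 0.33, 0.47],
     [0.30, 0.28, 0.33, 1.00, 0.36],
     [0.52, 0.49, 0.47, 0.36, 1.00]],
    index=constructs, columns=constructs)

sqrt_ave = np.sqrt(ave)
for c in constructs:
    others = corr.loc[c].drop(c)              # correlations with the other constructs
    satisfied = bool((sqrt_ave[c] > others).all())
    print(f"{c}: sqrt(AVE) = {sqrt_ave[c]:.3f}, "
          f"max correlation = {others.max():.2f}, criterion met = {satisfied}")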

Testing the structural model

A test of the structural model (Fig. 1) was conducted, and the fit indices indicated that the model was adequate (χ2/df = 3.302, CFI = 0.922, GFI = 0.902, TLI = 0.912, RMSEA = 0.067 [0.062, 0.073]) (Hair et al., 2010). In addition, the R2 was 0.65 for BI, 0.08 for ATU, and 0.04 for PU. All hypothesized relationships in this study were supported (Table 4).

Table 4 Results of relationships
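As a rough indication of how such a model can be specified and estimated, the sketch below uses the semopy package with lavaan-style syntax. This is not the authors’ code: the software choice, the item names (three indicators per construct), and the synthetic data are all assumptions made for illustration.

# Minimal SEM sketch of the hypothesized measurement + structural model using semopy.
# Data are synthetic; column names such as PU1..BI3 are hypothetical.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(3)
n = 511
cov = np.full((5, 5), 0.4) + 0.6 * np.eye(5)            # correlated latent scores
latent = rng.multivariate_normal(np.zeros(5), cov, size=n)
cols = {}
for j, name in enumerate(["PU", "ATU", "SN", "AIL", "BI"]):
    for k in range(1, 4):
        cols[f"{name}{k}"] = 0.8 * latent[:, j] + rng.normal(scale=0.6, size=n)
data = pd.DataFrame(cols)

model_desc = """
PU =~ PU1 + PU2 + PU3
ATU =~ ATU1 + ATU2 + ATU3
SN =~ SN1 + SN2 + SN3
AIL =~ AIL1 + AIL2 + AIL3
BI =~ BI1 + BI2 + BI3
ATU ~ PU
PU ~ AIL + SN
BI ~ ATU + SN + PU + AIL
"""

model = semopy.Model(model_desc)
model.fit(data)                                         # maximum likelihood estimation
print(semopy.calc_stats(model))                         # chi2, CFI, TLI, RMSEA, etc.
print(model.inspect())                                  # path estimates and p-values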

Testing the moderation effects of readiness, social good and optimism

A path-by-path method of moderation analysis revealed that readiness (RD), social good (SG), and optimism (OP) moderated some relationships in the research model, with chi-square difference values exceeding the 90% threshold, as shown in Table 5. In particular, the results indicate that RD moderated the paths PU→ATU, AIL→PU, and PU→BI, while SG moderated the paths PU→BI, ATU→BI, and SN→BI. Finally, OP was a moderator for the paths PU→BI and SN→BI.

Table 5 Results of moderation effects
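One common way to run such a path-by-path moderation test is as a chi-square difference between a model that constrains a given path to be equal across groups (e.g., lower versus higher readiness) and a model that frees it, evaluated against the 90% threshold. The grouping approach and the chi-square values below are assumptions for illustration, not the figures in Table 5.

# Illustrative chi-square difference test for one moderated path.
from scipy import stats

chi2_constrained, df_constrained = 1285.4, 455    # hypothetical constrained (equal-path) model
chi2_free, df_free = 1282.1, 454                  # hypothetical unconstrained model

delta_chi2 = chi2_constrained - chi2_free
delta_df = df_constrained - df_free
critical_90 = stats.chi2.ppf(0.90, delta_df)      # 90% critical value
p_value = stats.chi2.sf(delta_chi2, delta_df)

print(f"delta chi2 = {delta_chi2:.2f} on {delta_df} df; "
      f"90% critical value = {critical_90:.2f}; p = {p_value:.3f}")
print("Path moderated at the 90% level:", delta_chi2 > critical_90)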

Discussion

This study proposed an extended Theory of Planned Behavior (TPB) and, using structural equation modeling, tested seven hypotheses that were formulated to represent the interrelationships in the proposed model. Overall, the findings supported that students’ BI to learn AI can be explained by the proposed TPB model, with 65% of the variance in BI accounted for by its antecedents (AIL, PU, SN, and ATU). The study provides support for the applicability of the TPB and TAM as theoretical underpinnings to help researchers understand students’ BI to learn AI. In general, the findings were mostly congruent with a recent study (Chai et al., 2020), although the model obtained in this study seems to be more consistent with the TPB. Given the current recognition of the importance of AI as a game changer for how we learn and work (Seldon & Abidoye, 2018), this study provides valuable insights for educators seeking to understand students’ learning experiences. The findings further address the current lack of research about AI education in primary and secondary settings (Zawacki-Richter et al., 2019). With current AI research focusing on system design and testing (Fu et al., 2020; Hwang et al., 2020), this study provides a validated model for research into the teaching and learning of AI in education settings. We would argue that it is important to equip students with AI knowledge to better prepare them to leverage emerging AI technologies for lifelong learning with and about AI, which is necessary for the dynamic workplaces they will be confronted with.

Three hypotheses associated with the TAM were supported in this study. These suggest that, to foster students’ intention to learn AI, it would be important for teachers and AI program developers to foster students’ understanding of the usefulness of AI and a positive attitude towards using it. In addition, perceived usefulness was positively associated with attitude towards using AI (i.e., Hypotheses 1, 3, and 4). These supported relationships are in general agreement with the myriad of TAM studies that support the significant role of perceived usefulness and attitude towards use in predicting users’ behavioral intention (e.g., Buabeng-Andoh, 2021; Kumar & Mantri, 2021; Park, 2009; Weng et al., 2018). Technologies are created to be useful, but there is no guarantee that users will possess a positive attitude towards them or intend to use them. For example, nuclear power, regarded by many as the epitome of human invention, is not welcomed in some parts of the world for its destructive power (e.g., Bian et al., 2021). In contrast, despite possible abuses of AI, the Chinese secondary students in this study held positive views on the usefulness of AI and had a positive attitude towards using it. They were trained to build an intelligent firefighting robot, which situated learning in a socio-technological context that highlights usefulness and social good. There are countless possibilities for structuring such examples as design-based challenges that require some form and level of intelligence to resolve. To cater to students’ individual needs, training programs should offer different challenges, collectively designed by teachers from diverse backgrounds, for students to choose from (Chiu et al., 2021). Empowering students to formulate their own challenges based on their personal concerns could also be a viable strategy. The findings in this study also supported the notion that, to learn a specific technology as subject matter (such as AI), perceived usefulness and attitude towards using that technology are important factors. The current AI textbooks in China (Qin et al., 2019; Tang & Chen, 2018) are structured around useful applications of AI, including cancer diagnosis and language translation. However, it is more important that students understand how the affordances of AI could be harnessed to perform tasks that they can relate to and experience in their daily life.

Subjective norms were found to be a significant influence on BI and PU (i.e., Hypotheses 2 and 7). This is consistent with the Theory of Planned Behavior (Ajzen, 2012), recent research in e-learning (Abdullah & Wards, 2016), and language teaching in China (Huang et al., 2021), where SN predicts BI. In the Chinese education context, teachers, school authorities, and the government are regarded as significant opinion leaders. With the advocacy of the Chinese government for AI education in schools (Knox, 2020), and considering the collectivist sociocultural inclination of Chinese society (Hofstede, 2001), it follows that students’ intention to learn AI in this study was influenced by these opinion leaders. Given the important role that subjective norms play in AI acceptance, key stakeholders in education (e.g., school leaders, teachers, peers, etc.) who exert influence on young students’ opinions should share their positive experiences of engaging with AI with students in the hope that, over time, students will form an impression that AI is useful for their learning, hence strengthening their intention to use AI. The platforms for such exchanges include classrooms, seminars, and training workshops.

In recent years, AI literacy has captured the attention of many educators (Watkins, 2020). This study found that AI literacy could predict students’ BI (Hypotheses 5 and 6), which is congruent with the theory of planned behavior, which posits that knowledge about an object is key in deciding whether one would engage with that object (Fishbein & Ajzen, 2010). Consistent with previous studies (e.g., Chai et al., 2020, 2021), AI literacy was also found to be a significant predictor of perceived usefulness in this study.

Three factors (readiness, social good, and optimism) were hypothesized to moderate the path relationships in the proposed model (Fig. 2).

Readiness indicates a positive confidence with regard to using technologies to accomplish personal and work-related goals without uncontrollable outcomes (Parasuraman & Colby, 2015). The notion of readiness is akin to perceived behavioral control in the TPB, which accounts for self-efficacy and controllability (Ajzen, 2002). In this study, readiness moderated the paths PU→BI and PU→ATU, indicating the role of students’ personal assessment of their ability to use and control the outcomes of using AI technologies in influencing their BI and ATU. This constitutes a more subtle understanding of usefulness in that what is perceived to be useful can be influenced by the perception of one’s ability to control the outcomes. The teaching of AI should inspire students’ confidence that the technology and its use can be managed and regulated through personal agency.

The non-moderated paths (SN→PU, SN→BI, ATU→BI, AIL→PU, AIL→BI) suggest that the SN and AIL paths were not influenced by the notion of readiness. This suggests that the significant positive relations of these paths were strong enough that students’ perception of readiness did not moderate the statistical relationships.

Of the four paths hypothesized to influence BI, social good was found to moderate three (i.e., PU→BI, SN→BI, ATU→BI) but not AIL→BI. The latter finding suggests that understanding the core concepts of AI (literacy) was sufficient to trigger students’ desire to learn AI. Students who understand how AI technologies can be useful for addressing problems and challenges in society are more likely to intend to learn AI (i.e., PU→BI). In addition, support from teachers, friends, and parents (i.e., subjective norms) for students to learn AI (i.e., SN→BI) is likely to be influenced by using AI for social good, given the collectivist character of Chinese society (Hofstede, 2001). As an identified focus of most AI curricula (Floridi et al., 2020; Tang & Chen, 2018), promoting social good to students learning AI is congruent with digital citizenship education. The promotion of social good would also enhance students’ attitude towards using AI (i.e., ATU→BI). For example, in a well-designed AI curriculum, the role of AI in predicting natural hazards and disasters or increasing diagnostic precision when interpreting medical images could be highlighted (see curriculum design in Chiu et al., 2022).

The paths that were not moderated by social good (SN→PU, PU→ATU, AIL→PU, AIL→BI) may suggest that students’ perception of social good did not statistically influence these significant positive paths.

Optimism was found to moderate the relationships between perceived usefulness and subjective norms on the one hand and students’ intention to learn AI on the other (i.e., PU→BI; SN→BI). The moderated pattern was similar to that of social good except for ATU→BI. Optimism is a personality trait (Parasuraman, 2000; King & Caleon, 2021) that may help to reduce fears and encourage positive reactions. The finding of this study is congruent with previous studies (Walczuch et al., 2007; Chai et al., 2020), which found people to be concerned about the future development of AI technologies and worried that these technologies might get out of control and affect society in disastrous ways (Johnson & Verdicchio, 2017; Wang & Wang, 2019). Therefore, when one is not inclined to view AI technologies in an optimistic way, the effects of perceived usefulness and of the opinions of socially significant others on BI would be diminished. From the literature, AI can be abused to identify, track, monitor, and analyze individuals’ personal data with new levels of power and speed and in ways that can invade personal privacy (Bryson & Winfield, 2017; Tucker, 2019). Therefore, it is very important to emphasize AI ethics and human bias when designing AI teaching units (Chai et al., 2021) and to make sure the ethical, legal, and societal issues are fully considered (Mueller et al., 2021). In addition, some of the concerns, fears, and anxieties are caused by misunderstanding of and confusion about what AI technologies are and can do (Wang & Wang, 2019). While optimism as a personality trait may not be easily changed, it seems important to foster in students a positive outlook through well-designed curricula that keep students informed and boost their confidence in the applicability of AI technologies (Chai et al., 2020).

The paths that were not moderated by optimism (SN→PU, PU→ATU, ATU→BI, AIL→PU, AIL→BI) indicate that, as with the other moderators, the AIL paths were not influenced, and that SN→PU, PU→ATU, and ATU→BI were also not influenced by optimism. This could be because optimism is a personality trait that possesses a level of stability different from that of the other variables in this study.

All three moderators influenced the path between PU and BI. This implies that students’ views about learning AI to advance personally selected goals (Parasuraman & Colby, 2015), to create a personally bright future (King & Caleon, 2021), and to contribute to society (Floridi et al., 2020) are related to their perception of the usefulness of AI and their intention to learn AI. The findings of this study refine our current understanding of what may constitute and influence perceived usefulness. They also provide insights to support our view that AI curriculum design should highlight how acquiring AI knowledge can put students in a strong position in an AI-enhanced world, thereby fostering their intention to learn.

Limitations of the study

This study relied on students’ self-reported data to measure items from the eight constructs specified in a proposed model within the TPB and TAM framework. As such, the findings may be context sensitive. Given that students’ responses were curriculum-dependent, the proposed model, albeit statistically valid, may have limited generalizability to other contexts. In addition, perceived ease of use, a generic factor of the TAM, was not included in this study; similarly, perceived behavioral control, a generic factor of the TPB, was also excluded. Future research may consider including these factors and other learning contexts in the research model. In a similar vein, there are different ways to contextualize other relevant attitudes or background factors for the learning of AI. Future research may argue for other relevant factors to shed more light on this area of research.

Conclusions

Overall, this study has contributed to the current literature by providing a psychological model of learning AI based on the TPB and TAM. It highlights important factors that AI curriculum designers should consider. As one of the first few studies of AI education in the secondary education sector (Zawacki-Richter et al., 2019), it has the potential to lay a foundation for subsequent studies, as issues related to AI and education are likely to continue to capture the attention of researchers in the foreseeable future.