Article

Chatbot Technology Use and Acceptance Using Educational Personas

by Fatima Ali Amer jid Almahri 1,2,*, David Bell 2 and Zameer Gulzar 3

1 Department of Information Technology, College of Computing and Information Sciences, University of Technology and Applied Sciences-Salalah, Salalah 211, Oman
2 Department of Computer Science, College of Engineering, Design and Physical Sciences, Brunel University London, London UB8 3PH, UK
3 MIE-SPPU, Institute of Higher Education, Doha 14868, Qatar
* Author to whom correspondence should be addressed.
Informatics 2024, 11(2), 38; https://doi.org/10.3390/informatics11020038
Submission received: 29 January 2024 / Revised: 5 May 2024 / Accepted: 10 May 2024 / Published: 3 June 2024
(This article belongs to the Section Human-Computer Interaction)

Abstract:
Chatbots are computer programs that mimic human conversation using text, voice, or both. Users’ acceptance of chatbots is strongly influenced by their persona. As users interact with chatbots, they develop a sense of familiarity; the technology becomes more approachable, favorable opinions of it are fostered, and users engage with it more readily. In this study, we examine the moderating effects of persona traits on students’ acceptance and use of chatbot technology at higher education institutions in the UK, using an Extended Unified Theory of Acceptance and Use of Technology (Extended UTAUT2). Data were collected through a self-administered questionnaire from 431 undergraduate and postgraduate computer science students, with the variables associated with chatbot acceptance measured on a Likert scale. The data were evaluated using Structural Equation Modelling (SEM) coupled with multi-group analysis (MGA) in SmartPLS3, and the estimated Cronbach’s alpha values supported the reliability of the findings. The results showed that the factors influencing students’ adoption and use of chatbot technology were habit, effort expectancy, and performance expectancy. Additionally, grade and educational level were found not to moderate the relationships in the Extended UTAUT2 model. These results are important for improving user experience, and they have implications for academics, researchers, and organizations, especially in the context of native chatbots.

1. Introduction

Chatbots (also known as Conversational Agents, bots, IM bots, Smartbots, or Talkbots) are computer programs designed to simulate an intelligent conversation with one or more human users via auditory or textual methods using natural language. Well-known examples of chatbots are Apple Siri and Amazon Alexa. Typical functionalities include providing information about the weather, scheduling meetings, tracking flights, giving up-to-date news, and finding restaurants, to name a few. Chatbots can also be used as a powerful tool in education. They can work as a language tutor, as in the example known as Sofia [1]. Chatbots can also assist with the teaching of mathematics and help users to solve algebra problems, as with the Pari and Mathematica chatbots. In the study of medicine, chatbots help medical students by simulating patients and providing responses during practice interviews; an example of this type of chatbot is the Virtual Patient bot (VPbot).
This paper reports on part of a study that comprised three stages or iterations (Figure 1). The first iteration identified student groups at Brunel University London by building data-driven persona development models for university students. The outcomes of this stage were the persona template, persona model, and proposed data-driven development method [2]. The second iteration identified acceptable chatbot features by evaluating an extended UTAUT2 model. The third iteration will evaluate the effectiveness of the persona modeling approach by designing and developing a chatbot instantiation (future work).
This paper uses a persona lens in the form of persona elements to create an Extended UTAUT2 model. Each iteration followed a build-and-evaluate cycle to produce artifacts and to design the cycle steps [3].

1.1. A Persona Lens

The persona concept was coined in 1999 by Alan Cooper in Chapter 9 of his book The Inmates are Running the Asylum [4]. Personas have since become a conventional and widely used design method. However, there is no standard definition of ‘persona’. The literature presents personas as user-centered design (UCD) methods that represent a group of users who share common goals, attitudes, and behaviors when interacting with a product [5,6]. As attention to the usability of systems, websites, and products grew [7,8], so did customer-centered design, often known as ‘human-centered design’, a style of design that involves consumers or users in the design process [7,8]. Although UCD has expanded rapidly, there is still considerable dissatisfaction with the design of current products. Many businesses have neglected to prioritize customer needs as the most important part of the design process [9]. As a result, a large number of design processes have failed to reach the intended customers or consumers [8,10]. Usability concerns with goods, systems, and websites have been thoroughly documented, indicating that current product design procedures need to be improved. Many products are returned because they are difficult to operate or because the users are unable to use the features they want [8].
Within current UCD methodology, personas provide a solution to some of these issues. Cooper [4] developed the ‘persona’ notion as a design process methodology [11]. Personas are made-up archetypes of real users, not real individuals [11]. “A precise portrayal of a hypothetical user and what s/he desires to accomplish” neatly summarizes personas ([12], p. 1). A ‘target customer characterisation’, ‘profile’, or ‘user archetype’ is another term for a persona [13]. Persona development is a different way of representing and communicating the demands of people [8]. A great deal of research has been conducted on persona templates [14,15], creating personas [13,16,17], and determining what they are good for [4,13,17]. Persona development is becoming more popular as a design technique as it identifies the fundamental characteristics of consumers, which can be exploited in product design and marketing [17]. It is also a cost-effective solution to enhance users’ experiences with products and services [13]. Furthermore, personas provide powerful representations of target users to product designers [8].
Personas are used to represent common clusters of traits [18,19], for example, to identify groups of students with similar characteristics or features. Personas also provide other advantages, including (1) gaining a deeper understanding of users; (2) determining early design needs; (3) aiding design thinking; (4) ensuring a focus on users’ goals, requirements, and characteristics; (5) facilitating stakeholders’ communication; and (6) considering political and social concerns in design decisions [6]. However, according to [12], there are several issues with persona development. One of these is the development of personas that are not based on first-hand evidence [20], which is not the case in this study. Personas can be unreliable if they lack a clear connection to the facts, such as when they are created by a committee [20]. Several studies cover best practices with personas [20,21]. Creating personas can be challenging when they are not based on first-hand customer data [12,22] or when the sample size is statistically insufficient [12,22]. Data-driven personas have been proposed [12,23], for example, based on clickstreams [23,24] or statistical data [12,23]. Machine learning methods, more specifically K-means clustering, are used to build personas [2,18,19,25].

1.2. The Extended Unified Theory of Acceptance and Use of Technology (UTAUT2)

The acceptance and use of technology is both a popular and practical subject, and several models have been developed from theories within sociology and psychology [26,27]. Venkatesh et al. [27] developed the Unified Theory of Acceptance and Use of Technology (UTAUT) to synthesize existing acceptance theories and models and to study user acceptance and use of technology in an organizational context. UTAUT was developed as a result of reviewing eight main theories and models of technology acceptance: the Theory of Reasoned Action (TRA), the Technology Acceptance Model (TAM), the Motivational Model (MM), the Theory of Planned Behavior (TPB), Combined TAM and TPB, the Model of PC Utilization, Diffusion of Innovation Theory (DoI), and Social Cognitive Theory (SCT). UTAUT consists of four constructs, as shown in Figure 2, namely performance expectancy, effort expectancy, social influence, and facilitating conditions. These constructs affect behavioral intention (BI) and the usage of technology, and their effects are moderated by four variables: age, gender, experience, and voluntariness of use [27].
UTAUT constructs are similar to constructs in other models. For example, performance expectancy (PE) and effort expectancy (EE) are similar to two TAM constructs, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), respectively. Moreover, social influence (SI) is similar to the Social Norm (SN) in TRA, and facilitating conditions (FC) are similar to perceived behavioral control (PBC) in TPB. UTAUT has been applied in multiple sectors, such as e-government [28], online banking [29,30], and health/hospital IT [30]. However, it has received less attention than some other models, and there are criticisms related to its explanatory power and parsimony [31].
UTAUT was extended by Venkatesh et al. (2012) [26] and named the Extended Unified Theory of Acceptance and Use of Technology (UTAUT2). UTAUT and UTAUT2 were developed for different environments: the former was developed for an organizational context, while the latter was conceived for a consumer context [26]. As well as the four constructs found in UTAUT (PE, EE, FC, and SI), UTAUT2 has three additional constructs: hedonic motivation (HM), price value (PV), and habit (HT), as shown in Figure 3. UTAUT has four moderators—experience, gender, age, and voluntariness of use—while UTAUT2 contains only the first three, without voluntariness of use.

1.3. From Persona Elicitation to Technology Acceptance

Earlier work, iteration 1, used K-means clustering to identify trait clusters [32]. This produced eight personas, whose features are shown in Figure 4, including demographic data (i.e., age and gender), educational data (i.e., level of study), virtual engagement (i.e., engagement with Virtual Learning Environments), physical engagement (i.e., attendance), and performance data (i.e., grade). Further features were added from the literature review to build the persona model (Figure 5). These persona elements were incorporated into the survey questions (see Appendix A and Appendix B) as well as into the proposed Extended UTAUT2 model (Section 1.4, Figure 6). To explain this further, persona design typically names and describes user archetypes with a mix of visual and narrative content (as seen in the Top Student example in Figure 5). Our prior work [32] added a more analytical approach to persona design, using K-means to uncover clusters (personas) and their distinguishing characteristics. Subsequently, when constructing an extended UTAUT2 model, we were able to select either the composite persona name or the unique attributes of the persona. Unique attributes were chosen as they enable further exploration of attribute importance. This added detail may also inform persona design with a prioritized focus on the narrative around important factors.
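To make the clustering step concrete, the sketch below shows how such persona clusters could be derived with K-means; the file name, column names, and the reuse of eight clusters are illustrative assumptions, not the exact pipeline of [32].

```python
# Illustrative sketch: clustering student records into eight candidate personas.
# File and column names are hypothetical; features are standardized before K-means.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

students = pd.read_csv("student_records.csv")
features = students[["age", "attendance", "vle_engagement", "grade"]]

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=42).fit(X)

students["persona"] = kmeans.labels_
# Per-cluster means give the distinguishing attributes used to name each persona.
print(students.groupby("persona")[features.columns].mean())
```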
The objectives of this study are as follows:
To investigate how specific student groups (modeled as personas) differ in their use and adoption of chatbot technology.
To comprehensively examine the UTAUT model and its extension, UTAUT2, in a variety of contexts, with an emphasis on their structures, moderators, and applications.
To identify the main determinants of students’ acceptance and use of chatbot technology using UTAUT2.
To improve understanding in the area of technological acceptance and to inform decision-making processes by elucidating the factors influencing technology adoption and usage.
These objectives are essential for improving adoption strategies, boosting decision-making processes, and comprehending user variation in technology adoption. Deepening our understanding of user preferences and demands, this study looks at UTAUT and UTAUT2 in a variety of scenarios. The understanding generated will direct the creation of efficient technology solutions that meet user requirements and promote higher adoption and utilization. Furthermore, decision-makers across sectors can benefit from insights into user approval, which can help them to formulate effective implementation strategies. Determining the factors impacting adoption aids in the improvement of adoption strategies, resulting in more seamless integration. This addition to the corpus of information on technological acceptability will stimulate new research and creative thinking in the development and application of technology.

1.4. The Proposed Conceptual Model and Hypotheses

This section covers the design of the proposed Extended UTAUT2 model. UTAUT2 [26] is employed to examine how students use technology, in this case, the chatbot. It explains behavioral intention (BI) and use through seven constructs: performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit. The moderators in UTAUT2 are age, gender, and experience. However, in this study, price value was excluded as the proposed chatbot was free to use. An expanded list of moderators, including the UTAUT2 moderators, was also included in the proposed model and tested in the evaluation phase. The moderator of the proposed model is the persona; K-means clustering analysis was applied to the students’ data to build personas. Further information is provided in [2,32]. The results of the data analysis of the first iteration in [2] showed that there are seven main persona attributes: age, gender, experience, physical engagement (attendance), virtual engagement (level of engagement with VLEs), educational level, and performance (grade). Figure 6 shows the proposed conceptual framework (Extended UTAUT2). Further discussion and justification of the research hypotheses are provided in the following subsections.
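For readers who prefer an algebraic view, the structural relations tested below can be written in a simplified linear form (our notation, not taken verbatim from [26]):

$$\mathrm{BI} = \beta_1\,\mathrm{PE} + \beta_2\,\mathrm{EE} + \beta_3\,\mathrm{SI} + \beta_4\,\mathrm{FC} + \beta_5\,\mathrm{HM} + \beta_6\,\mathrm{HT} + \varepsilon_{\mathrm{BI}}, \qquad \mathrm{USE} = \beta_7\,\mathrm{BI} + \varepsilon_{\mathrm{USE}},$$

where each path coefficient \(\beta_i\) is allowed to differ between the groups defined by a persona attribute (age, gender, experience, attendance, virtual engagement, educational level, and grade), which is how the moderation hypotheses are assessed in the multi-group analysis.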
1. Performance Expectancy
Performance expectancy (PE) is defined as “the degree to which an individual believes that using the system will help him or her to attain gains in job performance” [27]. Prior research has identified PE as a significant predictor of BI [27,28].
Hypothesis 1 (H1): 
PE will have a positive effect on students’ BI to use chatbots.
2. Effort Expectancy
Effort expectancy (EE) is defined as “the degree of ease associated with the use of the system” ([27], p. 450). EE and its latent variable have been shown to be significant in many research studies and proven to work as a predictor of user intention to adopt new technology [26,34,35].
Hypothesis 2 (H2): 
EE will have a positive effect on students’ BI to use chatbots.
3. Social Influence
Social influence (SI) is defined as “the degree to which an individual perceives that important others believe he or she should use the new system” ([27], p. 451). SI was shown to be significant for specifying user intention to use technology in many studies [34,36,37].
Hypothesis 3 (H3): 
SI will have a positive effect on students’ BI to use chatbots.
4. Facilitating Condition
Facilitating condition (FC) is defined as “the degree to which an individual believes that an organizational and technical infrastructure exists to support the use of the system” ([27], p. 453).
Hypothesis 4 (H4): 
FC will have a positive effect on students’ BI to use chatbots.
5. Hedonic Motivation
Hedonic motivation (HM) is defined as “the fun or pleasure derived from using technology” ([26], p. 8). Studies have proven that HM plays a decisive role in determining technology acceptance and the use of technology [26,34,38].
Hypothesis 5 (H5): 
HM will have a positive effect on students’ BI to use chatbots.
6. Habit
Habit (HT) as a construct in UTAUT2 [26] is defined in the information systems and technology context as “the extent to which people tend to perform behaviours (use IS) automatically because of learning” [39]. HT can be described in two ways: as a prior behavior [39] or as an automatic behavior [39,40]. HT has a direct and indirect effect on technology use, according to the UTAUT2 model [26,34].
Hypothesis 6 (H6): 
HT will have a positive effect on students’ BI to use chatbots.
7. Behavioral Intention
Behavioral intention (BI) has been defined in prior research as a “function of both attitudes and subjective norms about the target behaviour, predicting actual behaviour” [41]. The strength of an individual’s commitment to engage in particular activities can be assessed by their BI [42].
Hypothesis 7 (H7): 
BI will have a positive effect on students’ use of chatbot technology.
8. The Moderating Effects of Personas on Technology Acceptance and Its Use
This study extends the moderator with more elements, which are now part of the persona moderator. An explanation of each moderator is provided below:
(i) Age: This is a moderator in UTAUT and UTAUT2. It has an impact on all seven core constructs that affect users’ intention to use and use of technology [43]. This study tests whether age moderates the effect of determinants on BI and the use of technology.
Hypothesis 8 (H8a1, a2, a3, a4, a5, a6): 
Age moderates the effects of PE, EE, SI, FC, HM, and HT on student BI and use of chatbot technology.
(ii) Gender: Like the age moderator, gender is a moderator in UTAUT and UTAUT2, and it also has an impact on all seven core constructs that affect users’ intention and use of technology [43]. This study will also test whether gender moderates the effect of determinants on BI and the use of technology.
Hypothesis 9 (H9b1, b2, b3, b4, b5, b6): 
Gender moderates the effects of PE, EE, SI, FC, HM, and HT on students’ BI and use of chatbot technology.
(iii) Experience: This is a moderator in the UTAUT and UTAUT2 models, where it is defined as mobile internet usage experience [43]. In this study, experience represents prior experience of using chatbots such as Siri or Amazon Alexa (as exemplars). This study will test whether experience moderates the effect of determinants on BI and the use of chatbot technology.
Hypothesis 10 (H10c1, c2, c3, c4, c5, c6): 
Experience moderates the effects of PE, EE, SI, FC, HM, and HT on students’ BI and use of chatbot technology.
(iv) Physical engagement (represented by attendance): This is a new moderator that stemmed from our proposed persona template/model (Figure 4), as shown in the Introduction. It is defined as an indicator of the participants’ behavioral engagement with the course being studied. This study tests whether attendance moderates the effect of determinants on BI and the use of technology.
Hypothesis 11 (H11e1, e2, e3, e4, e5, e6): 
Attendance moderates the effects of PE, EE, SI, FC, HM, and HT on students’ BI and use of chatbot technology.
(v) Virtual engagement (represented by the level of engagement with VLEs): This is a new moderator that stemmed from our proposed persona template/model (Figure 4), as shown in the Introduction. It is defined as an indicator of behavioral engagement with the computer science course. This study tests whether virtual engagement with VLEs moderates the effect of determinants on BI and the use of technology.
Hypothesis 12 (H12f1, f2, f3, f4, f5, f6): 
Virtual engagement with VLEs moderates the effects of PE, EE, SI, FC, HM, and HT on students’ BI and use of chatbot technology.
(vi) Educational level (year of study): This is a new moderator that represents the year of study for undergraduate students at Brunel University London; it stemmed from our proposed model (Figure 4). This study tests whether the year of study moderates the effect of determinants on BI and the use of technology.
Hypothesis 13 (H13g1, g2, g3, g4, g5, g6): 
Educational level moderates the effects of PE, EE, SI, FC, HM, and HT on students’ BI and use of chatbot technology.
(vii) Grade: This is a new moderator that represents the performance of the students, derived from our proposed model (Figure 4). This study tests whether grade moderates the effect of the determinants on BI and the use of technology.
Hypothesis 14 (H14d1, d2, d3, d4, d5, d6): 
Grade moderates the effects of PE, EE, SI, FC, HM, and HT on students’ BI and use of chatbot technology.

2. Materials and Methods

Sampling and Survey Administration

Before conducting the data collection, ethical approval was obtained from the ethical committee at Brunel University London. A survey was designed to achieve the aim of this study [44]. The survey was developed after reviewing state-of-the-art literature. It is important to carry out a pilot study before conducting the actual data collection in order to test the validity and reliability of the survey and to improve the format, questions, and scales [45]. A pilot study establishes the ability to answer the proposed research question and it provides face validity [46,47,48]. The sample size for the pilot study should be relatively small, a maximum of 100, according to [49]. In this case, a pilot study was carried out with 99 randomly selected computer science students. Some questions were updated and simplified following the participants’ comments. It took five months to design and build the final version of the survey; it was revised and reviewed by an expert and the researcher after the pilot study.
The adopted and extended model is referred to as the Extended UTAUT2 (Figure 6). The survey aimed to gather data on the students’ acceptance and use of chatbots at Higher Education Institutions (HEIs). The survey was divided into two sections. The first section contained questions related to demographic data and the moderators; it also included questions about the type of chatbots used, how long the participants had been using them, and their level of experience (Appendix A, Table A1). The second section contained questions related to the main determinants/constructs of UTAUT2, as mentioned in previous sections (PE, EE, SI, HM, FC, and HT), and to BI and USE (Appendix B, Table A2). The questions were supported by references.
The survey was created using the University of Bristol’s Bristol Online Survey tool, a free web-based survey platform. Participation was purely voluntary, and the participants were informed of the study’s purpose as well as their freedom to withdraw at any time. They were also assured that their data would be kept confidential and that their identities would not be revealed. The survey took less than 8 min to complete on average. The chance to win one of ten GBP 20 Amazon vouchers was offered as a participation prize to motivate people to complete the survey. A total of 431 students answered the survey, and all of the responses were complete. The Teaching Program Office (TPO) of the College of Engineering, Design and Physical Sciences sent weekly email reminders to undergraduate and postgraduate computer science students to complete the survey. The survey was password-protected so that it could be accessed only by the targeted respondents. All of the important questions were set as mandatory in order to guarantee that there were no missing data that would affect the data analysis, especially the analysis using SEM.
The scales used in this study were adapted from prior UTAUT2 investigations, with all items measured on a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). The items for each construct were taken from a previous study [26].

3. Results

This section covers the results of the analysis. The steps taken during the analysis are described in more detail in Appendix C (Table A6). It is important to mention that all the questions included in the survey were taken from the literature (Appendix A and Appendix B), where they have been tested and proven to be valid and reliable measures of the constructs they were intended to represent. More specifically, the survey items were adapted from UTAUT2 [26], which has been used in many studies to investigate user acceptance and use of different types of technology. The pilot study produced a few minor suggestions concerning the survey layout and question wording, which supports face validity. The pilot study data were analyzed to find any potential threats or drawbacks within the survey items, in order to decide whether to keep, delete, or amend each item. It took participants a maximum of 8 min to complete the survey, which was deemed reasonable, and this supported the content validity. Table 1 shows the result of the analysis of the pilot study data. The table shows Cronbach’s alpha values ranging from 0.842 for HT to 0.956 for SI, indicating that all constructs have outstanding reliability; that is, the measured variables used with each construct are positively correlated. The table also reports two internal consistency reliability indicators: inter-item correlation and item-to-total correlation. According to [50], the value of inter-item correlation should exceed 0.3, while item-to-total correlation should exceed 0.5. The results show that all constructs exceed the cut-off value for inter-item correlation except for the USE construct. After examining each item of USE, it was found that USE-5 had a lower inter-item correlation (0.197); hence, USE-5 was excluded from the survey.
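As a minimal sketch of these reliability checks (assuming the pilot responses for one construct sit in a pandas DataFrame whose columns are its Likert items), Cronbach’s alpha, inter-item correlation, and item-to-total correlation could be computed as follows; the thresholds in the comments are those cited above.

```python
# Hedged sketch of the pilot reliability analysis for a single construct.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def item_diagnostics(items: pd.DataFrame) -> pd.DataFrame:
    corr = items.corr()
    inter_item = (corr.sum() - 1) / (len(corr) - 1)   # mean correlation with the other items (should exceed 0.3)
    item_total = pd.Series({c: items[c].corr(items.drop(columns=c).sum(axis=1))
                            for c in items.columns})   # correlation with the rest of the scale (should exceed 0.5)
    return pd.DataFrame({"inter_item": inter_item, "item_total": item_total})

# Example usage (hypothetical file and column names):
# pilot = pd.read_csv("pilot_responses.csv")
# use_items = pilot[[f"USE{i}" for i in range(1, 10)]]
# print(cronbach_alpha(use_items))
# print(item_diagnostics(use_items))
```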

4. Preliminary Examination of the Main Study Data

This section provides an overview of the preliminary data analysis of the collected responses. A total of 431 responses were collected over five months from undergraduate and postgraduate computer science students. The analysis was performed using the Statistical Package for the Social Sciences (SPSS) version 25. The preliminary data analysis included data screening and dealing with missing data and outliers, in addition to testing for normality, homogeneity, and multicollinearity in the dataset. Moreover, it covered reliability analysis, descriptive analysis, and exploratory data analysis. The results of this analysis focus on understanding undergraduate and postgraduate students’ acceptance and use of chatbots. The descriptive statistics are shown in Appendix B (Table A2).
(a) Data screening and missing data: The answers in all the questionnaires were screened for missing values using descriptive statistics for every measured item in the questionnaire. For greater accuracy, we compared the collected answers with the expected responses from the original questionnaire. In data analysis, missing values are a critical problem that affects the results of a study. The situation is even more complicated with SEM [51], as some tools, such as AMOS, cannot work appropriately with missing data. Furthermore, several statistical procedures cannot be employed when there are missing values, such as the Chi-Square test, modification indices, and fit measures (e.g., the goodness-of-fit index). However, the initial screening in SPSS v 25 revealed that there were no missing data for the main elements of the model.
(b) Outliers: An outlier is defined as “observations with a unique combination of characteristics identifiable as distinctly different from the other observations” ([52], p. 73). It is critical to detect and treat outliers as they can strongly bias statistical tests and may affect the normality of the data [53]. A study by [53] suggests deleting extreme outliers while keeping mild outliers. According to [52], there are two types of outliers: multivariate outliers and univariate outliers. For both types in this study, the results showed that there were no extreme outlier values in the dataset that needed to be removed, and the few mild outliers were kept in the dataset.
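A sketch of how these two outlier checks could be reproduced outside SPSS is shown below; it assumes the measured items sit in a numeric pandas DataFrame (hypothetical file name) and uses an IQR rule for univariate outliers and the Mahalanobis distance for multivariate ones.

```python
# Hedged sketch of univariate (IQR rule) and multivariate (Mahalanobis) outlier checks.
import numpy as np
import pandas as pd
from scipy.stats import chi2

items = pd.read_csv("survey_responses.csv").select_dtypes("number")  # hypothetical file

# Univariate: beyond 1.5 * IQR is a mild outlier, beyond 3 * IQR an extreme one.
q1, q3 = items.quantile(0.25), items.quantile(0.75)
iqr = q3 - q1
extreme = ((items < q1 - 3 * iqr) | (items > q3 + 3 * iqr)).any(axis=1)

# Multivariate: squared Mahalanobis distance against a conservative chi-square cut-off.
centered = items - items.mean()
inv_cov = np.linalg.pinv(items.cov().values)
md2 = np.einsum("ij,jk,ik->i", centered.values, inv_cov, centered.values)
multivariate = md2 > chi2.ppf(0.999, df=items.shape[1])

print("extreme univariate cases:", int(extreme.sum()), "| multivariate cases:", int(multivariate.sum()))
```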
(c) Testing the normality assumption: In multivariate analysis, it is essential to examine the data for normality [50]. The reliability and validity of the data are affected when the data are not normally distributed. In this study, we used the Jarque–Bera (skewness–kurtosis) test to check the normality of the data. According to [54], skewness values represent the symmetry of the data distribution: the data are shifted to the left with a negative skew value and to the right with a positive skew value. The kurtosis value represents the height of the data distribution [54]: peaked distributions have positive values, while flatter distributions have negative values [54]. Ref. [53] recommended a normal range for skewness–kurtosis of ±2.58. As can be seen in Table A3 (Appendix B), all items in the dataset were normally distributed, except for EE (EE1, EE3), FC (FC1, FC2), and USE (USE1, USE2, and USE3), whose kurtosis values ranged from 3.182 to 5.743. The skewness values were in the range of +0.412 to −2.470. Table A3 (Appendix B) shows the mean, standard deviation, skewness, and kurtosis value for each item.
Table A4 (Appendix B) shows the results for the normal distribution of the data using the Kolmogorov–Smirnov test in SPSS v 25. The results indicate that the p-values for all measured variables are 0.000 (p < 0.05), confirming that the data are not normally distributed. Therefore, PLS methods were used in the analysis as they are robust to non-normally distributed data [55].
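The per-item normality checks described above can be approximated as follows (same hypothetical item DataFrame as before); note that SPSS applies a Lilliefors correction to the Kolmogorov–Smirnov test, so the plain one-sample version here is only a rough stand-in.

```python
# Hedged sketch of the per-item normality checks (skewness, kurtosis, one-sample K-S).
import pandas as pd
from scipy.stats import skew, kurtosis, kstest, zscore

items = pd.read_csv("survey_responses.csv").select_dtypes("number")  # hypothetical file

shape = pd.DataFrame({
    "skewness": items.apply(lambda s: skew(s, bias=False)),
    "kurtosis": items.apply(lambda s: kurtosis(s, bias=False)),  # excess kurtosis; 0 for a normal curve
})
print(shape[(shape.abs() > 2.58).any(axis=1)])                   # items outside the +/-2.58 guideline

ks_p = items.apply(lambda s: kstest(zscore(s), "norm").pvalue)
print(ks_p[ks_p < 0.05])                                         # p < 0.05: item departs from normality
```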
(d) Homogeneity of variance in the dataset: Homogeneity is defined as “the assumption of normality related with the supposition that dependent variable(s) display an equal variance across the number of an independent variable (s)” [53]. In multivariate analysis, it is critical to check for homogeneity of variance because its absence might cause invalid estimation of the standard errors [50]. Therefore, Levene’s test (SPSS v 25) was used to check for homogeneity of variance in the collected data, as shown in Table A5 (Appendix B). The results show that all constructs were significant (p < 0.05) when using gender as the non-metric variable in the independent sample t-test (Table A5, Appendix B). This result confirms the absence of homogeneity of variance in the collected data and suggests that variance in the proposed model is not equal across the two genders of the study cohort, i.e., male and female.
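Levene’s test can be reproduced per construct as sketched below, assuming construct scores (e.g., item means) and a gender column are available in the response file; the file and column names are illustrative.

```python
# Hedged sketch of Levene's test for homogeneity of variance across gender.
import pandas as pd
from scipy.stats import levene

df = pd.read_csv("survey_scores.csv")                      # hypothetical file with construct scores
male, female = df[df["gender"] == "male"], df[df["gender"] == "female"]

for construct in ["PE", "EE", "SI", "FC", "HM", "HT", "BI", "USE"]:
    stat, p = levene(male[construct], female[construct])
    print(f"{construct}: p = {p:.3f}")                     # p < 0.05: variances differ between groups
```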
(e) Multicollinearity: Multicollinearity appears when two or more variables are highly correlated with each other [54]; different scholars suggest different values as satisfactory. For example, according to [54], a correlation value of 0.7 or higher is a reason for concern, while [53] state that a correlation value over 0.8 is highly problematic. Two values are used to assess multicollinearity: the Variance Inflation Factor (VIF) and tolerance [54]. Multicollinearity is not a concern when the VIF is less than 3.0 and the tolerance value is greater than 0.1. The multicollinearity check was performed on the dataset using SPSS (v 25) with all the independent constructs, and the results show that there is no serious multicollinearity in the data, because the tolerance values for all constructs are greater than 0.1 and the VIF values are less than 3.0, except for the PE construct (PE has a VIF > 3.0 with all constructs except habit), with only a few values between three and five.
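The VIF/tolerance check can be sketched with statsmodels as follows, again assuming hypothetical construct-score columns; tolerance is simply the reciprocal of the VIF.

```python
# Hedged sketch of the multicollinearity check (VIF and tolerance) for the predictors of BI.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_scores.csv")                               # hypothetical file with construct scores
exog = sm.add_constant(df[["PE", "EE", "SI", "FC", "HM", "HT"]])

for i, name in enumerate(exog.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(exog.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1 / vif:.2f}")    # VIF < 3 and tolerance > 0.1 are acceptable
```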

4.1. Sample Descriptive Analysis

Profiles of Respondents

The descriptive analysis of the collected data using SPSS indicates that there were 233 (54.1%) male and 197 (45.7%) female participants. Participant ages were grouped into five levels, with 82.1% of the participants falling in the age groups of 18–21 and 22–25 years, and only 10% in the 26–29 years age group. The minority age groups were <18 and ≥30, with 3% and 4.9%, respectively. The target participants were either at undergraduate or postgraduate level; the majority of respondents were undergraduate students (94.2%), while only 5.8% were Master’s students. Also, the majority of students (97.7%) were full-time students, while 2.3% were part-time students. Undergraduate students were classified as follows: year one students were level 1, year two students were level 2, and placement and year three students were level 3. Over half (60%) of the respondents were at educational levels 1 and 2, while 40% were on placement or level 3. Regarding the distributions of students’ grades, the results revealed that 51% and 27.4% had been awarded grade As and Bs, respectively, while the minority of 21.6% had grades of Cs, Ds, and Fs or selected ‘not applicable/prefer not to say’.
In relation to user experience with chatbots, it is necessary to consider the chatbots being used. The survey questioned respondents on the types of chatbot they had used (Siri by Apple, Alexa by Amazon, Cortana by Microsoft, and Google Assistant by Google) (Appendix A). The participants were allowed to select more than one answer. The results show that Siri and Google Assistant were the two most popular chatbots amongst the students, while Cortana was the least popular chatbot. The results also show that other chatbots used by the students included Bixby by Samsung, S Voice, and Tmall Genie.
In terms of chatbot usage and frequency of use, the chatbot usage category revealed that the majority of students (77.3%) used chatbots, while 22.7% did not. The data on the frequency of use showed that 47.7% of students used chatbots daily or several times a day, while the rest (52.3%) used them weekly or once a month. The chatbot experience category shows that the majority of the participants had 1–3 years’ or 3–5 years’ experience with chatbots, with 59 (35.1%) and 35 (31.8%), respectively. Just 20.7% of the students had less than one year’s experience of using chatbots, while 4% had more than five years’ experience. Approximately 30% of the respondents had some level of experience of using chatbots—they had tried and used some basic functionalities—while only 5% of respondents were not experienced at all.

4.2. Descriptive Analysis of the Main Study

This section covers the descriptive analysis of the main constructs in Extended UTAUT2. Each was assessed using a seven-point Likert scale, as follows:
(i) Performance expectancy: To assess the PE construct, four items were employed, all of which were adapted from previous work on UTAUT2 [27,28], as shown in Table A2 (Appendix B). The means for the elements associated with PE range between 4.58 (±1.882) and 5.21 (±1.734). According to the findings, the chatbots aided the students in meeting their job performance goals.
(ii) Effort expectancy: Four items were employed to measure EE, all of which were adapted from UTAUT2 [26,27], as shown in Table A2 (Appendix B). The means for each item linked to the EE construct range between 5.59 (±1.383) and 6.11 (±1.099), indicating that the majority of the participants in this study agreed that chatbots are simple to use.
(iii) Social influence: Three items taken from UTAUT2 [26,27] were used to measure SI. The means for each item connected to the SI construct range between 3.12 (±1.716) and 3.18 (±1.678), indicating that most participants did not feel that important others (friends and relatives) believed they should use chatbots, as shown in Table A2 (Appendix B).
(iv) Facilitating condition: FC was measured by four items that were adopted from the work of [27,56,57]. As can be seen from Table A2 (Appendix B), the means of the four items range between 5.41 (±1.569) and 6.07 (±1.216), revealing agreement on how important technological resources are to chatbot use.
(v) Hedonic motivation: HM was measured by three items that were adopted from the work of [26,27]. Table A2 (Appendix B) shows that the means for the three items that measure the HM construct range between 5.43 (±1.478) and 5.50 (±1.458), which shows that the majority of the respondents enjoyed using chatbots.
(vi) Habit: The HT construct was measured by three items that were adopted from the work of [26,27]. Table A2 (Appendix B) presents the descriptive statistics of the HT construct. The means of the three measured variables HT1, HT2, and HT3 ranged between 3.80 (±2.325) and 4.62 (±2.209), which indicates that using a chatbot was not a habit for the students.
(vii) Behavioral intention: The BI construct was measured by three items that were adopted from [58,59,60,61]. Table A2 (Appendix B) provides a descriptive analysis of the BI construct. The means of the measured variables of BI ranged between 4.57 (±2.024) and 5.13 (±1.822). The results show that the students had a good level of agreement on BI.
(viii) Use: USE is a dependent construct in the UTAUT2 model proposed by [26]. Nine items were adopted from [26,62,63]. The descriptive analysis of the USE construct (Appendix B, Table A2) shows that the means of the measured variable for USE1 to USE9 ranged between 4.19 (±1.869) and 6.31 (±1.399). The majority of the mean values are greater than four, meaning that the students had a good level of agreement on this variable (Table A2 (Appendix B)).

4.3. Testing the Normality Assumption

4.3.1. Evaluating Sample Size

SPSS version 25 was used to conduct the analysis. The number of participants in this study is 431. To test whether this sample size is adequate for further analysis, specific tests were undertaken. The first test was to measure the sampling adequacy using KMO. KMO values range between 0 and 1. Values higher than 0.6 indicate a satisfactory sample size [64,65]. Table 2 shows the KMO value of 0.924; it indicates that the dataset is very suitable for further analysis (conceptual model). The second test is Bartlett’s Test. The Bartlett Test of Sphericity measures the relationship between variables. In the Bartlett Test, a p-value less than 0.05 is satisfactory [65], and in this study, it is less than 0.001, which means that the data are suitable for further analysis [66].
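Outside SPSS, both adequacy tests are available in the factor_analyzer package; the sketch below assumes the survey items are in a pandas DataFrame (hypothetical file) and is not the exact procedure reported above.

```python
# Hedged sketch of the sampling-adequacy checks (KMO and Bartlett's test of sphericity).
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

items = pd.read_csv("survey_responses.csv").select_dtypes("number")   # hypothetical file

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)

print(f"KMO = {kmo_overall:.3f}")        # > 0.6 indicates a satisfactory sample
print(f"Bartlett p = {p_value:.4f}")     # < 0.05 indicates the data are suitable for further analysis
```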

4.3.2. Model Testing/Evaluation

The reflective measurement model consists of several tests (Table 3), which include internal consistency reliability, indicator reliability, convergent validity, and discriminate validity [66]. In the first test, internal consistency reliability, a satisfactory value is higher than 0.7 [67].
(i) Internal consistency reliability and composite reliability: Usually, Cronbach’s alpha is used to test the internal consistency reliability of the measurement model. However, in PLS-SEM, the internal consistency reliability of the measurement model is evaluated using CR instead of Cronbach’s alpha [68]. Cronbach’s alpha is not suitable for PLS-SEM because it is sensitive to the number of items in the scale, and this measure is also found to generate severe underestimation when applied to PLS path models [68,69]. The composite reliability (CR) of PE, EE, SI, FC, HM, HT, BI, and USE is 0.934, 0.872, 0.952, 0.806, 0.934, 0.936, 0.938, and 0.696, respectively, indicating a high level of internal consistency reliability [68,70]. In exploratory research, satisfactory CR is achieved with a threshold level of 0.50 or higher [71], but not exceeding 0.95 [72]. Our AVE values are greater than 0.5, which is above the satisfactory criterion. The overall result values indicate that the convergent validity for all the constructs is satisfactory. Table 3 also shows that the model meets the requirement for discriminant validity efficiently. Therefore, the model could be used to test the causal relationships hypothesized.
(ii) Indicator reliability: This is examined to ensure that the indicators accurately represent their constructs, as indicator reliability is a condition for validity. The outer loading threshold is set at 0.4; any indicator with a value less than 0.4 is excluded from the model [68,72]. This was the case for USE2, USE3, USE6, and USE7. However, if the outer loading value ranges between 0.4 and 0.7, a loading relevance test is required to decide whether to retain or delete the indicator. Five measured variables were in the range between 0.4 and 0.7: EE1, FC2, USE1, USE4, and USE5, with values of 0.543, 0.461, 0.513, 0.535, and 0.681, respectively, as shown in Table 4 and Figure 7. The loading relevance test considers Cronbach’s alpha, CR, and AVE: weak indicators are deleted only if their removal raises the construct’s AVE and CR above the threshold (0.5). All of our indicators were retained, as their outer loadings exceeded the threshold [72], except for USE9; although its loading was above the threshold, deleting it improved the outer loadings of the other USE indicators.
(iii) Convergent Validity: The third test in the reflective measurement model is convergent validity. Convergent validity presents the model’s ability to explain the variance of the indicator. According to [73], AVE confirms convergent validity, which should be greater than 0.5 [67]. The AVE for the latent constructs BI, EE, FC, HM, HT, PE, SI, and USE was 0.834, 0.64, 0.523, 0.826, 0.83, 0.78, 0.872, and 0.374, respectively. All values are above the minimum threshold [68,71] except for USE. The CR for the latent constructs BI, EE, FC, HM, HT, PE, SI, and USE was 0.938, 0.872, 0.806, 0.934, 0.936, 0.934, 0.952, and 0.701, respectively. According to [50], the model confirms convergent validity when the AVE is greater than 0.5 and CR is higher than the AVE for all constructs [50,74]. This applies to all the constructs in this model, confirming convergent validity, as shown in Table 5.
(iv) Discriminant Validity: Discriminant validity is the last test in the measurement model. According to [67], the indicator loading value should be more than all of its cross-loadings. As shown in Table 4 and Table 5, all the indicator loadings are higher than their cross-loadings [66].
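Given standardized outer loadings, the composite reliability and AVE figures discussed in (i)–(iii) follow from two short formulas; the loadings below are illustrative, not the values from Table 4.

```python
# Hedged sketch: composite reliability (CR) and AVE from standardized outer loadings.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

example_loadings = [0.92, 0.90, 0.93]                    # hypothetical three-item construct
print(composite_reliability(example_loadings))           # > 0.7 satisfactory, preferably not above 0.95
print(average_variance_extracted(example_loadings))      # > 0.5 supports convergent validity
```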

4.4. Formative Measurement

After completing the reflective measurement test, the next step was to perform a formative measurement to assess the weight and loading of the indicator. According to [75], the indicators in the measurement model have no errors associated with them. Therefore, bootstrapping is used to estimate the significance of the indicators. In this study, SmartPLS3 used 5000 bootstrap samples before providing the report [66], which is shown in Figure 8.
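SmartPLS performs the resampling internally, but the logic can be illustrated with a toy bootstrap of a single path coefficient; simple regression stands in for the PLS path model and the data are synthetic.

```python
# Hedged sketch of bootstrap significance testing for one path coefficient.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_path(x, y, n_boot=5000):
    """Return the estimated slope, its bootstrap t-value, and a two-sided bootstrap p-value."""
    estimate = np.polyfit(x, y, 1)[0]
    n = len(x)
    slopes = np.array([np.polyfit(x[idx], y[idx], 1)[0]
                       for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    t_value = estimate / slopes.std(ddof=1)
    p_value = 2 * min((slopes <= 0).mean(), (slopes >= 0).mean())
    return estimate, t_value, p_value

x = rng.normal(size=431)              # synthetic predictor (e.g., a habit score)
y = 0.6 * x + rng.normal(size=431)    # synthetic outcome (e.g., behavioral intention)
print(bootstrap_path(x, y))
```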
The structural model: R-squared (R2) shows the ability of the model to explain the phenomena, as shown in Table 3 and Figure 7. The R2 values for BI and USE are 0.917 and 0.114, respectively; the model explains 91% of the variance in BI but only 11% of the variance in USE, which is very strong for the former and weak for the latter. An R2 of 38% can be considered significant [67].
(i) Hypothesis testing: Figure 8 shows the path coefficients after performing bootstrapping using SmartPLS3. As can be seen in Table 6, the results of the bootstrapping show that four hypotheses were supported, as follows: HT and BI (H6, p = 0.00); BI and USE (H7, p = 0.00); PE and BI (H1, p = 0.00); and EE and BI (H2, p = 0.018). However, three hypotheses were rejected: FC and BI (p = 0.071); HM and BI (p = 0.082); and SI and BI (p = 0.086). The four supported hypotheses will be used as the basis in iteration 3 to develop chatbot features.
(ii) Multiple group analysis: The following sections cover the moderators’ effects on the relationships in the proposed model (the sketch below illustrates the comparison logic). These moderators are age, gender, experience, attendance, interaction with VLEs, grade (performance), and educational level.
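A hedged approximation of the idea behind the SmartPLS multi-group comparison is sketched here: bootstrap the same path in each group and check how often one group’s coefficient exceeds the other’s, with simple regression standing in for the PLS model and synthetic data for the two groups.

```python
# Hedged sketch of a PLS-MGA-style comparison of one path coefficient across two groups.
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_slopes(x, y, n_boot=5000):
    n = len(x)
    return np.array([np.polyfit(x[idx], y[idx], 1)[0]
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))])

def mga_probability(x_a, y_a, x_b, y_b, n_boot=5000):
    """Share of bootstrap pairs where group A's coefficient exceeds group B's.
    Values below 0.05 or above 0.95 suggest the moderator changes the path."""
    return (bootstrap_slopes(x_a, y_a, n_boot) > bootstrap_slopes(x_b, y_b, n_boot)).mean()

# Synthetic example mimicking the low-age (A) and high-age (B) group sizes:
x_a, x_b = rng.normal(size=244), rng.normal(size=187)
y_a, y_b = 0.7 * x_a + rng.normal(size=244), 0.4 * x_b + rng.normal(size=187)
print(round(mga_probability(x_a, y_a, x_b, y_b), 3))
```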
(a) Multiple Group Analysis—age moderator: Age was separated into five groups in the questionnaire as follows: <18, 18–21, 22–25, 26–29, and ≥30 years old. Before conducting Multiple Group Analysis, age was divided into two levels: less than or equal to 21 years old and greater than 21 years old. Out of the 431 participants, 244 were in the low age group (LA), while 187 were in the high age group (HA). This section investigates whether age moderates the effects of EE, FC, HT, HM, PE, and SI on BI. To support the relationship, the p-value should be <0.05 or >0.95. Table 7 shows that age moderates the effect of some relationships: BI and USE (p = 0.959, supporting H8a7), and PE and BI (p = 0.959, supporting H8a1). However, age does not moderate the effects of the other relationships, as follows: EE and BI (p = 0.497, rejecting H8a2); FC and BI (p = 0.395, rejecting H8a4); HM and BI (p = 0.105, rejecting H8a5); HT and BI (p = 0.278, rejecting H8a6); and SI and BI (p = 0.307, rejecting H8a3).
(b) Multiple Group Analysis—gender moderator: Out of 431 respondents, there were 234 males and 197 females. As can be seen in Table 8, gender moderates the relationship between HT and BI (p = 0.978, supporting H9b6) and the relationship between EE and BI (p = 0.022, supporting H9b2). However, gender does not moderate the relationship between BI and USE (p = 0.766, rejecting H9b7); between FC and BI (p = 0.818, rejecting H9b4); between HM and BI (p = 0.508, rejecting H9b5); between PE and BI (p = 0.225, rejecting H9b1); or between SI and BI (p = 0.125, rejecting H9b3).
(c) Multiple Group Analysis—experience moderator: A descriptive analysis of the experience moderator shows four levels of experience. Two data categories were formed as follows: (1) no or low experience (NLE), referring to participants with no experience or a low level of experience of using a chatbot, numbering 238 out of 431; and (2) experienced participants (E) with some experience or a high level of experience of using chatbots, numbering 168 participants. As shown in Table 9, experience moderates the effects of two relationships, namely BI and USE (p = 0.95, supporting H10c7) and SI and BI (p = 0.95, supporting H10c3). However, experience does not moderate the relationship between EE and BI (p = 0.40, rejecting H10c2); FC and BI (p = 0.80, rejecting H10c4); HM and BI (p = 0.30, rejecting H10c5); HT and BI (p = 0.10, rejecting H10c6); or PE and BI (p = 0.80, rejecting H10c1).
(d) Multiple Group Analysis—attendance: Descriptive analysis of attendance shows that the attendance rate is very high. Two groups were created: low attendance (LA) and high attendance (HA). Attendance significantly moderates the relationship between BI and USE (p = 0.048, supporting H11d7), as shown in Table 10. However, attendance does not moderate the relationship between EE and BI (p = 0.688, rejecting H11d2), the relationship between FC and BI (p = 0.804, rejecting H11d4), HM and BI (p = 0.731, rejecting H11d5), HT and BI (p = 0.433, rejecting H11d6), PE and BI (p = 0.136, rejecting H11d1) or SI and BI (p = 0.718, rejecting H11d3). In the following Table, HA refers to the high-attendance group and LA refers to the low-attendance group.
(e) Multiple Group Analysis—engagement with VLEs: A descriptive analysis of the engagement with VLEs shows a high level of engagement. The mean was 6.5 out of 7. Therefore, engagement with VLEs was divided into two groups: low engagement (<6) with only 57 participants, and high engagement (6–7) with 374 participants. The results of the multiple group analysis are presented in Table 11. Engagement with VLEs significantly moderates the relationship between FC and BI (p = 0.964, supporting H11e4). However, it does not moderate the relationship between BI and USE (p = 0.405, rejecting H11e7); the relationship between EE and BI (p = 0.466, rejecting H11e2); HM and BI (p = 0.103, rejecting H12e5); HT and BI (p = 0.288, rejecting H11e6); PE and BI (p = 0.749, rejecting H11e1); or the relationship between SI and BI (p = 0.124, rejecting H11e3).
(f) Multiple Group Analysis—educational level: Based on a descriptive analysis of the educational level, two groups were created as follows: (1) low educational level, comprising level 1 and level 2, with 238 students; and (2) high educational level, comprising placement/level 3 students, with 158 students. The remaining respondents were Master’s students. Table 12 presents the results of the multi-group analysis. The results show that educational level has no moderating effect on any relationship, so all hypotheses were rejected. The results were as follows: the relationship between BI and USE (p = 0.87, rejecting H13F7); EE and BI (p = 0.71, rejecting H13F2); FC and BI (p = 0.81, rejecting H13F4); HM and BI (p = 0.11, rejecting H13F5); HT and BI (p = 0.81, rejecting H13F6); PE and BI (p = 0.36, rejecting H13F1); and SI and BI (p = 0.55, rejecting H13F3).
(g) Multiple Group Analysis—performance (grade): As indicated in Table 13, the results of the Multiple Group Analysis reveal that grade has no moderating influence on any relationship. This covers the relationships between BI and USE (p = 0.216, rejecting H14f7); EE and BI (p = 0.158, rejecting H14f2); FC and BI (p = 0.328, rejecting H14f4); HM and BI (p = 0.521, rejecting H14f5); HT and BI (p = 0.816, rejecting H14f6); and PE and BI (p = 0.336, rejecting H14f1).

5. Discussion

UTAUT2 has been used to evaluate students’ acceptance and use of technology in educational settings, with technology referring to the Learning Management System (LMS) [76], mobile-based educational applications [77], lecture capture systems [78], the MOOC platform [79], Google Classroom [80], the e-learning system [81], mobile E-textbooks [74], and mobile learning [82].
From a theoretical standpoint, this study has added to the literature base on technology adoption and acceptance models and theories by extending the UTAUT2 model to this new setting. This study examines the applicability of UTAUT2 in a fresh context (chatbots), with a new consumer (students), and in a new cultural setting (the United Kingdom), which is a significant step forward in the development of the theory. To our knowledge, no research has used UTAUT2 to examine students’ acceptance and use of chatbots in an educational setting, specifically in UK universities. This study aims to fill this gap by investigating the acceptability and use of chatbots by undergraduate and Master’s students at a UK university.
According to some prior studies [27,34,83], performance expectancy is a crucial prerequisite for chatbot usage intent. Chatbots are used to collect information, and the strongest reason for students’ future use of chatbots is that they fulfill the user’s needs. Performance expectancy is the key predictor of user adoption of technology in both mandatory and voluntary settings, according to Morosan et al. [84]. HEIs should think about how to create and develop these chatbots in order to provide students with a useful tool that will help them learn more successfully. This finding is in line with previous studies, such as [26,27,34,35], which found that PE has a positive effect on behavioral intention to use chatbots. It contradicts previous studies, such as [85], which found that PE has no effect on behavioral intention to use technology.
Effort expectancy is also an important requirement for chatbot usage intent [27]. Effort expectancy and its latent variable have been shown to be significant in many research studies and proven to work as a predictor of user intention to adopt new technology [26,34,35]. This result is in line with previous studies such as [26,27,34], but contradicts the findings of studies such as [35,85], where EE had no effect on behavioral intention to use chatbots.
A logical explanation for students’ future usage of chatbots is the fact that they provide them with answers to their questions in the minimum amount of time and in an easy way. Habit is a vital requirement for chatbot usage intent. Students who are familiar with chatbot technology have the habit of asking chatbots for certain information; therefore, they will be more willing to use chatbots to seek any type of information. The result of this study is in line with previous studies such as [26,27,85]. However, this result contradicts the findings of previous studies such as by [34], where habit had no effects on behavioral intention to use chatbots.
It is critical to offer specific advice regarding the function of personas in influencing students’ acceptance and use of chatbot technology in the context of online and multicultural teaching after COVID-19. Following the COVID-19 epidemic, there has been a paradigm shift in the educational scene that has resulted in an increase in online and multicultural teaching approaches. As researchers, we offer particular suggestions for how persona chatbots should be included in this changing educational environment. First and foremost, by offering individualized help and attending to each student’s unique needs, persona chatbots can improve the online learning experience by creating a more stimulating and encouraging virtual environment. It is important to carefully develop the personas that chatbots embody to reflect cultural diversity and make sure they speak to the experiences and backgrounds of a global student body. Personas should also be incorporated in accordance with educational goals, accommodating different learning methods and preferences. The deployment of chatbots should be followed by thorough training and orientation sessions to guarantee student acceptance. The relatability and efficacy of chatbots in a variety of educational contexts will be improved by this multicultural approach to provide an inclusive learning environment.
Secondly, interactive learning is important, as advocated by [86] in their study where they showed how active learning plays an important role. They emphasized the use of digital devices, particularly smart phones, as well as a range of technologies, such as LMSs, simulations, and modeling. They also demonstrated that a coherent approach to student-led interactive learning should be put into practice in real-world engineering courses. This innovative method uses the power of digital tools to improve learning overall while fostering a collaborative and engaging classroom environment. Moreover, persona chatbots are a useful tool for sustaining the momentum of online learning beyond COVID-19. They provide ongoing assistance to a wide range of learners and enable a smooth transition between in-person and virtual learning settings. These chatbots also have the ability to adjust to the changing demands of students, which helps to provide an inclusive and cutting-edge learning environment. Ultimately, the effectiveness and adoption of chatbots in virtual and multicultural learning environments are greatly influenced by the comprehension and incorporation of varied personalities in chatbot design.
In relation to user experience with chatbots, it is necessary to specify the chatbots being used and the types of interactions undertaken (active tutorship, adaptive learning, question and answer, self-assessments). In this study, the first part was covered in the survey, but the second part can be considered for future work in a different context. The authors of [87] suggest using an adaptive learning strategy to improve learning time and learner interest. This tactic involves tailoring learning routes according to each user’s past knowledge, using adaptive learning algorithms and an LMS platform. Through in vitro testing, the research seeks to validate the efficacy of this approach, with potential applications for businesses and organizations to maximize training [87].
A study by [88] investigates the integration of adaptive learning and data mining to improve e-learning with an emphasis on incorporating adaptive technologies into an open-source LMS. By evaluating data and customizing information to unique learning preferences and strengths, it allows for personalized learning routes for students. To optimize training efficacy, the system automates the selection of training materials. Plans for practical testing are included, along with a discussion of the difficulties in choosing input variables and methods. All things considered, the study provides insightful information and practical tips for enhancing e-learning with adaptive technology [88].

6. Conclusions

This study introduces a proposed extended UTAUT2 framework for understanding students’ acceptance and use of chatbots. A pilot study ensured the reliability and clarity of the survey questions. The study’s findings are twofold. Firstly, they elucidate the relationships between the exogenous factors (PE, EE, FC, SI, HM, and HT) and the endogenous factors (BI and USE). Secondly, the role of moderators in influencing the proposed relationships is explored, encompassing age, gender, experience, educational level, grade, attendance, and interactions with VLEs. Overall, the research underscores the influence of social and organizational aspects on students’ attitudes toward chatbot technology adoption and use. The results show that effort expectancy, performance expectancy, and habit emerged as pivotal predictors of student acceptance of and engagement with chatbot technology. Regarding the moderators, educational level and performance (grade) have no moderating effect on any relationship in the model, whereas age, experience, and attendance moderate the relationship between BI and USE and also moderate relationships involving PE, EE, and SI. Moderator importance could also inform design through inclusion in the persona design process.
Certain limitations warrant consideration, such as this study’s confined generalizability due to data collection being limited to a specific academic field (Computer Science) and geographic location (Brunel University London). To address this, future research could encompass diverse departments, universities, and global settings. Additionally, while this study predominantly utilized quantitative methods, incorporating qualitative approaches, such as interviews, could provide more comprehensive insights.
The predictive model remains open to refinement. Future investigations might incorporate additional constructs (security, trust, or system quality) and moderators (educational level or engagement level) to broaden the scope of chatbot utilization across various contexts. Embracing a mixed-methods approach, combining quantitative and qualitative methodologies, could enhance the depth of explanatory data gathered for research objectives.
Furthermore, with the emergence of ChatGPT, chatbots could have a wide-ranging effect on online learning, bringing both opportunities and challenges. ChatGPT will play an essential role in shaping how students accept and use chatbots. Its interactive and user-friendly interface makes it easy for students to use: it can answer questions, offer clarifications, and encourage student participation. Educational institutions can run awareness campaigns emphasizing ChatGPT’s advantages over more conventional teaching techniques in order to increase acceptance.
However, at the same time, it is important to recognize ChatGPT’s limitations. Its responses are generated from patterns inferred from data, which may include biases and errors. A further drawback is that ChatGPT may provide responses that are incorrectly contextualized because it lacks true comprehension [89]. To mitigate bias, regular updates, ongoing monitoring, and the integration of varied datasets are recommended. Integrating human oversight, which combines the strengths of AI with human expertise to ensure accurate and contextually relevant information, is one possible way to address this issue. To create a supportive and ethically sound learning environment, it is crucial to strike a balance between exploiting ChatGPT’s advantages and addressing its drawbacks.

Author Contributions

Conceptualization, F.A.A.j.A. and D.B.; methodology, F.A.A.j.A.; software, F.A.A.j.A.; investigation, F.A.A.j.A.; writing—original draft preparation, F.A.A.j.A.; data curation, F.A.A.j.A.; writing—review and editing, F.A.A.j.A., D.B. and Z.G.; supervision, F.A.A.j.A. and D.B.; project administration, F.A.A.j.A.; funding acquisition, F.A.A.j.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University of Technology and Applied Sciences, Salalah, Oman, and the Ministry of Higher Education, Research and Innovation, Oman.

Institutional Review Board Statement

This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Office of the Human Research Ethics Committee of Brunel University London, UK.

Informed Consent Statement

Informed consent was obtained from all participants involved in this study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Persona moderator (age, gender, educational level, experience, interactions with VLEs, attendance and grade).
Age (adapted from [81,82,90,91]): <18 | 18–21 | 22–25 | 26–29 | ≥30
Gender (from [81,82,90]): Male | Female
Type of study: Full-time | Part-time
Degree: Master’s student | Undergraduate student
Educational level (Master’s students only, adapted from [81,82,90,91]): First year | Second year
Educational level (undergraduate students only) [81,82,90,91]: Level 1 | Level 2 | Placement | Level 3
Do you use a chatbot? (adapted from [89]): Yes | No
How long have you been using a chatbot? (adapted from [89]): Less than a year | A year or more and less than 3 years | Three years or more and less than five years | 5 years or more
How often do you use a chatbot? (adapted from [89]): Daily | Weekly | Once a month | Several times a year
Experience using chatbots (adapted from [81,89]): No experience | Some experience—I have tested and tried some basic functionality of chatbots (i.e., Siri) | Experienced—I have tested and used advanced applications and content on chatbots | Very experienced—I have developed and tested several chatbots
Select all the chatbots that you have used: Siri by Apple | Alexa by Amazon | Cortana by Microsoft | Google Assistant by Google | None of the above | Other, please specify

Appendix B

Table A2. UTAUT2 Constructs and descriptive statistics.
Descriptive Statistics
Item | N | Mean | Std. Deviation
Performance Expectancy [26]
PE1. I find chatbot/s useful in my daily life. | 431 | 5.21 | 1.734
PE2. Using chatbot/s increases my chances of achieving things that are important to me. | 431 | 4.58 | 1.882
PE3. Using chatbot/s helps me accomplish things more quickly. | 431 | 5.18 | 1.704
PE4. Using chatbot/s increases my productivity. | 431 | 4.81 | 1.796
Effort Expectancy [26]
EE1. Learning how to use a chatbot is easy for me. | 431 | 6.11 | 1.099
EE2. My interaction with a chatbot is clear and understandable. | 431 | 5.59 | 1.383
EE3. I find chatbot/s easy to use. | 431 | 6.00 | 1.210
EE4. It is easy for me to become skilful at using a chatbot. | 431 | 5.84 | 1.291
Social Influence [26]
SI1. People who are important to me think that I should use a chatbot. | 431 | 3.15 | 1.793
SI2. People who influence my behavior think that I should use a chatbot. | 431 | 3.18 | 1.678
SI3. People whose opinions that I value prefer that I use a chatbot. | 431 | 3.12 | 1.716
Facilitating Condition [26]
FC1. I have the resources necessary to use a chatbot. | 431 | 6.03 | 1.347
FC2. I have the knowledge necessary to use a chatbot. | 431 | 6.07 | 1.216
FC3. A chatbot is compatible with other technologies I use. | 431 | 5.85 | 1.368
FC4. I can get help from others when I have difficulties using a chatbot. | 431 | 5.41 | 1.569
Hedonic Motivation [26]
HM1. Using a chatbot is fun. | 431 | 5.50 | 1.458
HM2. Using a chatbot is enjoyable. | 431 | 5.43 | 1.478
HM3. Using a chatbot is very entertaining. | 431 | 5.45 | 1.576
Habit [26]
HT1. The use of chatbot/s has become a habit for me. | 431 | 4.62 | 2.209
HT2. I am addicted to using a chatbot. | 431 | 3.80 | 2.325
HT3. I must use a chatbot. | 431 | 3.83 | 2.448
Behavioral Intention [26]
BI1. I intend to continue using a chatbot in the future. | 431 | 5.13 | 1.822
BI2. I will always try to use a chatbot in my daily life. | 431 | 4.57 | 2.024
BI3. I plan to continue to use a chatbot frequently. | 431 | 4.84 | 1.986
USE adapted from [26]; Scale adapted from [92]
US1. Browse websites | 431 | 6.31 | 1.399
US2. Search engine | 431 | 6.17 | 1.252
US3. Mobile e-mail (i.e., Brunel email) | 431 | 5.85 | 1.409
US4. SMS (Short Messaging Service) | 431 | 5.34 | 1.685
US5. MMS (Multimedia Messaging Service) | 431 | 4.67 | 2.076
US6. Blackboard access | 431 | 5.20 | 1.577
US7. An online check of study timetable | 431 | 5.09 | 1.616
US8. Events reminders setting on mobile phone | 431 | 4.87 | 1.727
US9. University event or workshop check | 431 | 4.19 | 1.869
Table A3. Skewness and kurtosis.
Normality
Item | N | Mean | Std. Deviation | Skewness (Statistic) | Skewness (Std. Error) | Kurtosis (Statistic) | Kurtosis (Std. Error)
PE1. I find chatbot/s useful in my daily life. | 431 | 5.21 | 1.734 | −0.813 | 0.118 | −0.277 | 0.235
PE2. Using chatbot/s increases my chances of achieving things that are important to me. | 431 | 4.58 | 1.882 | −0.400 | 0.118 | −1.066 | 0.235
PE3. Using chatbot/s helps me accomplish things more quickly. | 431 | 5.18 | 1.704 | −0.912 | 0.118 | −0.057 | 0.235
PE4. Using chatbot/s increases my productivity. | 431 | 4.81 | 1.796 | −0.562 | 0.118 | −0.793 | 0.235
EE1. Learning how to use a chatbot is easy for me. | 431 | 6.11 | 1.099 | −1.832 | 0.118 | 4.539 | 0.235
EE2. My interaction with a chatbot is clear and understandable. | 431 | 5.59 | 1.383 | −1.212 | 0.118 | 1.323 | 0.235
EE3. I find chatbot/s easy to use. | 431 | 6.00 | 1.210 | −1.667 | 0.118 | 3.182 | 0.235
EE4. It is easy for me to become skilful at using a chatbot. | 431 | 5.84 | 1.291 | −1.422 | 0.118 | 2.207 | 0.235
SI1. People who are important to me think that I should use a chatbot. | 431 | 3.15 | 1.793 | 0.412 | 0.118 | −0.747 | 0.235
SI2. People who influence my behaviour think that I should use a chatbot. | 431 | 3.18 | 1.678 | 0.389 | 0.118 | −0.664 | 0.235
SI3. People whose opinions that I value prefer that I use a chatbot. | 431 | 3.12 | 1.716 | 0.410 | 0.118 | −0.760 | 0.235
FC1. I have the resources necessary to use a chatbot. | 431 | 6.03 | 1.347 | −1.797 | 0.118 | 3.330 | 0.235
FC2. I have the knowledge necessary to use a chatbot. | 431 | 6.07 | 1.216 | −2.075 | 0.118 | 5.308 | 0.235
FC3. A chatbot is compatible with other technologies I use. | 431 | 5.85 | 1.368 | −1.494 | 0.118 | 2.120 | 0.235
FC4. I can get help from others when I have difficulties using a chatbot. | 431 | 5.41 | 1.569 | −0.951 | 0.118 | 0.320 | 0.235
HM1. Using a chatbot is fun. | 431 | 5.50 | 1.458 | −1.236 | 0.118 | 1.304 | 0.235
HM2. Using a chatbot is enjoyable. | 431 | 5.43 | 1.478 | −1.148 | 0.118 | 1.065 | 0.235
HM3. Using a chatbot is very entertaining. | 431 | 5.45 | 1.576 | −1.100 | 0.118 | 0.708 | 0.235
HT1. The use of chatbot/s has become a habit for me. | 431 | 4.62 | 2.209 | −0.485 | 0.118 | −1.240 | 0.235
HT2. I am addicted to using a chatbot. | 431 | 3.80 | 2.325 | −0.043 | 0.118 | −1.650 | 0.235
HT3. I must use a chatbot. | 431 | 3.83 | 2.448 | 0.043 | 0.118 | −1.662 | 0.235
BI1. I intend to continue using a chatbot in the future. | 431 | 5.13 | 1.822 | −0.969 | 0.118 | 0.015 | 0.235
BI2. I will always try to use a chatbot in my daily life. | 431 | 4.57 | 2.024 | −0.485 | 0.118 | −1.027 | 0.235
BI3. I plan to continue to use a chatbot frequently. | 431 | 4.84 | 1.986 | −0.687 | 0.118 | −0.699 | 0.235
US1. Browse websites | 431 | 6.31 | 1.399 | −2.470 | 0.118 | 5.675 | 0.235
US2. Search engine | 431 | 6.17 | 1.252 | −2.269 | 0.118 | 5.743 | 0.235
US3. Mobile e-mail (i.e., Brunel email) | 431 | 5.85 | 1.409 | −1.862 | 0.118 | 3.845 | 0.235
US4. SMS (Short Messaging Service) | 431 | 5.34 | 1.685 | −1.111 | 0.118 | 0.585 | 0.235
US5. MMS (Multimedia Messaging Service) | 431 | 4.67 | 2.076 | −0.624 | 0.118 | −0.909 | 0.235
US6. Blackboard access | 431 | 5.20 | 1.577 | −1.079 | 0.118 | 0.894 | 0.235
US7. An online check of study timetable | 431 | 5.09 | 1.616 | −0.936 | 0.118 | 0.449 | 0.235
US8. Events reminders setting on mobile phone | 431 | 4.87 | 1.727 | −0.579 | 0.118 | −0.425 | 0.235
US9. University event or workshop check | 431 | 4.19 | 1.869 | −0.191 | 0.118 | −0.951 | 0.235
Valid N (listwise) | 431
Table A4. Normality of data.
Item | N | Mean | Std. Deviation | Absolute | Positive | Negative | Test Statistic | Asymp. Sig. (2-Tailed)
Note: the Mean and Std. Deviation columns are the normal parameters; the Absolute, Positive, and Negative columns are the most extreme differences.
PE1. I find chatbot/s useful in my daily life. | 431 | 5.21 | 1.734 | 0.192 | 0.151 | −0.192 | 0.192 | 0.000
PE2. Using chatbot/s increases my chances of achieving things that are important to me. | 431 | 4.58 | 1.882 | 0.204 | 0.114 | −0.204 | 0.204 | 0.000
PE3. Using chatbot/s helps me accomplish things more quickly. | 431 | 5.18 | 1.704 | 0.224 | 0.142 | −0.224 | 0.224 | 0.000
PE4. Using chatbot/s increases my productivity. | 431 | 4.81 | 1.796 | 0.197 | 0.111 | −0.197 | 0.197 | 0.000
EE1. Learning how to use a chatbot is easy for me. | 431 | 6.11 | 1.099 | 0.266 | 0.210 | −0.266 | 0.266 | 0.000
EE2. My interaction with a chatbot is clear and understandable. | 431 | 5.59 | 1.383 | 0.246 | 0.154 | −0.246 | 0.246 | 0.000
EE3. I find chatbot/s easy to use. | 431 | 6.00 | 1.210 | 0.270 | 0.203 | −0.270 | 0.270 | 0.000
EE4. It is easy for me to become skilful at using a chatbot. | 431 | 5.84 | 1.291 | 0.267 | 0.185 | −0.267 | 0.267 | 0.000
SI1. People who are important to me think that I should use a chatbot. | 431 | 3.15 | 1.793 | 0.156 | 0.156 | −0.152 | 0.156 | 0.000
SI2. People who influence my behaviour think that I should use a chatbot. | 431 | 3.18 | 1.678 | 0.179 | 0.179 | −0.161 | 0.179 | 0.000
SI3. People whose opinions that I value prefer that I use a chatbot. | 431 | 3.12 | 1.716 | 0.182 | 0.182 | −0.148 | 0.182 | 0.000
FC1. I have the resources necessary to use a chatbot. | 431 | 6.03 | 1.347 | 0.264 | 0.236 | −0.264 | 0.264 | 0.000
FC2. I have the knowledge necessary to use a chatbot. | 431 | 6.07 | 1.216 | 0.287 | 0.222 | −0.287 | 0.287 | 0.000
FC3. A chatbot is compatible with other technologies I use. | 431 | 5.85 | 1.368 | 0.267 | 0.201 | −0.267 | 0.267 | 0.000
FC4. I can get help from others when I have difficulties using a chatbot. | 431 | 5.41 | 1.569 | 0.212 | 0.156 | −0.212 | 0.212 | 0.000
Facilitating Condition | 431 | 5.8411 | 1.11281 | 0.186 | 0.149 | −0.186 | 0.186 | 0.000
HM1. Using a chatbot is fun. | 431 | 5.50 | 1.458 | 0.267 | 0.152 | −0.267 | 0.267 | 0.000
HM2. Using a chatbot is enjoyable. | 431 | 5.43 | 1.478 | 0.241 | 0.144 | −0.241 | 0.241 | 0.000
HM3. Using a chatbot is very entertaining. | 431 | 5.45 | 1.576 | 0.227 | 0.162 | −0.227 | 0.227 | 0.000
PV1. A chatbot is reasonably priced. | 431 | 4.17 | 1.946 | 0.145 | 0.118 | −0.145 | 0.145 | 0.000
PV2. A chatbot is good value for the money. | 431 | 4.10 | 1.890 | 0.129 | 0.129 | −0.128 | 0.129 | 0.000
PV3. At the current price, the chatbot provides good value. | 431 | 4.19 | 1.907 | 0.124 | 0.122 | −0.124 | 0.124 | 0.000
Price Value | 431 | 4.1516 | 1.82566 | 0.108 | 0.108 | −0.103 | 0.108 | 0.000
HT1. The use of chatbot/s has become a habit for me. | 431 | 4.62 | 2.209 | 0.209 | 0.141 | −0.209 | 0.209 | 0.000
HT2. I am addicted to using a chatbot. | 431 | 3.80 | 2.325 | 0.211 | 0.206 | −0.211 | 0.211 | 0.000
HT3. I must use a chatbot. | 431 | 3.83 | 2.448 | 0.208 | 0.208 | −0.188 | 0.208 | 0.000
BI1. I intend to continue using a chatbot in the future. | 431 | 5.13 | 1.822 | 0.216 | 0.153 | −0.216 | 0.216 | 0.000
BI2. I will always try to use a chatbot in my daily life. | 431 | 4.57 | 2.024 | 0.198 | 0.115 | −0.198 | 0.198 | 0.000
BI3. I plan to continue to use a chatbot frequently. | 431 | 4.84 | 1.986 | 0.205 | 0.139 | −0.205 | 0.205 | 0.000
Behavioral Intention | 431 | 4.8507 | 1.83369 | 0.166 | 0.121 | −0.166 | 0.166 | 0.000
US1. Browse websites | 431 | 6.31 | 1.399 | 0.385 | 0.311 | −0.385 | 0.385 | 0.000
US2. Search engine | 431 | 6.17 | 1.252 | 0.282 | 0.253 | −0.282 | 0.282 | 0.000
US3. Mobile e-mail (i.e., Brunel email) | 431 | 5.85 | 1.409 | 0.241 | 0.207 | −0.241 | 0.241 | 0.000
US4. SMS (Short Messaging Service) | 431 | 5.34 | 1.685 | 0.217 | 0.162 | −0.217 | 0.217 | 0.000
US5. MMS (Multimedia Messaging Service) | 431 | 4.67 | 2.076 | 0.200 | 0.130 | −0.200 | 0.200 | 0.000
US6. Blackboard access | 431 | 5.20 | 1.577 | 0.196 | 0.127 | −0.196 | 0.196 | 0.000
US7. An online check of study timetable | 431 | 5.09 | 1.616 | 0.180 | 0.119 | −0.180 | 0.180 | 0.000
US8. Events reminders setting on mobile phone | 431 | 4.87 | 1.727 | 0.161 | 0.109 | −0.161 | 0.161 | 0.000
US9. University event or workshop check | 431 | 4.19 | 1.869 | 0.128 | 0.097 | −0.128 | 0.128 | 0.000
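For readers who wish to reproduce this kind of item-level screening, the following is a minimal sketch (in Python, not the SPSS workflow used in this study) of how the descriptive statistics, skewness, kurtosis, and a one-sample Kolmogorov–Smirnov normality test reported in Tables A2–A4 could be computed per item. The file name and column names (e.g., PE1) are illustrative assumptions.

```python
# Minimal sketch: item-level descriptives, skewness/kurtosis, and a one-sample
# Kolmogorov-Smirnov test against a normal distribution parameterised by the
# sample mean and SD. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")   # assumed file: one column per 7-point Likert item

items = ["PE1", "PE2", "PE3", "PE4"]       # assumed item columns; extend to EE, SI, FC, HM, HT, BI, USE
rows = []
for col in items:
    x = df[col].dropna()
    ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    rows.append({
        "item": col,
        "N": len(x),
        "mean": round(x.mean(), 2),
        "sd": round(x.std(ddof=1), 3),
        "skew": round(x.skew(), 3),        # pandas uses the adjusted (SPSS-style) estimator
        "kurtosis": round(x.kurt(), 3),
        "KS statistic": round(ks.statistic, 3),
        "p": round(ks.pvalue, 3),
    })
print(pd.DataFrame(rows).to_string(index=False))
```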

Appendix C

Table A6. Analysis steps.
Analysis Step | Aim | Description
Cronbach’s Alpha, inter-item correlation and item-to-total correlation for the pilot study | To measure the consistency of the variables used with each construct and to ensure that all constructs have the required reliability | The value of inter-item correlation should exceed 0.3, while item-to-total correlation should exceed 0.5.
Descriptive statistics | Overview of the preliminary data analysis of the collected data | Acquire further details about the collected data (descriptive—frequencies).
(a) Data screening and missing data | To ensure no missing values in the collected data | Missing values prove problematic when using SEM.
(b) Outliers | To identify any outlier values, as they bias the statistical test | It is critical to detect and treat outliers as they bias statistical tests and may affect the normality of the data [53].
(c) Testing the normality assumption | To ensure that data are normally distributed | The reliability and validity of the data are affected when the data are not normally distributed.
(d) Homogeneity of variance in the dataset | Homogeneity is defined as “the assumption of normality related with the supposition that dependent variable(s) display an equal variance across the number of an independent variable(s)” [53] | In multivariate analysis, it is critical to check for homogeneity of variance because its absence might cause invalid estimation of the standard errors [67].
(e) Multicollinearity | Multicollinearity appears when two or more variables are highly correlated with each other [54] | Different scholars have suggested different values as satisfactory. For example, according to [54], a correlation value of 0.7 or higher is a reason for concern, while [53] state that a correlation value over 0.8 is highly problematic.
Descriptive analysis of the main study | Providing a foundational understanding of the data at hand | Used to understand data distribution and summarize large datasets.
Evaluating sample size using KMO | To test whether the sample size is adequate for further analysis | KMO values range between 0 and 1. Values higher than 0.6 indicate a satisfactory sample size [64,65].
Internal consistency reliability and composite reliability | In the Partial Least Squares Structural Equation Modeling (PLS-SEM) approach, the internal consistency reliability of the measurement model is evaluated using composite reliability (CR) instead of Cronbach’s alpha [68] | In exploratory research, satisfactory composite reliability is achieved with a threshold level of 0.60 or higher, according to [71].
Indicator reliability | To ensure that the latent variables accurately represent the constructs, indicator reliability is examined as a condition for validity | The outer loading threshold is set at 0.4; therefore, any indicator with a value less than 0.4 is excluded from the model [68,72].
Convergent validity | Convergent validity reflects the model’s ability to explain the variance of its indicators | As per [73], average variance extracted (AVE) confirms convergent validity, which is satisfactory at values greater than 0.5 [67].
Discriminant validity | To ensure the measures are truly reflective of the unique constructs they are intended to assess, thus supporting the reliability, accuracy, and theoretical integrity of the research findings | According to [67], the indicator loading value should be greater than all of its cross-loadings.
Formative measure/structural model using R2 | To show the ability of the model to explain the phenomena | R-squared (R2) is used to achieve this.
Multiple group analysis | To study the moderators’ effects on the relationships in the proposed model | These moderators are age, gender, experience, attendance, interaction with VLEs, performance (grade), and educational level.
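As an illustration of the reliability and validity checks listed above, the sketch below shows how Cronbach’s alpha can be computed from raw item scores and how composite reliability (CR) and average variance extracted (AVE) follow from standardised outer loadings. It is a minimal Python sketch, not the SmartPLS3 procedure used in this study; the data file and column names are assumptions, while the PE loadings are taken from Table 4 (they reproduce the CR of 0.934 and AVE of 0.78 reported for PE in Table 3).

```python
# Minimal sketch of the reliability/validity quantities in Tables 1 and 3.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of items measuring one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared loadings."""
    return (loadings**2).mean()

# Hypothetical raw data; PE1-PE4 are assumed column names.
df = pd.read_csv("survey_responses.csv")
print("alpha(PE):", round(cronbach_alpha(df[["PE1", "PE2", "PE3", "PE4"]]), 3))

pe_loadings = np.array([0.896, 0.888, 0.848, 0.899])   # PE1-PE4 outer loadings from Table 4
print("CR(PE): ", round(composite_reliability(pe_loadings), 3))   # ~0.934, as in Table 3
print("AVE(PE):", round(ave(pe_loadings), 3))                     # ~0.78, as in Table 3
```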

References

  1. Knill, O.; Carlsson, J.; Chi, A.; Lezama, M. An Artificial Intelligence Experiment in College Math Education. 2004. Available online: http://www.math.harvard.edu/~knill/preprints/sofia.pdf (accessed on 25 May 2024).
  2. Almahri, F.; Bell, D.; Arzoky, M. Augmented Education within a Physical SPACE; UK Academy for Information Systems: Oxford, UK, 2019; pp. 1–12. [Google Scholar]
  3. Vaishnavi, V.; Kuechler, B. Design Science Research in Information Systems; Association for Information Systems; Springer: Berlin/Heidelberg, Germany, 2004; p. 45. [Google Scholar] [CrossRef]
  4. Cooper, A. The Inmates Are Running the Asylum; Sams Publishing: Carmel, IN, USA, 2004. [Google Scholar] [CrossRef]
  5. Putnam, C.; Kolko, B.; Wood, S. Communicating about users in ICTD. In Proceedings of the Fifth International Conference on Information and Communication Technologies and Development—ICTD ’12, Atlanta, GA, USA, 12–15 March 2012; p. 338. [Google Scholar] [CrossRef]
  6. Cabrero, D.G. Participatory design of persona artefacts for user eXperience in non-WEIRD cultures. In Proceedings of the 13th Participatory Design Conference on Short Papers, Industry Cases, Workshop Descriptions, Doctoral Consortium Papers, and Keynote Abstracts—PDC ’14, Windhoek, Namibia, 6–10 October 2014; Volume 2, pp. 247–250. [Google Scholar] [CrossRef]
  7. Vredenburg, K.; Mao, J.-Y.; Smith, P.W.; Carey, T. A survey of User-Centered Design Practice. In Proceedings of the SIGCHI Conference, Minneapolis, MN, USA, 20–25 April 2002; p. 471. [Google Scholar] [CrossRef]
  8. Miaskiewicz, T.; Kozar, K.A. Personas and user-centered design: How can personas benefit product design processes? Des. Stud. 2011, 32, 417–430. [Google Scholar] [CrossRef]
  9. Gulliksen, J.; Göransson, B.; Boivie, I.; Blomkvist, S.; Persson, J.; Cajander, Å. Key principles for user-centred systems design. Behav. Inf. Technol. 2003, 22, 397–409. [Google Scholar] [CrossRef]
  10. Dahl, D.W.; Chattopadhyay, A.; Gorn, G.J. The Use of Visual Mental Imagery in New Product Design. J. Mark. Res. 2006, 36, 18. [Google Scholar] [CrossRef]
  11. Friess, E. Personas and decision making in the design process. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems—CHI ’12, Austin, TX, USA, 5–10 May 2012; p. 1209. [Google Scholar] [CrossRef]
  12. McGinn, J.; Kotamraju, N. Data-driven persona development. In Proceeding of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems—CHI ’08, Florence, Italy, 5–10 April 2008; p. 1521. [Google Scholar] [CrossRef]
  13. Adlin, T.; Pruitt, J.; Goodwin, K.; Hynes, C.; McGrane, K.; Rosenstein, A.; Muller, M.J. Panel: Putting Personas to Work. In Proceedings of the CHI EA ‘06: CHI ‘06 Extended Abstracts on Human Factors in Computing Systems, Montréal, QC, Canada, 22–27 April 2006; pp. 13–16. [Google Scholar] [CrossRef]
  14. Blooma, J.; Methews, N.; Nelson, L. Proceedings of the 4th International Conference on Information Systems Management and Evaluation; Academic Conferences and Publishing International Limited: Reading, UK, 2013. [Google Scholar] [CrossRef]
  15. Nielsen, L.; Hansen, K.S.; Stage, J.; Billestrup, J. A template for design personas: Analysis of 47 persona descriptions from Danish industries and organizations. Int. J. Sociotechnol. Knowl. Dev. 2015, 7, 45–61. [Google Scholar] [CrossRef]
  16. Blomquist, Å.; Arvola, M. Personas in action: Ethnography in an interaction design team. In Proceedings of the NordiCHI ’02: Proceedings of the Second Nordic Conference on Human-Computer Interaction, Aarhus, Denmark, 19–23 October 2002; p. 197. [Google Scholar]
  17. Sinha, R. Persona development for information-rich domains. In Proceedings of the CHI ’03: CHI ’03 Extended Abstracts on Human Factors in Computing Systems, Ft. Lauderdale, FL, USA, 5–10 April 2003; pp. 830–831. [Google Scholar] [CrossRef]
  18. Salminen, J.; Sengün, S.; Jung, S.-G.; Jansen, B.J. Design Issues in Automatically Generated Persona Profiles: A Qualitative Analysis from 38 Think-Aloud Transcripts. In Proceedings of the 2019 Conference on Human Information Interaction and Retrieval, Glasgow, UK, 10–14 March 2019; pp. 225–229. [Google Scholar] [CrossRef]
  19. Shafeie, S.; Mohamed, M.; Issa, T.B.; Chaudhry, B.M. Using Machine Learning to Model Potential Users with Health Risk Concerns Regarding Microchip Implants. In Proceedings of the International Conference on Human-Computer Interaction, Copenhagen, Denmark, 23–28 July 2023; Springer Nature Switzerland: Cham, Switzerland, 2023; pp. 574–592. [Google Scholar]
  20. Pruitt, J.; Grundin, J. Personas: Practice and Theory. In Proceedings of the 2003 Conference on Designing for User Experiences, San Francisco, CA, USA, 6–7 June 2003; pp. 1–15. [Google Scholar] [CrossRef]
  21. Nieters, J.; Ivaturi, S.; Ahmed, I. Making Personas Memorable. In Proceedings of the CHI’07 Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; pp. 1817–1823. [Google Scholar]
  22. Lee, M.K.; Kiesler, S.; Forlizzi, J. Receptionist or information kiosk: How do people talk with a robot ? In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work—CSCW ’10, Savannah, GA, USA, 6–10 February 2010; p. 31. [Google Scholar] [CrossRef]
  23. Vandenberghe, B. Bot personas as off-the-shelf users. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 782–789. [Google Scholar]
  24. Zhang, X.; Brown, H.-F.; Shankar, A. Data-Driven Personas: Constructing Archetypal Users with Clickstreams and User Telemetry. In Proceedings of the 2016 CHI Conference on Human Factors in Computing System, San Jose, CA, USA, 7–12 May 2016. [Google Scholar] [CrossRef]
  25. Ketamo, H.; Kiili, K.; Alajääski, J. Reverse market segmentation with personas. In Proceedings of the WEBIST 2010—Proceedings of the 6th International Conference on Web Information Systems and Technology—Volume 2: WEBIST, Valencia, Spain, 7–10 April 2010; pp. 63–68. [Google Scholar] [CrossRef]
  26. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  27. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  28. Carter, L.; Weerakkody, V. E-government adoption: A cultural comparison. Inf. Syst. Front. 2008, 10, 473–482. [Google Scholar] [CrossRef]
  29. Abu-Shanab, E.; Pearson, M. Internet banking in Jordan: An Arabic instrument validation process. Int. Arab. J. Inf. Technol. 2009, 6, 235–244. [Google Scholar]
  30. Yenyuen, Y.; Yeow, P.H.P. User Acceptance of Internet Banking Service in Malaysia. In Web Information Systems and Technologies; Springer: Berlin Heidelberg, Germany, 2009; Volume 18, p. 295. [Google Scholar] [CrossRef]
  31. Williams, M.; Rana, N.; Dwivedi, Y.; Lal, B.; Association for Information Systems AIS Electronic Library (AISeL). Is UTAUT Really Used or Just Cited for the Sake of It? A Systematic Review of Citations of UTAUT’s Originating Article. 2011. Available online: https://aisel.aisnet.org/ecis2011/231/ (accessed on 25 May 2024).
  32. Almahri, F.; Bell, D.; Arzoky, M. Personas Design for Conversational Systems in Education. Informatics 2019, 6, 46. [Google Scholar] [CrossRef]
  33. Almahri, F. Persona Design for Educational Chatbots. Doctoral Dissertation, Brunel University, London, UK, 2021. [Google Scholar]
  34. Raman, A.; Don, Y. Preservice teachers’ acceptance of learning management software: An application of the UTAUT2 model. Int. Educ. Stud. 2013, 6, 157–164. [Google Scholar] [CrossRef]
  35. Zhou, T.; Lu, Y.; Wang, B. Integrating TTF and UTAUT to explain mobile banking user adoption. Comput. Hum. Behav. 2010, 26, 760–767. [Google Scholar] [CrossRef]
  36. Moore, G.C.; Benbasat, I. Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation. Inf. Syst. Res. 1991, 2, 192–222. [Google Scholar] [CrossRef]
  37. Thompson, R.L.; Higgins, C.A.; Howell, J.M. Personal Computing: Toward a Conceptual Model of Utilization. MIS Q. 1991, 15, 125–143. [Google Scholar] [CrossRef]
  38. Brown, S.; Venkatesh, V. Model of Adoption of Technology in Households: A Baseline Model Test and Extension Incorporating Household Life Cycle. MIS Q. 2005, 29, 399–426. [Google Scholar] [CrossRef]
  39. Limayem, M.; Hirt, S.G.; Cheung, C.M.K. How Habit Limits the Predictive Power of Intention: The Case of Information Systems Continuance. MIS Q. 2007, 31, 705–737. [Google Scholar] [CrossRef]
  40. Kim, S.S.; Malhotra, N.K. A Longitudinal Model of Continued IS Use: An Integrative View of Four Mechanisms Underlying Postadoption Phenomena. Manag. Sci. 2005, 51, 741–755. [Google Scholar] [CrossRef]
  41. Pickett, L.L.; Ginsburg, H.J.; Mendez, R.V.; Lim, D.E.; Blankenship, K.R.; Foster, L.E.; Lewis, D.H.; Ramon, S.W.; Saltis, B.M.; Sheffield, S.B. Ajzen’s Theory of Planned Behavior as it Relates to Eating Disorders and Body Satisfaction. N. Am. J. Psychol. 2012, 14, 339–354. [Google Scholar]
  42. Lewis, C.C.; Fretwell, C.E.; Ryan, J.; Parham, J.B. Faculty Use of Established and Emerging Technologies in Higher Education: A Unified Theory of Acceptance and Use of Technology Perspective. Int. J. High. Educ. 2013, 2, 22–34. [Google Scholar] [CrossRef]
  43. Fuksa, M. Mobile technologies and services development impact on mobile internet usage in Latvia. Procedia Comput. Sci. 2013, 26, 41–50. [Google Scholar] [CrossRef]
  44. Saunders, M.; Lewis, P.; Thornhill, A. Research Methods for Business Students; Prentice Hall/Financial Times: Maldon, UK, 2009. [Google Scholar]
  45. Creswell, J.W. Research Design: Qualitative, Quantitative, and Mixed Method Approaches; SAGE Publications: New York, NY, USA, 2014; p. 273. [Google Scholar]
  46. Presser, S.; Couper, M.P.; Lessler, J.T.; Martin, E.; Martin, J.; Rothgeb, J.M.; Singer, E. Methods for Testing and Evaluating Survey Questions. Public Opin. Q. 2004, 68, 109–130. [Google Scholar] [CrossRef]
  47. Sekaran, U.; Bougie, R. Research Methods for Business: A Skill Building Approach, 5th ed.; Wiley India Pvt. Ltd.: New Delhi, India, 2011. [Google Scholar]
  48. Zikmund, W.G.; Babin, B.J.; Carr, J.C.; Griffin, M. Business Research Methods; South-Western Cengage Learning: Mason, OH, USA, 2000. [Google Scholar]
  49. Nargundkar, R. Marketing Research-Text & Cases 2E; Tata McGraw-Hill Education: New York, NY, USA, 2003. [Google Scholar]
  50. Murtagh, F.; Heck, A. Multivariate Data Analysis; Prentice-Hall: Hoboken, NJ, USA, 2010. [Google Scholar]
  51. Arbuckle, J. Amos 18 User’s Guide; SPSS Incorporated: Chicago, IL, USA, 2009. [Google Scholar]
  52. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R.L. Multivariate Data Analysis; Prentice Hall: Hoboken, NJ, USA; Pearson: London, UK, 2006. [Google Scholar]
  53. Tabachnick, B.G.; Fidell, L.S. Using Multivariate Statistics (980); Pearson: London, UK, 2007. [Google Scholar]
  54. Pallant, J. SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS; McGraw-Hill: New York, NY, USA, 2010. [Google Scholar]
  55. Abrahão, R.d.S.; Moriguchi, S.N.; Andrade, D.F. Intention of adoption of mobile payment: An analysis in the light of the Unified Theory of Acceptance and Use of Technology (UTAUT). RAI Rev. Adm. Inovação 2016, 13, 221–230. [Google Scholar] [CrossRef]
  56. Teo, T. The impact of subjective norm and facilitating conditions on pre-service teachers’ attitude toward computer use: A structural equation modeling of an extended technology acceptance model. J. Educ. Comput. Res. 2009, 40, 89–109. [Google Scholar] [CrossRef]
  57. Maldonado, U.P.T.; Khan, G.F.; Moon, J.; Rho, J.J. E-learning motivation and educational portal acceptance in developing countries. Online Inf. Rev. 2009, 35, 66–85. [Google Scholar] [CrossRef]
  58. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. Manag. Inf. Syst. 1989, 13, 319–339. [Google Scholar] [CrossRef]
  59. Moon, J.W.; Kim, Y.G. Extending the TAM for a World-Wide-Web context. Inf. Manag. 2001, 38, 217–230. [Google Scholar] [CrossRef]
  60. Chang, S.C.; Tung, F.C. An empirical investigation of students’ behavioural intentions to use the online learning course websites. Br. J. Educ. Technol. 2008, 39, 71–83. [Google Scholar] [CrossRef]
  61. Park, S.Y. An analysis of the technology acceptance model in understanding University students’ behavioral intention to use e-Learning. Educ. Technol. Soc. 2009, 12, 150–162. [Google Scholar]
  62. Mayisela, T. The potential use of mobile technology: Enhancing accessibility and communication in a blended learning course. S. Afr. J. Educ. 2013, 33, 1–18. [Google Scholar] [CrossRef]
  63. Özgür, H. Adapting the media and technology usage and attitudes scale to Turkish. Kuram Uygulamada Egit. Bilim. 2016, 16, 1711–1735. [Google Scholar] [CrossRef]
  64. Brace, N.; Kemp, R.; Snelgar, R. SPSS for Psychologists: A Guide to Data Analysis Using SPSS for Windows; Palgrave Macmillan: New York, NY, USA, 2003. [Google Scholar]
  65. Hinton, P.R.; McMurray, I.; Brownlow, C.; Cozens, B. SPSS Explained Perry; Routledge: London, UK, 2004. [Google Scholar]
  66. Pheeraphuttharangkoon, S. The Adoption, Use and Diffusion of Smartphones among Adults over Fifty in the UK; University of Hertfordshire: Hatfield, UK, 2015. [Google Scholar]
  67. Hair, J.F.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a silver bullet. J. Mark. Theory Pract. 2011, 19, 139–151. [Google Scholar] [CrossRef]
  68. Wong, K.K.-K. Mediation Analysis, Categorical Moderation Analysis, and Higher-Order Constructs Modeling in Partial Least Squares Structural Equation Modeling (PLS-SEM): A B2B Example Using SmartPLS. Mark. Bull. 2016, 26, 1–22. [Google Scholar] [CrossRef]
  69. Werts, C.E.; Linn, R.L.; Jöreskog, K.G. Intraclass reliability estimates: Testing structural assumptions. Educ. Psychol. Meas. 1974, 34, 25–33. [Google Scholar] [CrossRef]
  70. Nunnally, J.; Bernstein, I. Psychometric Theory; McGraw-Hill: New York, NY, USA, 1995. [Google Scholar]
  71. Bagozzi, R.; Yi, Y. On the Evaluation of Structural Equation Models. J. Acad. Mark. Sci. 1988, 16, 74–94. [Google Scholar] [CrossRef]
  72. Hair, J.; Hult, G.T.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); SAGE Publications: New York, NY, USA, 2013. [Google Scholar]
  73. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39. [Google Scholar] [CrossRef]
  74. Bhimasta, R.A. An Empirical Investigation of Student Adoption Model toward Mobile E-Textbook: UTAUT2 and TTF Model. In Proceedings of the 2nd International Conference on Communication and Information Processing, Singapore, 26 November 2016. [Google Scholar]
  75. Tarhini, A. The Effects of Individual-Level Culture and Demographic Characteristics on E-Learning Acceptance in Lebanon and England: A Structural Equation Modelling Approach. SSRN Electron. J. 2013. [Google Scholar] [CrossRef]
  76. North-Samardzic, A.; Jiang, B. Acceptance and use of Moodle by students and academics. In Proceedings of the 2015 Americas Conference on Information Systems, AMCIS, Fajardo, Puerto Rico, 13–15 August 2015. [Google Scholar]
  77. Ameri, A.; Khajouei, R.; Ameri, A.; Jahani, Y. Acceptance of a mobile-based educational application (LabSafety) by pharmacy students: An application of the UTAUT2 model. Educ. Inf. Technol. 2019, 25, 419–435. [Google Scholar] [CrossRef]
  78. Farooq, M.S.; Salam, M.; Jaafar, N.; Fayolle, A.; Ayupp, K.; Radovic-Markovic, M.; Sajid, A. Acceptance and use of lecture capture system (LCS) in executive business studies: Extending UTAUT2. Interact. Technol. Smart Educ. 2017, 14, 329–348. [Google Scholar] [CrossRef]
  79. Mafraq, H.; Kotb, Y. Maarefh—Proposed MOOCs’ platform for Saudi Arabia’s higher education institutions. In Proceedings of the ACM International Conference Proceeding Series, Part F1483, Aizu-Wakamatsu, Japan, 29–31 March 2019; pp. 77–82. [Google Scholar] [CrossRef]
  80. Jakkaew, P.; Hemrungrote, S. The use of UTAUT2 model for understanding student perceptions using Google Classroom: A case study of Introduction to Information Technology course. In Proceedings of the 2nd Joint International Conference on Digital Arts, Media and Technology 2017: Digital Economy for Sustainable Growth, ICDAMT 2017, Chiang Mai, Thailand, 1–4 March 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 205–209. [Google Scholar] [CrossRef]
  81. El-Masri, M.; Tarhini, A. Factors affecting the adoption of e-learning systems in Qatar and USA: Extending the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). Educ. Technol. Res. Dev. 2017, 65, 743–763. [Google Scholar] [CrossRef]
  82. Yang, S. Understanding Undergraduate Students’ Adoption of Mobile Learning Model: A Perspective of the Extended UTAUT2. J. Converg. Inf. Technol. 2013, 8, 969–979. [Google Scholar] [CrossRef]
  83. Melián-González, S.; Gutiérrez-Taño, D.; Bulchand-Gidumal, J. Predicting the intentions to use chatbots for travel and tourism. Curr. Issues Tour. 2019, 24, 192–210. [Google Scholar] [CrossRef]
  84. Morosan, C.; DeFranco, A. It’s about time: Revisiting UTAUT2 to examine consumers’ intentions to use NFC mobile payments in hotels. Int. J. Hosp. Manag. 2016, 53, 17–29. [Google Scholar] [CrossRef]
  85. Almahri, F.; Salem, I.E.; Elbaz, A.M.; Aideed, H.; Gulzar, Z. Digital Transformation in Omani Higher Education: Assessing Student Adoption of Video Communication during the COVID-19 Pandemic. Informatics 2024, 11, 21. [Google Scholar] [CrossRef]
  86. Singhal, R.; Kumar, A.; Singh, H.; Fuller, S.; Gill, S.S. Digital device-based active learning approach using virtual community classroom during the COVID-19 pandemic. Comput. Appl. Eng. Educ. 2021, 29, 1007–1033. [Google Scholar] [CrossRef]
  87. Pagano, A.; Marengo, A. Training time optimization through adaptive learning strategy. In Proceedings of the 2021 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Zallaq, Bahrain, 29–30 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 563–567. [Google Scholar]
  88. Marengo, A.; Pagano, A.; Barbone, A. Data mining methods to assess student behavior in adaptive e-learning processes. In Proceedings of the 2013 Fourth International Conference on E-Learning “Best Practices in Management, Design and Development of e-Courses: Standards of Excellence and Creativity”, Manama, Bahrain, 7–9 May 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 303–309. [Google Scholar]
  89. Sumak, B.; Sorgo, A. The acceptance and use of interactive whiteboards among teachers: Differences in UTAUT determinants between pre- and post-adopters. Comput. Hum. Behav. 2016, 64, 602–620. [Google Scholar] [CrossRef]
  90. Sok Foon, Y.; Chan Yin Fah, B. Internet Banking Adoption in Kuala Lumpur: An Application of UTAUT Model. Int. J. Bus. Manag. 2011, 6, 161. [Google Scholar] [CrossRef]
  91. Ain, N.U.; Kaur, K.; Waheed, M. The influence of learning value on learning management system use: An extension of UTAUT2. Inf. Dev. 2016, 32, 1306–1321. [Google Scholar] [CrossRef]
  92. Tan, E.; Teo, D. Appsolutely Smartphones: Usage and Perception of Apps for Educational Purposes. Asian J. Scholarsh. Teach. Learn. 2015, 5, 55–75. [Google Scholar]
Figure 1. Research iterations.
Figure 2. The Unified Theory of Acceptance and Use of Technology [27].
Figure 3. Extended Unified Theory of Acceptance and Use of Technology [26].
Figure 4. Persona3D template for university students, adapted from [32].
Figure 5. Persona3D model [33].
Figure 6. The proposed conceptual model—the Extended UTAUT2 model [33].
Figure 7. Result of consistent PLS algorithm.
Figure 8. Bootstrapping result from SmartPLS3 [2].
Table 1. Cronbach’s Alpha, inter-item correlation, and item-to-total correlation for the pilot study.
Table 1. Cronbach’s Alpha, inter-item correlation, and item-to-total correlation for the pilot study.
Factor | Items | Cronbach’s Alpha | Inter-Item Correlation | Item-to-Total Correlation
PE | 4 | 0.930 | 0.719–0.821 | 0.822–0.868
EE | 4 | 0.921 | 0.630–0.763 | 0.749–0.821
SI | 3 | 0.956 | 0.849–0.934 | 0.870–0.935
FC | 4 | 0.854 | 0.458–0.900 | 0.528–0.809
HM | 3 | 0.952 | 0.863–0.880 | 0.891–0.903
PV | 3 | 0.920 | 0.760–0.855 | 0.790–0.862
HB | 3 | 0.842 | 0.504–0.894 | 0.563–0.842
BI | 3 | 0.898 | 0.623–0.827 | 0.739–0.896
USE | 9 | 0.933 | 0.197–0.970 | 0.553–0.948
Table 2. KMO and Bartlett results.
KMO and Bartlett’s Test
Kaiser–Meyer–Olkin Measure of Sampling Adequacy0.924
Table 3. Adjusted Cronbach’s Alpha, composite reliability, and average variance extracted.
Cronbach’s Alpharho_AComposite ReliabilityAverage Variance Extracted (AVE)R2
PE0.9340.9340.9340.780.917
EE0.8760.9080.8720.64
SI0.9510.9750.9520.872
FC0.8280.8460.8060.523
HM0.9340.9340.9340.826
HT0.9370.9390.9360.83
BI0.9370.9390.9380.834
USE0.8910.8640.6960.2840.114
Table 4. Initial outer loading.
Indicator | Construct | Outer Loading
BI1 | BI | 0.873
BI2 | BI | 0.941
BI3 | BI | 0.924
EE1 | EE | 0.543
EE2 | EE | 0.968
EE3 | EE | 0.719
EE4 | EE | 0.902
FC1 | FC | 0.643
FC2 | FC | 0.461
FC3 | FC | 0.833
FC4 | FC | 0.879
HM1 | HM | 0.904
HM2 | HM | 0.903
HM3 | HM | 0.918
HT1 | HT | 0.974
HT2 | HT | 0.879
HT3 | HT | 0.876
PE2 | PE | 0.888
PE3 | PE | 0.848
PE4 | PE | 0.899
PE1 | PE | 0.896
SI1 | SI | 0.74
SI2 | SI | 1.02
SI3 | SI | 1.013
USE1 | USE | 0.513
USE2 | USE | 0.044
USE3 | USE | 0.149
USE4 | USE | 0.535
USE5 | USE | 0.681
USE6 | USE | 0.016
USE7 | USE | 0.251
USE8 | USE | 0.706
USE9 | USE | 0.976
Table 5. Item loading.
Construct | BI | EE | FC | HM | HT | PE | SI | USE
BI | 0.913
EE | 0.517 | 0.8
FC | 0.522 | 0.831 | 0.723
HM | 0.72 | 0.63 | 0.665 | 0.909
HT | 0.913 | 0.385 | 0.452 | 0.616 | 0.911
PE | 0.917 | 0.525 | 0.565 | 0.74 | 0.853 | 0.883
SI | 0.258 | 0.089 | 0.034 | 0.21 | 0.193 | 0.241 | 0.934
USE | 0.341 | 0.351 | 0.322 | 0.359 | 0.323 | 0.344 | 0.313 | 0.612
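For most constructs, the diagonal entries of Table 5 match the square root of the AVE reported in Table 3, which is the quantity the Fornell–Larcker criterion compares against the inter-construct correlations. The snippet below illustrates that check for three constructs (BI, HM, and SI) using values read off Tables 3 and 5; it is an illustrative Python sketch, not part of the original analysis, and the sub-matrix is abbreviated for brevity.

```python
# Fornell-Larcker sketch: sqrt(AVE) of each construct should exceed its
# correlations with the other constructs. AVE values from Table 3; correlations
# from Table 5 (off-diagonal entries), restricted to BI, HM, and SI.
import numpy as np
import pandas as pd

ave = {"BI": 0.834, "HM": 0.826, "SI": 0.872}
corr = pd.DataFrame(
    [[1.000, 0.720, 0.258],
     [0.720, 1.000, 0.210],
     [0.258, 0.210, 1.000]],
    index=["BI", "HM", "SI"], columns=["BI", "HM", "SI"],
)
for c in corr.columns:
    max_other = corr[c].drop(c).abs().max()
    print(f"{c}: sqrt(AVE) = {np.sqrt(ave[c]):.3f}, max |corr| with others = {max_other:.3f}")
```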
Table 6. Results for each hypothesis, path coefficient (B), T-value, significance (p-value) and hypothesis support.
Relationship | Original Sample (O) | Sample Mean (M) | Standard Deviation (STDEV) | T Statistics (|O/STDEV|) | p-Values | Supported: YES/NO
HT->BI | 0.510 | 0.508 | 0.065 | 7.881 | 0.000 | Yes
BI->USE | 0.341 | 0.345 | 0.051 | 6.703 | 0.000 | Yes
PE->BI | 0.395 | 0.397 | 0.080 | 4.915 | 0.000 | Yes
EE->BI | 0.156 | 0.156 | 0.066 | 2.363 | 0.018 | Yes
FC->BI | −0.122 | −0.121 | 0.068 | 1.807 | 0.071 | No
HM->BI | 0.090 | 0.089 | 0.051 | 1.740 | 0.082 | No
SI->BI | 0.036 | 0.036 | 0.021 | 1.719 | 0.086 | No
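The bootstrap columns in Table 6 come from SmartPLS3 resampling: each path coefficient is re-estimated on many resamples of the respondents, the standard deviation of those estimates serves as the standard error, and the T statistic is |O| divided by that standard error. The snippet below is a simplified stand-in (a standardised single-predictor regression rather than the full PLS estimator) that illustrates this logic on hypothetical construct scores; the file and column names are assumptions.

```python
# Simplified bootstrap illustration of the Table 6 columns for one path (BI -> USE).
import numpy as np
import pandas as pd
from scipy import stats

def std_slope(x: np.ndarray, y: np.ndarray) -> float:
    """Standardised regression slope (equals the correlation for one predictor)."""
    x = (x - x.mean()) / x.std(ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    return float(np.polyfit(x, y, 1)[0])

df = pd.read_csv("construct_scores.csv")       # hypothetical file with BI and USE scores
x, y = df["BI"].to_numpy(), df["USE"].to_numpy()

rng = np.random.default_rng(0)
original = std_slope(x, y)
boot = []
for _ in range(5000):                           # 5000 subsamples is a common PLS-SEM choice
    idx = rng.integers(0, len(x), len(x))       # resample respondents with replacement
    boot.append(std_slope(x[idx], y[idx]))

se = np.std(boot, ddof=1)
t = abs(original) / se
p = 2 * (1 - stats.norm.cdf(t))                 # approximate two-tailed p-value
print(f"O = {original:.3f}, M = {np.mean(boot):.3f}, SE = {se:.3f}, T = {t:.3f}, p = {p:.3f}")
```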
Table 7. Result of Multi-Group Analysis—age moderator.
Relationship | t-Values (HA) | t-Values (LA) | p-Values (HA) | p-Values (LA) | Path Coefficients-Diff (|LA − HA|) | p-Value (LA vs. HA) | Supported: YES/NO
BI->USE | 5.988 | 5.317 | 0.000 | 0.000 | 0.151 | 0.959 | Yes
EE->BI | 2.101 | 1.960 | 0.036 | 0.050 | 0.001 | 0.497 | No
FC->BI | 0.856 | 0.708 | 0.392 | 0.479 | 0.019 | 0.395 | No
HM->BI | 0.424 | 2.800 | 0.672 | 0.005 | 0.107 | 0.105 | No
HT->BI | 5.788 | 9.461 | 0.000 | 0.000 | 0.053 | 0.278 | No
PE->BI | 5.721 | 5.174 | 0.000 | 0.000 | 0.163 | 0.959 | Yes
SI->BI | 1.115 | 1.827 | 0.265 | 0.068 | 0.022 | 0.307 | No
HA refers to high age, LA refers to low age.
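The group comparisons in Tables 7–13 rest on the same bootstrap machinery run separately per group: the difference between the two groups’ path coefficients is judged against their bootstrap distributions. The sketch below illustrates one common PLS-MGA style of p-value, namely the proportion of bootstrap pairs in which one group’s estimate exceeds the other’s. It is not the SmartPLS3 routine itself, and the bootstrap draws here are synthetic, purely for illustration; in practice they would come from running the resampling sketch shown after Table 6 on each subsample (e.g., high-age vs. low-age).

```python
# Rough PLS-MGA-style comparison of one path coefficient across two groups.
import numpy as np

def mga_p_value(boot_a: np.ndarray, boot_b: np.ndarray) -> float:
    """Share of bootstrap pairs in which group A's coefficient exceeds group B's."""
    return float((boot_a[:, None] > boot_b[None, :]).mean())

# Synthetic stand-ins for per-group bootstrap estimates of, e.g., PE -> BI.
rng = np.random.default_rng(42)
boot_high_age = rng.normal(loc=0.45, scale=0.08, size=2000)
boot_low_age = rng.normal(loc=0.30, scale=0.09, size=2000)

print("P(path_HA > path_LA) =", round(mga_p_value(boot_high_age, boot_low_age), 3))
```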
Table 8. Result of Multi-Group Analysis—gender moderator.
Relationship | t-Values (F) | t-Values (M) | p-Values (F) | p-Values (M) | Path Coefficients-Diff (|M − F|) | p-Value (M vs. F) | Supported: YES/NO
BI->USE | 4.905 | 5.217 | 0.000 | 0.000 | 0.065 | 0.766 | No
EE->BI | 0.007 | 2.730 | 0.994 | 0.006 | 0.154 | 0.022 | Yes
FC->BI | 0.001 | 1.353 | 0.999 | 0.176 | 0.067 | 0.818 | No
HM->BI | 2.789 | 1.918 | 0.005 | 0.055 | 0.001 | 0.508 | No
HT->BI | 7.636 | 8.384 | 0.000 | 0.000 | 0.194 | 0.978 | Yes
PE->BI | 3.581 | 5.972 | 0.000 | 0.000 | 0.077 | 0.224 | No
SI->BI | 0.315 | 1.362 | 0.753 | 0.173 | 0.058 | 0.125 | No
F refers to female and M to male.
Table 9. Result of Multi-Group Analysis—experience moderator.
Relationship | t-Values (E) | t-Values (LNE) | p-Values (E) | p-Values (LNE) | Path Coefficients-Diff (|LNE − E|) | p-Value (LNE vs. E) | Supported: YES/NO
BI->USE | 4.300 | 3.200 | 0.000 | 0.001 | 0.100 | 0.950 | Yes
EE->BI | 1.500 | 1.800 | 0.000 | 0.076 | 0.000 | 0.400 | No
FC->BI | 0.100 | 1.300 | 1.000 | 0.204 | 0.100 | 0.800 | No
HM->BI | 0.900 | 2.900 | 0.000 | 0.003 | 0.100 | 0.300 | No
HT->BI | 6.200 | 8.500 | 0.000 | 0.000 | 0.100 | 0.100 | No
PE->BI | 5.400 | 3.800 | 0.000 | 0.000 | 0.100 | 0.800 | No
SI->BI | 2.000 | 1.300 | 0.000 | 0.202 | 0.100 | 0.950 | Yes
E refers to experience, LNE refers to low and no experience.
Table 10. Result of Multi-Group Analysis—attendance moderator.
Relationship | t-Values (HA) | t-Values (LA) | p-Values (HA) | p-Values (LA) | Path Coefficients-Diff (|LA − HA|) | p-Values (LA vs. HA) | Supported: YES/NO
BI->USE | 4.195 | 5.168 | 0.000 | 0.000 | 0.100 | 0.048 | Yes
EE->BI | 2.658 | 1.029 | 0.008 | 0.300 | 0.000 | 0.688 | No
FC->BI | 0.670 | 1.427 | 0.503 | 0.150 | 0.100 | 0.804 | No
HM->BI | 2.456 | 0.979 | 0.014 | 0.330 | 0.100 | 0.731 | No
HT->BI | 9.057 | 7.293 | 0.000 | 0.000 | 0.000 | 0.433 | No
PE->BI | 5.687 | 5.689 | 0.000 | 0.000 | 0.100 | 0.136 | No
SI->BI | 2.431 | 0.568 | 0.015 | 0.570 | 0.000 | 0.718 | No
Table 11. Result of Multi-Group Analysis—engagement with VLEs moderator.
Relationship | t-Value (H_VLE) | t-Value (L_VLE) | p-Value (H_VLE) | p-Value (L_VLE) | Path Coefficients-Diff (|L_VLE − H_VLE|) | p-Value (L_VLE vs. H_VLE) | Supported: YES/NO
BI->USE | 6.530 | 0.794 | 0.000 | 0.427 | 0.075 | 0.405 | No
EE->BI | 2.860 | 0.865 | 0.000 | 0.387 | 0.012 | 0.466 | No
FC->BI | 0.820 | 1.271 | 0.410 | 0.204 | 0.159 | 0.964 | Yes
HM->BI | 2.020 | 2.029 | 0.040 | 0.042 | 0.162 | 0.103 | No
HT->BI | 9.720 | 6.582 | 0.000 | 0.000 | 0.048 | 0.288 | No
PE->BI | 7.190 | 3.043 | 0.000 | 0.002 | 0.079 | 0.749 | No
SI->BI | 1.780 | 1.664 | 0.080 | 0.096 | 0.087 | 0.124 | No
H_VLE refers to high VLE and L_VLE refers to low VLE.
Table 12. Result of Multi-Group Analysis—educational level moderator.
Relationship | t-Values (L3&4) | t-Values (L1&2) | p-Values (L3&4) | p-Values (L1&2) | Path Coefficients-Diff (|L1&2 − L3&4|) | p-Value (L1&2 vs. L3&4) | Supported: YES/NO
BI->USE | 4.080 | 5.370 | 0.000 | 0.000 | 0.130 | 0.870 | No
EE->BI | 1.660 | 1.850 | 0.100 | 0.070 | 0.050 | 0.710 | No
FC->BI | 0.450 | 0.990 | 0.650 | 0.320 | 0.090 | 0.810 | No
HM->BI | 0.090 | 1.990 | 0.930 | 0.050 | 0.110 | 0.110 | No
HT->BI | 7.250 | 8.110 | 0.000 | 0.000 | 0.080 | 0.810 | No
PE->BI | 3.890 | 5.750 | 0.000 | 0.000 | 0.040 | 0.360 | No
SI->BI | 1.700 | 1.550 | 0.090 | 0.120 | 0.010 | 0.550 | No
L3&4 refers to level 3 and 4 students and L1&2 refers to level 1 and 2 students.
Table 13. Result of Multi-Group Analysis—grade moderator.
Relationship | t-Values (HG) | t-Values (LG) | p-Values (HG) | p-Values (LG) | Path Coefficients-Diff (|LG − HG|) | p-Value (LG vs. HG) | Supported: YES/NO
BI->USE | 3.557 | 1.140 | 0.000 | 0.254 | 0.110 | 0.216 | No
EE->BI | 1.260 | 2.734 | 0.208 | 0.006 | 0.090 | 0.158 | No
FC->BI | 1.045 | 0.363 | 0.296 | 0.717 | 0.040 | 0.328 | No
HM->BI | 2.443 | 1.910 | 0.015 | 0.056 | 0.000 | 0.521 | No
HT->BI | 9.840 | 3.466 | 0.000 | 0.001 | 0.110 | 0.816 | No
PE->BI | 4.785 | 2.895 | 0.000 | 0.004 | 0.060 | 0.336 | No
SI->BI | 0.637 | 1.539 | 0.524 | 0.124 | 0.070 | 0.111 | No
HG refers to high grade and LG refers to low grade.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
