Educ. Sci. 2018, 8(2), 70; doi:10.3390/educsci8020070

Article
Factors Affecting MOOC Usage by Students in Selected Ghanaian Universities
1. College of Law and Management Studies, University of KwaZulu-Natal, Durban 4041, South Africa
2. Department of Management, Ghana Technology University College, Accra PMB 100, Ghana
3. Department of Computer Science, Ho Technical University, Ho HP217, Ghana
* Author to whom correspondence should be addressed.
Received: 18 March 2018 / Accepted: 26 April 2018 / Published: 16 May 2018

Abstract:
There has been widespread criticism of the participation rates of students enrolled on MOOCs (Massive Open Online Courses), particularly the percentage of students who actively consume course materials from beginning to end. The current study sought to investigate this trend by examining the factors that influence MOOC adoption and use by students in selected Ghanaian universities. The Unified Theory of Acceptance and Use of Technology (UTAUT) was extended to develop a research model. A survey was conducted with 270 questionnaires administered to students who had been assigned MOOCs; 204 questionnaires were retrieved for analysis. Findings of the study show that MOOC usage intention is influenced by computer self-efficacy, performance expectancy, and system quality. Results also showed that MOOC usage is influenced by facilitating conditions, instructional quality, and MOOC usage intention. Social influence and effort expectancy were found not to have a significant influence on MOOC usage intention. The authors conclude that universities must have structures and resources in place to promote the use of MOOCs by students. Computer skills training should also be part of the educational curriculum at all levels. MOOC designers must ensure that MOOCs have good instructional quality by using the right pedagogical approaches, and that the sites and learning materials are of good quality.
Keywords:
e-learning; technology adoption; MOOC; UTAUT

1. Introduction

A Massive Open Online Course (MOOC) is usually a free course, open (globally) to anyone who wishes to enroll, that delivers downloadable video and text-based instructional content, quizzes, assignments, and discussion forums through an online platform. MOOCs are accessible to people who plan to take a course or to be schooled in a particular subject area [1]. MOOCs differ from most online courses in that they are normally time-based, such that cohorts of students complete the course over the same time period.
Two pioneers of online learning in Canada, George Siemens and Stephen Downes, created and taught the first course that could be classified as a MOOC [1]. In 2008, Siemens and Downes taught the course “Connectivism and Connected Knowledge (CCK08)” to 25 students at the University of Manitoba. ‘Connectivism’ is an educational theory proposed by George Siemens that highlights the importance of connections between people and knowledge [2]. CCK08 was a traditional fee-paying course, but Siemens and Downes decided to open up access to the course to anyone who desired to join it online [2]. About 2200 additional students joined the course online.
Between 2008 and 2012, MOOCs did not catch the attention of mainstream media, nor find much adoption amongst educational institutions [3]. The course credited with being the catalyst for the explosion of the term “MOOC” was “CS 271: Introduction to Artificial Intelligence”. This course was taught in 2012 by Sebastian Thrun, a professor at Stanford University, and Peter Norvig, the Director of Research at Google [4]. CS 271 was a regular course offered at Stanford University which students took for credit, but Thrun and Norvig used a learning management system (LMS) to host short videos, quizzes, tests, and discussion boards for people who wanted access to the course materials. Eventually, anyone interested in the course, student or not, had access to the content [5]. Participants in the course were not required to create and maintain active student-student and student-tutor connections [6]. By the time the course ended, there were 30 regular students and about 160,000 online participants [7]. CS 271 generated enormous publicity for MOOCs, which resulted in the creation of a range of MOOC platforms such as Coursera, Udacity, and edX [8].
CCK08 and CS 271 were courses that influenced the categorization of MOOCs into two major types; the cMOOC and the xMOOC respectively [1]. The cMOOC (CCK08 course) was created based on the learning theory of Connectivism, a concept that has principles developed by George Siemens [9]. The Connectivist theory proposes that people learn through active connections with others, and through interactions, knowledge is transferred and learning is achieved. There is no tutor as such; the course instructor only serves as a moderator who ensures that the course meets the objectives for which it was designed [9]. cMOOCs can be considered as extensions of Personal Learning Environments (PLEs) and Personal Learning Networks (PLNs) [10].
The xMOOC (CS 271) falls into the cognitive-behaviorist pedagogical classification; first and foremost, it is an existing course, made available online [11]. It is similar to the traditional pedagogical model; it involves the use of an instructor(s) who transmits knowledge via downloadable videos and text files. In addition to these, the course contains quizzes, assignments, and online group forums. The popular MOOC platforms such as Coursera and edX offer xMOOCs, making them more popular than the cMOOCs [12].
Despite the huge growth in the number of MOOCs and a proportionate growth in enrolment numbers, participation in MOOCs after enrolment, as well as completion of the courses, has been widely criticized [2]. The leading MOOC platforms such as Coursera and edX have typical completion rates of less than 13 percent of those who registered before the course started [13]. Several MOOCs have completion rates as low as 4 or 5 percent [13]. For example, the University of Edinburgh ran six MOOCs in 2013 with over 600,000 students registering for the courses. After the courses had ended, Statements of Accomplishment (which show that the learner has seen most or all of the MOOC content) were given out to only 30,000 students, representing 12 percent of the total number of students who registered [2]. In another instance, Duke University offered the Bioelectricity MOOC, which had 12,725 students registering. Of those who registered, only 7761 students watched a video, only 3658 took at least one quiz, and 313 passed the course with a certificate, a dropout rate of about 97 percent [13]. A study conducted by Zhang and Yuan [14] on 79,186 users and 39 courses on the Chinese XuetangX MOOC platform showed dropout rates of between 66.09 and 92.93%. Between 2012 and 2015, more than 25 million people worldwide enrolled on MOOCs; however, reports indicate that only a small percentage, approximately 10%, completed the courses [15].
Generally, MOOC completion can be described as performing all the activities that earn a student a certificate [2]; however, MOOC completion remains a controversial topic. Some authors have argued that people enroll on MOOCs for various reasons, not primarily to complete the MOOC, so completion cannot be measured only by the performance of all tasks that enable a student to earn a certificate. For instance, Porter [2] argues that a second look must be taken at the issue of MOOC completion by investigating the expectations and priorities of the target group of participants before one begins to analyze completion figures. Zheng et al. [16] also argue that there are four general types of student motivations for joining MOOCs: fulfilling current needs, preparing for the future, satisfying curiosity, and connecting with people; hence, course completion may not be the ultimate aim of joining a MOOC.
Some authors have also argued that students’ intrinsic motivations alone should not be seen as the most important factor influencing MOOC usage and subsequent completion; course design is also an important factor that influences completion. For instance, Porter [2] states that the design of the learning environment in online educational systems is important in influencing usage; it entails the design of the learning platform as well as the form of support and encouragement given to the learner. Encouragement could take the form of weekly emails reminding learners of their progress in the course and emphasizing the progress made by their peers [2]. Cole [17] also argues that, in order to improve learner retention, online learning materials should be properly designed.
The MOOC is a form of online learning, which offers many benefits and opportunities to both tutors and learners. In order to harness the opportunities offered by online educational systems, some lecturers in selected universities in Ghana have tried to engage students to use MOOCs to improve their learning. However, consistent with international trends, the implementation of these MOOCs has seen poor participation and usage. As the literature suggests, poor participation in MOOCs after enrolment, as well as low completion rates, are a source of concern.
A review of the literature revealed that little research has been done in the area of MOOC usage; for instance, there is little exploratory work on intention to use MOOCs and actual MOOC usage. This study addresses that paucity by probing the specific factors that account for MOOC adoption and usage in universities in Ghana. There is very little research on MOOC usage in African countries; this study therefore gives academics a platform for cross-cultural evaluation of the application of technology adoption theories and models.

1.1. Literature Review

The aim of the study was to identify the factors that influence the adoption and use of MOOCs as an online educational technology to support student learning. Technology adoption is therefore an essential aspect of the framework of the study. A review of the literature indicated that there are several models/theories of technology adoption, such as the Theory of Reasoned Action (TRA), the Theory of Planned Behavior (TPB), Social Cognitive Theory (SCT), the Technology Acceptance Model (TAM), the Extended Technology Acceptance Model (TAM2), and the Unified Theory of Acceptance and Use of Technology (UTAUT). The current study adapted the UTAUT for the theoretical framework; the reason for this choice is explained in the next section.
The Unified Theory of Acceptance and Use of Technology (UTAUT) model was developed by Venkatesh, Morris, Davis and Davis in 2003 [18] to address the limitations of the Technology Acceptance Model (TAM) and other popular models used in the study of information systems adoption. Their goal was to create a unified model by merging different models of technology adoption as well as results from several researchers in the area of technology adoption [19]. Venkatesh et al. [18] identified and studied eight previously established models:
  • Theory of Reasoned Action (TRA)—Ajzen & Fishbein (1980)
  • Technology Acceptance Model (TAM)—Davis (1989)
  • Motivational model (MM)—Davis et al. (1992)
  • Theory of Planned Behaviour (TPB)—Ajzen (1985)
  • C-TAM-TPB—a model combining TAM and the Theory of Planned Behaviour (TPB)—Taylor and Todd (1995)
  • Model of Personal Computer Utilization (MPCU)—Thompson et al. (1991)
  • Innovation Diffusion Theory (IDT)—Rogers (1983 and 2003)
  • Social Cognitive Theory (SCT)—Compeau & Higgins (1995) [19]
Venkatesh et al. [18] used longitudinal data from four organizations over a 6-month period to empirically compare the eight models in both voluntary and mandatory settings. The resultant model was the Unified Theory of Acceptance and Use of Technology (UTAUT). UTAUT posits that four constructs play a significant role in determining user acceptance and user behavior: performance expectancy, effort expectancy, social influence, and facilitating conditions [18]. The researchers also identified moderating factors found to significantly influence the intention to use technology and explored their effect on technology use; these moderators are gender, age, experience, and voluntariness of use [18].
According to Venkatesh et al. [18], the variance in intention to use explained by the contributing models ranged from 17 to 53%. The UTAUT model was found to explain more of the variance in intention to use than any of the eight contributing models [18]. The researchers also found that the moderating factors significantly improved the predictive power of all the models except the Motivational Model and Social Cognitive Theory. UTAUT is one of the most powerful and thorough theories for explaining IT adoption, largely because it integrates as many as eight theories [20].
Once an appropriate model for technology adoption and use had been identified for the study, it was critical to find out how extensively this model had been used in the study of technology use in e-learning settings. UTAUT was originally created to assess technology adoption in organizational/corporate settings, but a review of the literature showed that UTAUT has been applied in the context of e-learning, as it has been beneficial in understanding technology use in various e-learning settings. The next section will discuss studies in the area of e-learning that involved the use of the UTAUT as a technology adoption model.
Dečman [21] conducted a study to determine the impact of the UTAUT variables on the intention to use an e-learning system in a mandatory setting. The study used factor analysis and structural equation modeling for data validation. UTAUT was shown to be an appropriate model to examine technology adoption in an e-learning setting. Social influence and performance expectancy were shown to significantly influence usage intention of the technology. Results also showed no significant influence of students’ previous education or gender on the model fit [21].
A study conducted by Wang et al. [22] on mobile learning (m-learning) usage showed that performance expectancy, effort expectancy, social influence, perceived playfulness, and self-management of learning were all significant determinants of behavioral intention to use m-learning. The study used a framework based on the UTAUT by adding the perceived playfulness and self-management of m-learning constructs. A survey was conducted and data analysis was done using structural equation modeling [22].
Using UTAUT as the theoretical framework, Pynoo et al. [23] studied secondary school teachers’ acceptance of a digital learning environment (DLE). The study revealed that the main predictors of DLE acceptance were performance expectancy and social influence by superiors to use the DLE, while effort expectancy and facilitating conditions were not significant [23].
Juinn & Tan [24] conducted a study in Taiwan using UTAUT as a theoretical lens to investigate Taiwanese college students’ acceptance of English e-learning websites. They found that effort expectancy had a positive effect on behavioral intentions. Their study also revealed that facilitating conditions had a direct effect on the use of English e-learning websites [24]. In the area of open access educational content, Dulle [25] conducted a study in Tanzanian universities using UTAUT. The study revealed that effort expectancy is a key determinant of researchers’ behavioral intention to use open-access educational content. Dulle [25] also found that facilitating conditions significantly affect researchers’ actual usage of open access educational content [25].
While UTAUT focuses on technology adoption in general, when considering the adoption and usage of e-learning systems such as MOOCs, additional factors need to be taken into account. Further review of the literature showed that several other factors have been found to influence the adoption and use of e-learning systems. As several researchers have extended UTAUT to study technology adoption in the context of e-learning, this study will follow suit and extend UTAUT with relevant variables found to be significant in e-learning adoption and use.
The following variables will be used to extend UTAUT, since several studies have found them to significantly influence e-learning adoption and use: instructional quality, computer self-efficacy, and system quality. Instructional quality refers to students’ opinions of the effectiveness of teaching methods and the total quality of the course design; it includes the student’s views on the skills of the instructor as well as the quality of information provided. Instructor characteristics and teaching materials have a positive effect on perceived usefulness, which in turn has a positive effect on intention to use e-learning. Design of learning content has a positive effect on perceived ease of use, which in turn has a positive effect on intention to use e-learning [26]. Instructional quality is a significant positive predictor of students’ satisfaction in an online course [27]. E-learning effectiveness in a Blackboard e-learning system was found to be influenced by the quality of multimedia instruction [28].
System quality refers to the perceived overall quality of the e-learning system. It is derived from the Information Systems Success Model [29]. System quality influences system usage intention and subsequent system usage [29]. Users’ continuance intention of online learning platforms is determined by satisfaction, which is in turn determined by perceived system quality [30]. System quality is positively related to intention to continue usage of online learning systems [31]. Perceived system quality influences students’ behavioral intentions to use online learning course websites [32].
Self-efficacy can be described as a person’s subjective judgment of his or her skill level to execute certain behaviors or obtain certain results in the future. Computer self-efficacy has a significant effect on behavioral intention to use e-learning [33]. Self-efficacy is significantly positively related to students’ overall satisfaction with a self-paced, online course [27]. Computer self-efficacy affects students’ behavioral intentions to use online learning course websites [32].
From the review of the literature, the following variables from UTAUT were used for the construction of the research model: performance expectancy, effort expectancy, social influence, facilitating conditions, behavioral intention, and usage behavior. The following additional variables were used to extend UTAUT: instructional quality, computer self-efficacy, and system quality.

1.2. Research Model and Hypothesis Development

Current research has shown that performance expectancy, effort expectancy, social influence, and facilitating conditions have significant effects on intention to use e-learning systems [21,22,23,24,25]. Additionally, we argue, based on the research by Artino [27], Delone and McLean [29], Ramayah et al. [31], and Chang and Tung [32], that computer self-efficacy, system quality, and instructional quality also have a significant effect on the use of e-learning systems.
The hypotheses are stated in the following sections:
Performance Expectancy: Performance expectancy describes an individual’s perceptions about the potential of a particular technology to perform different activities [15]. UTAUT proposed that performance expectancy has a direct influence on behavioral intention to use a particular technology.
In line with theory and the findings from Dečman [21], Wang et al. [22], and Pynoo et al. [23], we posit that:
Hypothesis 1.
Performance expectancy has a significant effect on students’ intentions to use MOOCs.
Social Influence: Venkatesh et al. [18] defined social influence as the degree to which an individual perceives that important others believe he or she should use a technology [15]. Social influence has a direct effect on behavioral intention to use a particular technology, moderated by gender, age, experience, and voluntariness of use [15].
In line with theory and the findings from Dečman [21], Wang et al. [22], and Pynoo et al. [23], we posit that:
Hypothesis 2.
Social influence has a significant effect on students’ intentions to use MOOCs.
Effort Expectancy: Effort expectancy is defined as the level of ease related to the use of technology [15]. UTAUT proposes that effort expectancy has a direct influence on behavioral intention to use a particular technology; moderated by gender, age and experience [15].
In line with theory and the findings from the works of Dečman [21], Wang et al. [22], Pynoo et al. [23], Juinn & Tan [24], and Dulle [25], we posit that:
Hypothesis 3.
Effort expectancy has a significant effect on students’ intentions to use MOOCs.
Facilitating Conditions: Facilitating conditions refers to the extent to which users are convinced that essential organizational and technical infrastructure supports the use of the technology [15]. According to Venkatesh et al. [18], facilitating conditions influence usage behavior directly, moderated by age and experience [15].
In line with theory and the findings from the works of Dečman [21], Wang et al. [22], Pynoo et al. [23], Juinn & Tan [24], and Dulle [25], we posit that:
Hypothesis 4.
Facilitating conditions have a significant effect on students’ MOOC usage behavior.
Computer Self-efficacy: Self-efficacy can be defined as a person’s subjective judgment of his or her skill level to execute certain behaviors or obtain certain results in the future. Computer self-efficacy therefore refers to self-efficacy with regard to the use of computers. Studies by Artino [27] and Chang and Tung [32] showed that computer self-efficacy significantly influenced e-learning system usage; Artino [27] adopted and expanded TAM, while Chang and Tung [32] combined the innovation diffusion theory (IDT) and the technology acceptance model (TAM). Malek and Karim [34] conducted a study in Saudi Arabian government universities and found that computer self-efficacy significantly influences students’ intention to use e-learning [34].
In line with the findings above we posit that:
Hypothesis 5.
Computer self-efficacy has a significant effect on students’ intentions to use MOOCs.
System Quality: System quality refers to the perceived overall quality of the e-learning system. This construct is derived from the Information Systems Success Model [35]. System quality influences system usage intention and subsequent system usage [29]. Chiu et al. [30] conducted research on students in a Taiwanese university and found that users’ continuance intention is determined by satisfaction, which is in turn determined by perceived system quality. Ramayah et al. [31] explored quality factors in the intention to continue using an e-learning system in Malaysia; their study showed that service quality, information quality, and system quality were positively related to intention to continue usage. Chang and Tung [32] also state that perceived system quality influences students’ behavioral intention to use online learning course websites.
In line with theory and the findings above we posit that:
Hypothesis 6.
System quality has a significant effect on MOOC usage intention.
Instructional quality: Instructional Quality refers to students’ views of the effectiveness of teaching methods and the overall quality of the course design. Instructional quality includes the student’s perceptions of the skills of the instructor as well as the quality of information delivered.
Instructor characteristics and quality of teaching materials were found by Lee et al. [26] to have a positive influence on perceived usefulness, which in turn had a positive influence on intention to use e-learning. Lee et al. [26] surveyed 250 undergraduate students in a university in South Korea for the study. Design of learning content had a positive effect on perceived ease of use, which in turn had a positive effect on intention to use e-learning [26].
Artino [27] states that instructional quality is a significant positive predictor of students’ satisfaction. Liaw [28] conducted a study on students’ usage of a Blackboard e-learning system at the University of Central Taiwan. His findings showed that e-learning effectiveness can be influenced by the quality of multimedia instruction.
In line with the findings above we posit that:
Hypothesis 7.
Instructional quality has a significant effect on students’ MOOC usage behavior.
Intention to Use and Usage Behaviour: MOOC usage intention refers to the pre-determined decision of a student to use the MOOC in the near future. Usage intention is theorized to result in usage behavior. Several theories/models have proposed a direct influence of behavioral intention on usage behavior; for instance TAM [37], TPB [38], UTAUT [18], and UTAUT2 [39].
Several studies have also confirmed the influence of behavioral intention on usage behavior: Ain et al. [40], Dečman [21], Chiu & Wang [41], Pynoo et al. [23], and Wang et al. [22].
In line with theory and the findings above we posit that:
Hypothesis 8.
Students’ intention to use MOOCs has a significant effect on students’ MOOC usage behavior.
The proposed research model is shown in Figure 1.

2. Research Method

2.1. Instrument Development

The research model included nine variables, each measured with multiple items. The items were adapted from the literature in order to improve content validity [42] and re-phrased to reflect the context of MOOCs and the study environment. The questions were reviewed by other researchers to ensure that they were appropriate for respondents and comprehensible, and were revised based on their feedback. A pilot study was then conducted with 100 students to validate the instrument. Results from an exploratory factor analysis suggested that the instrument had good validity.

2.2. Measurement Instrument

Performance Expectancy and Social Influence were each measured with four items derived from Venkatesh et al. [18]. Effort Expectancy, Facilitating Conditions, and MOOC Usage Intention were each measured with three items derived from Venkatesh et al. [18]. System Quality, which assesses the access speed, ease of use, reliability, system features, efficiency, ease of navigation, flexibility, privacy, and visual appeal of the MOOC, was measured with four items derived from Kim et al. [43]. Computer Self-Efficacy was measured with six items adopted from the Murphy Self-Efficacy Scale [44] and refined to fit the context of the MOOC. Instructional Quality was measured with four items derived from Frick et al. [45], Ozkan and Koseler [46], and Choi et al. [47] and re-worded to fit the context of the MOOC. MOOC Usage was measured with four items adopted from Pavlou [48] and structured to fit the MOOC context.

2.3. Sample and Data Collection

The field survey was conducted over a one-week period in January 2018 at Ho Technical University (HTU) in Ho, and Ghana Technology University College (GTUC) in Accra, Ghana. HTU and GTUC were chosen because some lecturers in those institutions had been assigning MOOCs to their students; the location of the institutions was also convenient for the researchers, as choosing Ho and Accra helped them use time and funds more efficiently and improved the efficiency of questionnaire administration. With the aid of the lecturers, the researchers recruited two research assistants to help with data collection. A total of 270 questionnaires were given out to the students who had been assigned the MOOCs; since the lecturers knew the particular classes involved, the students in those classes were targeted. The research assistants were responsible for retrieving the questionnaires once the respondents had completed them. Of the 270 questionnaires distributed, a total of 204 were retrieved, representing a response rate of 76%. The high response rate was due to the cooperation of the respondents and the lecturers.

2.4. Data Analysis Method

Analysis of the collected data was done using the partial least squares approach to structural equation modeling (PLS-SEM) in SmartPLS 3. Structural equation modeling is a second-generation statistical technique that enables researchers to examine causal relationships between latent variables [49]. There are two approaches to SEM: covariance-based SEM and variance-based SEM. Covariance-based SEM requires that the data show multivariate normality, while the variance-based approach (PLS-SEM) does not [49]. The current study employed PLS-SEM because preliminary analysis showed that the data were non-normal; Table 1 shows that none of the values for skewness and excess kurtosis is 0.
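The normality screening described above can be sketched as follows. The Likert responses here are randomly generated placeholders, not the study's data; the item names (PE1, EE1) are hypothetical labels for illustration.

```python
# Sketch of the normality screening: for each survey item, compute skewness
# and excess kurtosis. Values far from 0 suggest non-normal data, favoring
# variance-based PLS-SEM over covariance-based SEM.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
# Hypothetical 5-point Likert responses for two items (n = 204 respondents)
items = {
    "PE1": rng.integers(1, 6, size=204),
    "EE1": rng.integers(1, 6, size=204),
}

for name, values in items.items():
    s = skew(values)
    k = kurtosis(values)  # Fisher's definition: excess kurtosis, 0 for a normal distribution
    print(f"{name}: skewness = {s:.3f}, excess kurtosis = {k:.3f}")
```

Bounded Likert scales rarely yield exactly zero skewness or excess kurtosis, which is consistent with the paper's decision to use PLS-SEM.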

3. Results

Consistent with the two-step approach for assessing structural equation models recommended by Chin [50], we first evaluated the measurement model for reliability and validity. Next, we went on to test the structural paths between the variables in the proposed model. The Smart PLS 3 software was used to evaluate the reliability and validity of the measurement model, as well as test the structural model.
Measurement model assessment: The measurement model was assessed based on reliability, discriminant validity and convergent validity. Reliability of constructs was assessed using Cronbach’s α. From Table 1, it can be seen that Cronbach’s α values are higher than the threshold of 0.7 [51]. It can therefore be concluded that the measurement model exhibits good reliability.
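As a rough illustration of the reliability check, Cronbach's α can be computed directly from an item-score matrix. The six-respondent matrix below is hypothetical, not the study's data; SmartPLS reports α for each construct automatically.

```python
# Minimal sketch of the Cronbach's alpha reliability check; alpha >= 0.7
# is the usual threshold cited in the text.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of Likert scores for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students x 3 items of one construct
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 2],
])
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.3f}")  # well above the 0.7 threshold here
```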
Convergent validity of the measurement model was also evaluated based on recommendations by Henseler et al. [52] that the average variance extracted (AVE) for each latent construct should be greater than 0.5 [53]. From Table 2 it can be seen that AVE values for all constructs are higher than the 0.5 threshold. We can therefore conclude that the measurement model exhibits good convergent validity.
Discriminant validity was assessed using the Fornell-Larcker criterion, which states that the AVE of each latent construct should be greater than its highest squared correlation with any other construct [54]. It can be seen from Table 2 that the square root of the AVE for each construct is greater than its cross-correlations with the other constructs. Based on these results, we conclude that the measurement model exhibits good discriminant validity.
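The two validity checks can be illustrated together. The indicator loadings and the latent correlation below are hypothetical values, not the study's estimates (SmartPLS reports AVE and the Fornell-Larcker table directly).

```python
# Sketch of the convergent (AVE > 0.5) and discriminant (sqrt(AVE) > latent
# correlations) validity checks, using hypothetical standardized loadings
# for two constructs.
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return float(np.mean(np.square(loadings)))

ave_pe = ave([0.82, 0.79, 0.85, 0.77])  # hypothetical Performance Expectancy loadings
ave_sq = ave([0.80, 0.84, 0.76, 0.81])  # hypothetical System Quality loadings
corr_pe_sq = 0.52                        # hypothetical latent correlation

assert ave_pe > 0.5 and ave_sq > 0.5     # convergent validity (Henseler et al.)
assert np.sqrt(ave_pe) > corr_pe_sq      # discriminant validity (Fornell-Larcker)
assert np.sqrt(ave_sq) > corr_pe_sq
print(f"AVE(PE) = {ave_pe:.3f}, sqrt(AVE) = {np.sqrt(ave_pe):.3f}")
```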
Structural Model Assessment: After confirming the validity and reliability of the measurement model, the structural model was assessed. The structural model was assessed based on the sign, magnitude and significance of path coefficients of each hypothesized path. Bootstrapping was performed to determine the significance of each estimated path. To determine the explanatory power of the structural model, its ability to predict the endogenous constructs was assessed using the coefficient of determination R2. Results for the structural model assessment are presented in Table 3.
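A bare-bones sketch of the bootstrapping idea follows, using an ordinary least-squares slope on simulated data as a stand-in for a PLS path coefficient. Everything here is illustrative; SmartPLS resamples the full structural model internally.

```python
# Bootstrap significance test for a path coefficient: resample respondents
# with replacement, re-estimate the coefficient each time, and use the
# bootstrap standard error to form a t-statistic.
import numpy as np

rng = np.random.default_rng(0)
n = 204                                       # sample size matching the study
x = rng.normal(size=n)
y = 0.35 * x + rng.normal(size=n)             # simulated data, true slope 0.35

def path_coef(x, y):
    """OLS slope of y on x, standing in for a PLS path coefficient."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, size=n)          # resample respondents with replacement
    boot[i] = path_coef(x[idx], y[idx])

beta = path_coef(x, y)
se = boot.std(ddof=1)
print(f"beta = {beta:.3f}, bootstrap SE = {se:.3f}, t = {beta / se:.2f}")
```

A path is judged significant when the bootstrap t-statistic exceeds the critical value (about 1.96 at the 5% level), which is how the p-values in Table 3 are obtained.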
Performance Expectancy was found to have a significant positive effect on MOOC Usage Intention (β = 0.318, p = 0.000), which provides support for H1. However, Social Influence was found not to have a significant effect on MOOC Usage Intention (β = 0.078, p = 0.129), so H2 is not supported. Contrary to expectation, Effort Expectancy was also found not to have a significant effect on MOOC Usage Intention (β = 0.084, p = 0.103), so H3 is not supported. Facilitating Conditions was found to have a significant positive effect on MOOC Usage (β = 0.378, p = 0.000), which provides support for H4. Computer Self-Efficacy was found to have a significant positive effect on MOOC Usage Intention (β = 0.103, p = 0.026), which provides support for H5. System Quality was also found to have a significant positive effect on MOOC Usage Intention (β = 0.511, p = 0.000), which provides support for H6. Instructional Quality was found to have a significant positive effect on MOOC Usage (β = 0.359, p = 0.000), which provides support for H7. As expected, MOOC Usage Intention was found to have a significant positive effect on MOOC Usage (β = 0.354, p = 0.000), which provides support for H8.
In all, the proposed model accounted for 75.8 percent of the variance in MOOC Usage and 77.4 percent of the variance in MOOC Usage Intention (R2 of 0.758 and 0.774, respectively). To assess model fit in PLS we used the standardized root mean square residual (SRMR). The SRMR value for the model was 0.093; values below 0.08 are generally considered to indicate good fit [55]. Although the SRMR slightly exceeds the cutoff recommended by Hu and Bentler [55], the reliability, validity, and R2 measures show that the model is able to explain the relationships in the hypothesized paths.
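For reference, SRMR is the square root of the mean squared residual between the observed and model-implied correlation matrices, taken over the unique (lower-triangle) elements. The sketch below computes it for a toy 3 × 3 case with hypothetical correlation values:

```python
import math

def srmr(observed, implied):
    """Standardized root mean square residual between an observed and a
    model-implied correlation matrix (lower triangle incl. diagonal)."""
    p = len(observed)
    residuals = [
        (observed[i][j] - implied[i][j]) ** 2
        for i in range(p) for j in range(i + 1)
    ]
    return math.sqrt(sum(residuals) / len(residuals))

# Toy 3x3 correlation matrices (hypothetical values for illustration only).
observed = [[1.00, 0.52, 0.38],
            [0.52, 1.00, 0.44],
            [0.38, 0.44, 1.00]]
implied  = [[1.00, 0.47, 0.45],
            [0.47, 1.00, 0.40],
            [0.45, 0.40, 1.00]]
print(f"SRMR = {srmr(observed, implied):.3f}")  # below 0.08 -> good fit
```

Because correlations are already standardized, no further scaling of the residuals is needed, which is what makes SRMR directly comparable across models.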

4. Discussion

The current study sought to investigate the factors that influence MOOC usage by students in selected tertiary institutions in Ghana. The study was motivated by the fact that lecturers who assigned MOOCs to their students reported a general lack of interest and poor usage of the MOOCs. As stated earlier, participation in MOOCs after enrolment, as well as course completion rates, has been widely criticized [2].
We approached the study from a technology adoption perspective; the findings of our study show that MOOC usage intention is influenced by computer self-efficacy, performance expectancy, and system quality. The study also showed that MOOC usage is influenced by facilitating conditions, instructional quality, and MOOC usage intention.
Six of the eight stated hypotheses were supported, which, to a large extent, supports our research model. A review of the literature reveals a paucity of research on MOOC adoption, usage, and continuance intention compared to e-learning in general.
Our study provides support for studies conducted by Artino [27], Chang and Tung [32], and Malek and Karim [34], which show that computer self-efficacy influences students’ usage of e-learning systems. This points to the fact that relevant skills in the usage of computing devices are important for the effective use of MOOCs. We therefore propose that lecturers ensure that students who lack the necessary skills in the use of computing devices are given training that enables them to use MOOCs and other e-learning systems effectively. Consistent with work done by Ain et al. [40] on Learning Management System use by students in Malaysia, and by Al-Shafi et al. [56] on e-government services adoption, the current study did not provide support for the relationship established in the UTAUT model between effort expectancy and usage intention. In the study by Ain et al., effort expectancy was found not to have a significant influence on Learning Management System use by students. This may be because students place more importance on usefulness and learning from the MOOC than on the effort required to learn. Unlike traditional technology systems, which are often designed to improve efficiency and hence reduce effort, learning is seen as an activity that requires effort; tools that support the process are not necessarily used to reduce effort but rather to improve the effectiveness of the learning. The study also did not provide support for the relationship established in the UTAUT model between social influence and usage intention. This result is consistent with findings by Magsamen-Conrad et al. [57], who found that social influence has no significant influence on tablet use intentions. The implication is that students feel they do not need the support of their social circle to be motivated to use a MOOC.
It could also mean that students have the perception that lecturers and the university as a whole will not support them in the use of MOOCs, since the social influence construct was measured with items that considered perceived help from lecturers and support from the university. We propose that lecturers in the universities encourage students to use MOOCs, assure them of their willingness to help, and check on their progress. Universities should provide students with needed resources such as good internet connectivity and computer labs to encourage students to participate in MOOCs.
As was the case with studies conducted by Dečman [21], Wang et al. [22], Pynoo et al. [23], and Tan [24], facilitating conditions was found to have a significant influence on MOOC usage. Universities must therefore provide the necessary structures and resources that will promote the use of MOOCs by students. Lecturers should incorporate the use of MOOCs into their teaching where possible. Universities must also provide controlled internet access to students and set up libraries or laboratories where students can have access to computers for the purposes of e-learning.
Consistent with Lee et al. [26], the study showed that instructional quality has a significant relationship with MOOC usage. MOOC providers must therefore strive to provide a platform that provides personalized feedback, addresses individual differences, and motivates students while avoiding information overload. The MOOC platform must also create real-life context, encourage social interaction, provide hands-on activities, and encourage more student reflection [58]. This means the right pedagogical approach must be adopted in designing the instructional materials. MOOCs tend to be based on the cognitive-behaviorist and social pedagogical approaches [59,60]. As a result, as recommended by Alzaghoul [61], the following need to be considered in designing instructional content: learners need to be informed of the learning outcomes of the online course; the online lesson must include tests at particular sections to check the learner’s level of understanding of the material; the learning materials must be sequenced properly to promote learning; and, finally, learners must be given feedback so they can monitor their progress and take corrective action if need be.
Our study also provides support for studies conducted by Dečman [21], Wang et al. [22], and Pynoo et al. [23], which show that performance expectancy has a significant effect on usage intention. This shows that students are of the view that consistent participation in a MOOC can positively impact their academic performance. We propose that lecturers and university management as a whole find ways of motivating students to participate in MOOCs. This can be done through consistent encouragement from lecturers, posting of MOOC information on student notice boards, formation of MOOC clubs, and commissioning of MOOC champions. Lastly, system quality was found to have a significant influence on MOOC usage intention. This is consistent with work done by Chang and Tung [32] and Ramayah et al. [31], and reinforces the fact that MOOC designers must ensure that MOOCs are of good quality. They can achieve this by ensuring that the site loads quickly, is easy to use and navigate, and is visually attractive, and that mobile access is supported through responsive design.

5. Conclusions

To improve MOOC usage by students in Ghanaian universities, we propose that universities promote the use of MOOCs by providing resources such as internet access and computer labs; this will encourage those who do not have computing devices and cannot afford internet services to use MOOCs. Universities should also develop activities that encourage students to partake in MOOCs, and lecturers should encourage and support students to use MOOCs in innovative ways. Computer skills training should be part of the educational curriculum at all levels. Finally, MOOC designers must ensure good instructional quality by using the right pedagogical approaches; this is likely to improve student engagement with MOOCs, especially when students experience learning while using them.

Author Contributions

E.F. conceived the idea of the study. E.F. and C.B. developed the research model, survey, and measurement instrument. G.O.A.A. assisted in data collection and responded to most of the comments from reviewers. K.S.O. analyzed the data and wrote the “Results” section. E.F. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baturay, M.H. An overview of the world of MOOCs. Procedia Soc. Behav. Sci. 2015, 174, 427–433. [Google Scholar] [CrossRef]
  2. Porter, S. To MOOC or Not to MOOC How Can Online Learning Help to Build the Future of Higher Education? Chandos Publishing: Waltham, MA, USA, 2015. [Google Scholar]
  3. Daniel, J. Making Sense of MOOCs: Musings in a Maze of Myth, Paradox and Possibility. J. Interact. Media Educ. 2012, 2012, 18. [Google Scholar] [CrossRef]
  4. Moe, R. The brief & expansive history (and future) of the MOOC: Why two divergent models share the same name. Curr. Issues Emerg. eLearn. 2015, 2, 1–24. [Google Scholar]
  5. Cheal, C. Creating MOOCs for College Credit. Educ. Cent. Anal. Res. 2013, 1–8. [Google Scholar]
  6. Vanderbilt, T. How Artificial Intelligence Can Change Higher Education. Available online: https://www.smithsonianmag.com/people-places/how-artificial-intelligence-can-change-higher-education-136983766/ (accessed on 27 January 2018).
  7. Friedman, T. Come the Revolution. 2012. Available online: http://www.nytimes.com/2012/05/16/opinion/friedman-come-the-revolution.html (accessed on 27 January 2018).
  8. Pappano, L. The Year of the MOOC. The New York Times. Available online: http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html (accessed on 15 November 2018).
  9. Kesim, M.; Alt, H. A Theoretical Analysis of Moocs Types From A Perspective of Learning Theories. Procedia Soc. Behav. Sci. 2015, 186, 15–19. [Google Scholar] [CrossRef]
  10. Kop, R. The challenges to connectivist learning on open online networks: Learning experiences during a massive open online course. Int. Rev. Res. Open Distance Learn. 2011, 12, 1–11. [Google Scholar] [CrossRef]
  11. Rodriguez, O. MOOCs and the AI—Stanford like Courses: Two Successful and Distinct Course Formats for Massive Open Online Courses. Eur. J. Open Distance E-Learn. 2012, 2, 1–12. [Google Scholar]
  12. Pomerol, J.C.; Epelboin, Y.; Thoury, C. MOOCs: Design, Use and Business Models; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  13. Jordan, K. Initial trends in enrolment and completion of massive open online courses Massive Open Online Courses. Int. Rev. Res. Open Distance Learn. 2014, 15, 133–160. [Google Scholar]
  14. Zhang, T.; Yuan, B. Visualizing MOOC User Behaviors: A Case Study on XuetangX. In Intelligent Data Engineering and Automated Learning—IDEAL 2016; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 8206, pp. 89–98. [Google Scholar]
  15. Zhenghao, C.; Alcorn, B.; Christensen, G.; Eriksson, N.; Koller, D.; Emanuel, E. Who’s benefiting from MOOCs, and Why. Harv. Bus. Rev. 2015, 22. [Google Scholar]
  16. Zheng, S.; Rosson, M.B.; Shih, P.C.; Carroll, J.M. Understanding Student Motivation, Behaviors, and Perceptions in MOOCs. In Motivation and Dynamics of the Open Classroom; CSCW: Vancouver, BC, Canada, 2015; pp. 1882–1895. [Google Scholar]
  17. Cole, R.A. Issues in Web-Based Pedagogy: A Critical Primer; Greenwood Publishing Group: Westport, CT, USA, 2000. [Google Scholar]
  18. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  19. Bhatiasevi, V. An extended UTAUT model to explain the adoption of mobile banking. Inf. Dev. 2016, 32, 799–814. [Google Scholar] [CrossRef]
  20. Al-Qeisi, K.; Dennis, C.; Hegazy, A.; Abbad, M. How viable is the UTAUT model in a non-Western context? Int. Bus. Res. 2015, 8, 204–219. [Google Scholar] [CrossRef]
  21. Dečman, M. Modeling the acceptance of e-learning in mandatory environments of higher education: The influence of previous education and gender. Comput. Hum. Behav. 2015, 49, 272–281. [Google Scholar] [CrossRef]
  22. Wang, Y.S.; Wu, M.C.; Wang, H.Y. Investigating the determinants and age and gender differences in the acceptance of mobile learning. Br. J. Educ. Technol. 2009, 40, 92–118. [Google Scholar] [CrossRef]
  23. Pynoo, B.; Devolder, P.; Tondeur, J.; van Braak, J.; Duyck, W.; Duyck, P. Predicting secondary school teachers’ acceptance and use of a digital learning environment: A cross-sectional study. Comput. Hum. Behav. 2011, 27, 568–575. [Google Scholar] [CrossRef]
  24. Tan, P.J.B. Applying the UTAUT to Understand Factors Affecting the Use of English E-Learning Websites in Taiwan. Sage Open. 2013, 3, 1–12. [Google Scholar] [CrossRef]
  25. Dulle, F.W. The suitability of the Unified Theory of Acceptance and Use of Technology (UTAUT) model in open access adoption studies. SAGE J. 2015, 27, 32–45. [Google Scholar] [CrossRef]
  26. Lee, B.C.; Yoon, J.O.; Lee, I. Learners’ acceptance of e-learning in South Korea: Theories and results. Comput. Educ. 2009, 53, 1320–1329. [Google Scholar] [CrossRef]
  27. Artino, A.R. Motivational beliefs and perceptions of instructional quality: Predicting satisfaction with online training. J. Comput. Assist. Learn. 2008, 24, 260–270. [Google Scholar] [CrossRef]
  28. Liaw, S.S. Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of e-learning: A case study of the Blackboard system. Comput. Educ. 2008, 51, 864–873. [Google Scholar] [CrossRef]
  29. Delone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  30. Chiu, C.M.; Hsu, M.H.; Sun, S.Y.; Lin, T.C.; Sun, P.C. Usability, quality, value and e-learning continuance decisions. Comput. Educ. 2005, 45, 399–416. [Google Scholar] [CrossRef]
  31. Ramayah, T.; Ahmad, N.H.; Lo, M.C. The role of quality factors in intention to continue using an e-learning system in Malaysia. Procedia Soc. Behav. Sci. 2010, 2, 5422–5426. [Google Scholar] [CrossRef]
  32. Chang, S.C.; Tung, F.C. An empirical investigation of students’ behavioural intentions to use the online learning course websites. Br. J. Educ. Technol. 2008, 39, 71–83. [Google Scholar] [CrossRef]
  33. Ong, C.; Lai, J.; Wang, Y. Factors affecting engineers’ acceptance of asynchronous e-learning systems in high-tech companies. Inf. Manag. 2004, 41, 2003–2005. [Google Scholar] [CrossRef]
  34. Malek, A.; Karim, A. An empirical investigation into the role of enjoyment, computer anxiety, computer self-efficacy and internet experience in influencing the students’ intention to use e-learning: A case study. TOJET 2010, 9, 22–34. [Google Scholar]
  35. Delone, W.H.; Mclean, E.R. Measuring e-commerce success: Applying the DeLone & McLean information systems success model. Int. J. Electron. Commer. 2004, 9, 31–47. [Google Scholar]
  36. Santhanamery, T.; Ramayah, T. Understanding the Effect of Demographic and Personality Traits on the E-Filing Continuance Usage Intention in Malaysia. Glob. Bus. Rev. 2015, 16, 1–20. [Google Scholar] [CrossRef]
  37. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  38. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  39. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar]
  40. Ain, N.; Kaur, K.; Waheed, M. The influence of learning value on learning management system use: An extension of UTAUT2. Inf. Dev. 2016, 32, 1306–1321. [Google Scholar] [CrossRef]
  41. Chiu, C.M.; Wang, E.T.G. Understanding Web-based learning continuance intention: The role of subjective task value. Inf. Manag. 2008, 45, 194–201. [Google Scholar] [CrossRef]
  42. Straub, D.; Boudreau, M.C.; Gefen, D. Validation guidelines for IS positivist research. Commun. Assoc. Inf. Syst. 2004, 13, 380–427. [Google Scholar]
  43. Bonk, C.C.J.; Kim, K.K. Future Directions of Blended Learning in Higher Education and Workplace Learning Settings. In The Handbook of Blended Learning: Global Perspectives, Local Designs; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2004; p. 27. [Google Scholar]
  44. Khorrami-Arani, O. Researching computer self-efficacy. Int. Educ. J. 2001, 2, 17–25. [Google Scholar]
  45. Frick, T.W.; Chadha, R.; Watson, C.; Wang, Y.; Green, P. College student perceptions of teaching and learning quality. Educ. Technol. Res. Dev. 2009, 57, 705–720. [Google Scholar] [CrossRef]
  46. Ozkan, S.; Koseler, R. Multi-dimensional students’ evaluation of e-learning systems in the higher education context: An empirical investigation. Comput. Educ. 2009, 53, 1285–1296. [Google Scholar] [CrossRef]
  47. Choi, D.H.; Kim, J.; Kim, S.H. ERP training with a web-based electronic learning system: The flow theory perspective. Int. J. Hum. Comput. Stud. 2007, 65, 223–243. [Google Scholar] [CrossRef]
  48. Pavlou, P.A. Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model. Int. J. Electron. Commer. 2003, 7, 101–134. [Google Scholar]
  49. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson Education International: Essex, UK, 2014. [Google Scholar]
  50. Chin, W.W. The partial least squares approach to structural equation modeling. Mod. Methods Bus. Res. 1998, 295, 295–336. [Google Scholar]
  51. Nunnally, J.C.; Bernstein, I.H. The Assessment of Reliability. Psychom. Theory 1994, 3, 248–292. [Google Scholar]
  52. Henseler, J.; Ringle, C.M.; Sinkovics, R. The use of partial least squares path modeling in international marketing. Adv. Int. Mark. 2009, 20, 277–319. [Google Scholar]
  53. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The use of partial least squares path modeling in international marketing. In New Challenges to International Marketin; Emerald Group Publishing Limited: Bingley, UK, 2009; pp. 277–319. [Google Scholar]
  54. Fornell, C.; Larcker, D.F. Structural equation models with unobservable variables and measurement error: Algebra and statistics. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  55. Hu, L.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  56. Al-Shafi, S.; Weerakkody, V.; Janssen, M. Investigating the Adoption of eGovernment Services in Qatar Using the UTAUT Model. In Proceedings of the Fifteenth Americas Conference on Information Systems, San Francisco, CA, USA, 6–9 August 2009; p. 11. [Google Scholar]
  57. Magsamen-Conrad, K.; Upadhyaya, S.; Joa, C.Y.; Dowd, J. Bridging the divide: Using UTAUT to predict multigenerational tablet adoption practices. Comput. Hum. Behav. 2015, 50, 186–196. [Google Scholar] [CrossRef] [PubMed]
  58. Johnson, S.D.; Aragon, S.R. An instructional strategy framework for online learning environments. New Dir. Adult Contin. Educ. 2003, 2003, 31–43. [Google Scholar] [CrossRef]
  59. Bali, M. MOOC Pedagogy: Gleaning Good Practice from Existing MOOCs. MERLOT J. Online Learn. Teach. 2014, 10, 44–56. [Google Scholar]
  60. Bayne, S.; Ross, J. MOOC pedagogy. In Massive Open Online Courses: The MOOC Revolution; Routledge: Abingdon, UK, 2014; pp. 23–45. [Google Scholar]
  61. Alzaghoul, A.F. The Implication of the Learning Theories on Implementing e-learning Courses. Res. Bull. Jordan ACM 2012, 11, 27–30. [Google Scholar]
Figure 1. The proposed research model.
Table 1. Factor Loading and Reliability Statistics.

| Construct | Mean | Standard Deviation | Excess Kurtosis | Skew | Loadings | t-Value | Cronbach’s α |
|---|---|---|---|---|---|---|---|
| Performance Expectancy | 3.167 | 1.189 | −0.651 | −0.468 | 0.811 | 17.607 | 0.809 |
| | 3.020 | 1.084 | −0.806 | −0.248 | 0.885 | 49.108 | |
| | 3.127 | 1.049 | −0.887 | 0.075 | 0.865 | 41.953 | |
| | 3.500 | 1.215 | −0.410 | −0.537 | 0.596 | 7.800 | |
| Effort Expectancy | 3.505 | 0.952 | −0.033 | −0.340 | 0.779 | 17.743 | 0.674 |
| | 3.725 | 0.936 | 0.930 | −0.908 | 0.812 | 24.747 | |
| | 3.534 | 0.910 | 0.342 | −0.516 | 0.742 | 16.235 | |
| Social Influence | 3.382 | 1.071 | −0.670 | −0.233 | 0.748 | 16.699 | 0.797 |
| | 3.529 | 1.045 | −0.546 | −0.286 | 0.784 | 22.726 | |
| | 3.657 | 1.125 | −0.287 | −0.605 | 0.796 | 20.528 | |
| | 3.770 | 1.005 | 0.027 | −0.603 | 0.818 | 28.063 | |
| Facilitating Conditions | 3.127 | 1.277 | −0.910 | −0.383 | 0.864 | 28.375 | 0.840 |
| | 3.015 | 1.182 | −0.970 | −0.262 | 0.874 | 39.653 | |
| | 3.157 | 1.258 | −0.966 | −0.240 | 0.872 | 29.494 | |
| MOOC Usage Intention | 3.436 | 0.986 | −0.007 | −0.471 | 0.772 | 13.544 | 0.770 |
| | 3.235 | 0.982 | −0.463 | −0.331 | 0.852 | 36.957 | |
| | 3.314 | 0.995 | −0.748 | −0.092 | 0.858 | 36.998 | |
| Computer Self-Efficacy | 4.093 | 1.087 | 1.354 | −1.339 | 0.842 | 23.321 | 0.927 |
| | 4.103 | 1.144 | 1.066 | −1.313 | 0.879 | 30.794 | |
| | 4.020 | 1.075 | 0.441 | −1.040 | 0.840 | 23.041 | |
| | 4.118 | 1.055 | 0.289 | −1.070 | 0.893 | 53.233 | |
| | 3.824 | 1.097 | 0.049 | −0.810 | 0.827 | 32.769 | |
| | 4.015 | 1.041 | 0.420 | −0.923 | 0.845 | 39.581 | |
| System Quality | 3.358 | 1.064 | −0.098 | −0.559 | 0.780 | 15.814 | 0.859 |
| | 3.206 | 0.993 | −0.523 | −0.333 | 0.839 | 35.530 | |
| | 3.270 | 1.053 | −0.611 | −0.228 | 0.868 | 39.466 | |
| | 2.961 | 0.989 | −0.736 | −0.319 | 0.863 | 26.679 | |
| Instructional Quality | 3.093 | 1.255 | −0.906 | −0.357 | 0.815 | 22.834 | 0.875 |
| | 3.029 | 1.216 | −0.956 | −0.221 | 0.862 | 41.127 | |
| | 3.127 | 1.218 | −0.933 | −0.197 | 0.846 | 26.185 | |
| | 2.730 | 1.121 | −1.140 | −0.210 | 0.886 | 35.019 | |
| MOOC Usage | 3.289 | 1.146 | −0.427 | −0.526 | 0.821 | 21.551 | 0.876 |
| | 3.132 | 1.083 | −0.694 | −0.313 | 0.864 | 49.924 | |
| | 3.225 | 1.115 | −0.737 | −0.221 | 0.847 | 25.850 | |
| | 2.931 | 1.055 | −0.867 | −0.316 | 0.882 | 33.794 | |
Table 2. Testing Discriminant Validity using the Fornell-Larcker Criterion.

| Latent Construct | AVE | CSE | EE | FC | IQ | MU | MUI | PE | SI | SQ |
|---|---|---|---|---|---|---|---|---|---|---|
| Computer Self-Efficacy (CSE) | 0.731 | 0.855 | | | | | | | | |
| Effort Expectancy (EE) | 0.606 | 0.452 | 0.778 | | | | | | | |
| Facilitating Conditions (FC) | 0.757 | 0.232 | 0.220 | 0.870 | | | | | | |
| Instructional Quality (IQ) | 0.727 | 0.239 | 0.327 | 0.509 | 0.853 | | | | | |
| MOOC Usage (MU) | 0.729 | 0.314 | 0.344 | 0.722 | 0.692 | 0.854 | | | | |
| MOOC Usage Intention (MUI) | 0.686 | 0.482 | 0.614 | 0.457 | 0.398 | 0.669 | 0.828 | | | |
| Performance Expectancy (PE) | 0.636 | 0.389 | 0.473 | 0.230 | 0.141 | 0.389 | 0.691 | 0.797 | | |
| Social Influence (SI) | 0.619 | 0.394 | 0.531 | 0.248 | 0.227 | 0.358 | 0.572 | 0.425 | 0.787 | |
| System Quality (SQ) | 0.703 | 0.365 | 0.572 | 0.290 | 0.307 | 0.489 | 0.800 | 0.508 | 0.536 | 0.838 |

Note: Square roots of average variances extracted (AVEs) are shown on the diagonal.
Table 3. Hypothesis testing.

| Hypothesized Path | Path Coefficient (β) | t-Statistic | p-Value | Result |
|---|---|---|---|---|
| Computer Self-Efficacy → MOOC Usage Intention | 0.103 | 2.227 | 0.026 | Supported |
| Effort Expectancy → MOOC Usage Intention | 0.084 | 1.632 | 0.103 | Not Supported |
| Facilitating Conditions → MOOC Usage | 0.378 | 5.018 | 0.000 | Supported |
| Instructional Quality → MOOC Usage | 0.359 | 5.177 | 0.000 | Supported |
| MOOC Usage Intention → MOOC Usage | 0.354 | 4.371 | 0.000 | Supported |
| Performance Expectancy → MOOC Usage Intention | 0.318 | 4.403 | 0.000 | Supported |
| Social Influence → MOOC Usage Intention | 0.078 | 1.520 | 0.129 | Not Supported |
| System Quality → MOOC Usage Intention | 0.511 | 5.311 | 0.000 | Supported |

R2 (MOOC Usage) = 0.758
R2 (MOOC Usage Intention) = 0.774
SRMR = 0.093

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).