Construction of the Quality Evaluation Index System of MOOC Platforms Based on the User Perspective

Abstract: Massive open online courses (MOOCs) have become a mainstream form of online learning. At present, many countries are vigorously developing MOOC platforms, which provide people with a helpful means of acquiring knowledge and skills. However, the quality of MOOC platforms varies, which makes it challenging for learners to find excellent courses. Since the evaluation of MOOC quality is a multiple criteria decision-making problem, it is important to identify the major dimensions and criteria that determine platform quality. This paper determines the weight of each dimension and criterion using the best worst method (BWM). The results indicate that content accuracy has the greatest impact on MOOC quality. Five well-known Chinese MOOC websites were selected as research objects, and the VIKOR method was used to rank the platform quality of the five chosen websites. The results show that IMOOC and XuetangX are ranked as the top two websites. These findings help learners deepen their understanding of MOOC platforms and can serve as a reference for MOOC platforms seeking to improve their quality. Techniques that reduce the uncertainty of expert judgments (such as rough sets, fuzzy theory, and grey correlation) and models that clarify the influence relationships between criteria (e.g., DEMATEL-ANP) can be applied in future research.


Introduction
With the rapid development of internet information technology, the emergence of online education has changed traditional teaching methods: learning across time and space has been realized, and learning channels have become flexible and diverse. In 2008, the Canadian scholars Cormier and Alexander proposed the concept of massive open online courses (MOOCs). Subsequently, excellent MOOC platforms such as Coursera, EdX, and Udacity successively appeared outside of China, while Chinese MOOC platforms such as XuetangX and Open Learning are also gradually developing. Although MOOC education is very popular, it also has many problems, such as low course completion rates, a lack of learning motivation, difficulty with self-regulated learning, wasted course resources, and an inability to adapt to the complex and changing online education environment. These problems have led to low passing rates and uneven evaluation systems for MOOCs. Cabrera and Fernández-Ferrer [1] noted that a teaching limitation of MOOCs was teaching evaluation activity and that it was necessary to explore online evaluation systems to make MOOCs a new teaching and learning method in the future and to ensure the survival and development of MOOCs. We therefore tried to establish a scientific, effective, and reasonable MOOC platform quality evaluation index system that fills this research gap.
As MOOC platforms involve many factors, such as course content [2,3], course form [4], and website design [5,6], the evaluation of MOOC platform quality is a multicriteria decision-making (MCDM) problem. Tzeng et al. [7] proposed a new hybrid MCDM model that relaxes the assumption of independence between criteria by using factor analysis and the Decision-Making Trial and Evaluation Laboratory (DEMATEL) to effectively evaluate online education. Lin [8] and Qiu and Ou [9] applied the fuzzy analytic hierarchy process (AHP) to obtain the weights of an indicator system and to evaluate the quality of specific MOOC courses in order to improve MOOC quality. These studies have improved the evaluation of educational service quality to a certain extent.
This paper is divided into six sections. Section 1 introduces the research background of MOOCs, the current dilemma, the research purpose, and the significance of the study. Section 2 reviews the literature and summarizes research on the quality of MOOC platforms and related methods. Section 3 uses the BWM to calculate the index weights and uses the following five MOOC platforms for empirical analysis: MOOC.CN, IMOOC, XuetangX, Study.163, and Open Learning. The data analysis is presented in Section 4. The interpretation of the results and the theoretical applications and implications are discussed in Section 5. Finally, this paper summarizes the development status of MOOCs and provides suggestions for their future development.

Literature Review
MOOCs provide an opportunity to expand access to and participation in education, and the massive open characteristic of MOOCs allows learners to control their learning progress. Huang et al. [10] studied the topic and concluded that the development of online education is closely related to the indicators that affect platform quality. Terras and Ramsay [11] emphasized the importance of designing, developing, and interacting with MOOCs from the perspective of learners' psychology and of how learners can self-regulate and overcome psychological barriers to effectively use MOOCs.
Scholars have conducted research on the evaluation of MOOC education quality across various dimensions. Yousef et al. [6] identified six categories of standards for MOOC design quality, namely, instructional design, assessment, user interface, video content, social tools, and learning analytics. Lin [8] proposed dividing website quality into four standards: system quality, information quality, service quality, and attractiveness, which determine the relative weights of the website quality standards and improve the effectiveness of a website. Chiu et al. [12] claimed that satisfaction plays an important role in online learning; in addition, information, systems, services, procedures, interactions, and other factors can also affect learners' satisfaction. Qi and Liu [13] utilized latent Dirichlet allocation (LDA), an autoencoder, and a text classification model to establish a curriculum evaluation system based on MOOC reviews. By combining theoretical frameworks, Drake et al. [14] proposed five principles for MOOC design: meaningful, attractive, measurable, accessible, and extensible. Miranda et al. [15] proposed using data mining and fuzzy set methods to evaluate MOOCs and obtained an evaluation framework with five first-level indicators: course content, instructional design, interface design, media technology, and curriculum management. Nie et al. [16] proposed a systematic approach to diagnosing and evaluating the quality of MOOC courses; this method integrates standardized rubrics, expert feedback, data mining, and emotion detection into the AHP. Rong et al. [17] constructed a MOOC evaluation index system containing 6 elements and 16 indicators and used a multigranular unbalanced hesitant fuzzy linguistic term set (MGUHFLTS) to describe these indicators.
Many studies from various perspectives have indicated that the quality of education services is an important factor in improving learners' satisfaction and attracting new learners. Selim [18] used the usefulness and ease-of-use constructs of the Technology Acceptance Model (TAM) to evaluate college students' acceptance of a course website and showed that usefulness and ease of use are key factors in improving the acceptance of course websites. Through investigation and research, Sun et al. [19] showed that teachers' attitudes toward teaching courses, students' attitudes toward the curriculum, flexibility of use, and assessment diversity were key factors influencing learners' perceived satisfaction.
Yepes-Baldó et al. [5] provided some quality indicators related to instructional design and platforms for MOOC developers to better improve the education quality of online learning platforms.
In this paper, after referring to the relevant studies by the above experts, the MOOC platform quality evaluation index system is constructed from four dimensions, namely, system function, teaching resources, teaching effect, and social interaction, together with 22 secondary indicators, as follows:

System Function (X 1 )
When analyzing the quality of MOOC platforms, Lin [20] proposed system function reliability, accessibility, and acceptable response time as factors. These factors could help improve the learning outcomes of students on MOOC platforms. Lin [8] and Büyüközkan et al. [21] both mentioned the dimension of system function in their articles, but their focuses varied. Lin [8] mentioned updating learning materials and claimed that the update frequency was an important indicator for evaluating e-learning. MOOC platforms provide interface functions for accessing social networks (such as Twitter, Facebook, and LinkedIn), which can facilitate interaction between participants [2,3]. In the opinion of Büyüközkan et al. [21], system confidentiality is an important foundation of website systems, and data confidentiality is the standard for evaluating whether a website is safe. Fesol and Salam [22] and Ossiannilsson et al. [23] discussed how the flexibility of learning can make learning more convenient and save time for learners. Liu et al. [24] believed that elements of interface design, including the layout, navigation, and links, would affect the user experience. Tools such as navigational maps or frames with indices can ensure quality and help students understand their progress, where they are, and what tasks remain to be completed [3,5,25]. Based on the above literature, the indicators under the system function dimension are flexibility of use, functional diversity, system reliability, system confidentiality, update frequency, and learning navigation.

Teaching Resources (X 2 )
High-quality teaching resources are the foundation for the successful operation of MOOCs. The clear organization and structure of course content are important indicators of the quality of teaching resources [5]. Chiu et al. [12] believed that users can improve their professional skills and knowledge and that high-quality course materials are crucial to their learning satisfaction. Tzeng et al. [7] stated that learners noted that course websites provided them with more accurate and complete information resources, which could help them better understand learning materials. Dehghani et al. [26] designed a conceptual model for identifying teachers' competence in MOOCs; they found that professional skills (instructional content development, instructional design, evaluation, communication, participation, management, and technical skills) were one of the main categories. Li et al. [27] and Li et al. [28] mentioned the importance of personalized learning. With the integration of MOOCs and adaptive learning technologies, MOOC users can easily access personalized support that effectively meets their learning needs, and students can adjust the learning content, learning processes, and activities according to their personal characteristics and learning difficulty. Castaño et al. [2] suggested that the use of diverse resources helps to focus attention on the curriculum, and the richness of course resources is an important indicator [5]. In addition to the accuracy, completeness, specialty, and personalized learning of teaching resources as indicators of quality, this paper also adds the richness of teaching resources, which gives a more comprehensive evaluation of teaching resources.

Teaching Effect (X 3 )
The teaching effect is the scale used to measure the quality of classroom teaching. Course evaluation mirrors course quality [29]. Rotgans and Schmidt [30] showed that the instructional design elements of MOOCs give learners autonomy, which may help motivate learners to enjoy courses, spend more energy on understanding the learning content, continue to participate in classroom activities, and develop positive learning emotions. Instructional design is an important part of course development, especially for MOOCs, which urgently require effective methods to attract many diverse learners [31]. Instructional design quality is positively correlated with MOOC ranking [32]. There is a direct link between a well-designed course and the motivation of its participants; thus, students can be motivated by the attractiveness of the course content [2]. Therefore, the teaching effect dimension includes attraction, brand effect, instructional design, and course evaluation.

Social Interaction (X 4 )
Social interaction is an important dimension for evaluating the quality of MOOCs. When Marks [33] studied online learning, he found three types of interaction behaviors: teachers with students, students with students, and students with content. The interaction between teachers and students mainly includes online and offline methods. High-quality learner activities and learner interactions are important parts of high-quality MOOCs [25]. Fesol and Salam [22] proposed that online interaction could deepen mutual understanding between teachers and students in a timely manner. Yepes-Baldó et al. [5] used the communication of courses through offline resources (communication methods, flyers, posters, brochures, etc.) as one of the indicators to measure the quality of MOOC platforms. Liu et al. [24] believed that interactivity was another important factor affecting MOOC learners' engagement. In addition, Li et al. [28] believed that companies working with third-party testing institutions could address cheating and plagiarism in online testing, thereby enhancing academic integrity and users' trust in MOOC learning. Through the above analysis, this study found that user trust could also be used as an indicator under the social interaction dimension.
In summary, the specific dimensions, indicators, and their literature sources are listed in Table 1.

This paper selected five MOOC websites as research objects, namely, MOOC.CN (V 1 ), IMOOC (V 2 ), XuetangX (V 3 ), Study.163 (V 4 ), and Open Learning (V 5 ). These five MOOC platforms are well-known websites in China; they were established relatively early and have many users and course resources. MOOC.CN offers rich online education resources, free learning, and a high degree of openness to users. IMOOC is a specialized internet IT skills learning website with more than 21.5 million users, more than 1500 cooperating lecturers, and more than 3000 self-produced courses. XuetangX is a MOOC platform initiated by Tsinghua University in October 2013. It is a research exchange and achievement application platform of the Online Education Research Center of the Ministry of Education. Study.163 is an online practical skills learning platform created by NetEase, which was officially launched at the end of December 2012. Open Learning is now the largest MOOC learning community in China. Open Learning has gathered nearly one million learners and is committed to providing Chinese users with a platform to select courses, to comment on them, and to communicate and share them. Although many MOOC platforms charge fees, IMOOC (V 2 ) follows a free model. Additional services, such as follow-up support, are not covered by the indicators; this is a shortcoming of the dimensions and indicators selected in this paper.

The Research Methods
As a branch of decision-making management research, MCDM involves many methods, such as AHP, ANP, DEMATEL, TOPSIS, and grey relational analysis. MCDM models have been widely used in online education, e-commerce, supply chain management, energy management, and other fields [34-36]. Sadi-Nezhad et al. [34] proposed a fuzzy analytic network process (FANP) model to evaluate network learning systems and used it to evaluate the existing e-learning platforms of some universities. In this study, the best worst method (BWM) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) were adopted. The BWM is an improvement on the AHP: instead of comparing all pairs, only the best and the worst dimensions or indicators are compared pairwise with the remaining dimensions or indicators. Such processing reduces the number of comparisons, simplifies the calculation process, and greatly reduces the amount of data. Furthermore, the smaller the consistency indicator value is, the higher the reliability of the acquired data [37]. You et al. [38] used the BWM to determine the weights of the evaluation standards for power grid operations and evaluated the operating performance of power grid enterprises. Gupta et al. [39] and Kaa et al. [40] used the BWM to determine the relative importance of green innovation factors; their research addressed supplier selection based on green innovation and the construction of a green innovation framework, respectively. VIKOR is an MCDM method first developed by Opricovic in 1998. It is a method for solving complex decision situations [41] and is used to solve discrete decision-making problems with incommensurable and conflicting criteria [42]. Shojaei et al. [41] used VIKOR to propose an evaluation and ranking model for the performance of airports.

Introduction of BWM
In the best worst method (BWM), the decision maker does not need to compare all of the criteria, as in the traditional analytic hierarchy process. After determining the best dimension, the worst dimension, and the best and worst criteria under each dimension through expert interviews, the method only requires pairwise comparisons between the best criterion and the other criteria and between the other criteria and the worst criterion. The BWM provides more consistent comparisons than the AHP, and the weights obtained by the BWM are highly reliable [43]. The BWM yields a unique solution when the comparison system is fully consistent or involves no more than two criteria; for comparison systems with three or more criteria that are not fully consistent, multiple optimal solutions are possible, and the weights can be expressed as intervals [44]. The specific steps are as follows:
Step 1: Determine the set of decision indicators. The decision maker identifies n indicators for decision making: {Z 1 , Z 2 , Z 3 , . . ., Z n }.
Step 2: Determine the best and worst indicators. The decision maker selects the best (the most desirable, preferred, or important) and the worst (the least desirable or least important) indicators from the n indicators.
Step 3: Compare the best indicator with the other indicators. The decision maker rates, on a scale of 1 to 9, the importance of the best indicator relative to each of the other indicators. The resulting best-to-others (BO) vector is expressed as A_b = (a_b1, a_b2, . . ., a_bn), where a_bj represents the importance of the best indicator b compared with indicator j. Obviously, a_bb = 1.
Step 4: Compare the other indicators with the worst indicator. The decision maker uses a scale from 1 to 9 to rate the importance of each indicator relative to the worst indicator. The resulting others-to-worst (OW) vector is expressed as A_w = (a_1w, a_2w, . . ., a_nw)^T, where a_jw represents the importance of indicator j compared with the worst indicator w. Obviously, a_ww = 1.
Step 5: Determine the optimal weights w*_1, w*_2, . . ., w*_n. The optimal weights are those for which the maximum absolute difference among |w_b/w_j − a_bj| and |w_j/w_w − a_jw| over all j is minimized. This can be expressed as the following min-max model:

min max_j { |w_b/w_j − a_bj|, |w_j/w_w − a_jw| }
s.t. ∑_j w_j = 1, w_j ≥ 0 for all j. (1)

Model (1) can be solved by converting it into the following form:

min θ
s.t. |w_b/w_j − a_bj| ≤ θ for all j,
|w_j/w_w − a_jw| ≤ θ for all j,
∑_j w_j = 1, w_j ≥ 0 for all j. (2)

For any value of θ, multiplying the first set of constraints in Model (2) by w_j and the second set by w_w yields linear constraints. The solution space of Model (2) is then the intersection of 2n − 3 linearized constraints (where n is the number of indicators and n ≥ 2), so for a sufficiently large θ the solution space is nonempty. Solving Model (2) yields the optimal weights w*_1, w*_2, . . ., w*_n and the corresponding θ*.
Definition 1: A comparison system is fully consistent when a_bj × a_jw = a_bw for all j, where a_bj is the preference of the best indicator over indicator j, a_jw is the preference of indicator j over the worst indicator, and a_bw is the preference of the best indicator over the worst indicator.
Table 2 shows the consistency index (the maximum possible value of θ) for different values of a_bw. Using the consistency index (Table 2), the consistency ratio (CR) is calculated as follows:

CR = θ* / Consistency Index. (3)

The consistency ratio lies in the interval [0, 1]. The closer the value is to 0, the higher the consistency; conversely, the closer the value is to 1, the lower the consistency.
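As a hedged illustration, the consistency check of Formula (3) can be sketched as follows. The consistency index values are the standard ones published for the BWM (Table 2 of this paper is assumed to list the same values), and the θ* value in the example call is hypothetical:

```python
# Consistency ratio for the BWM (illustrative sketch).
# The CI values below are the standard BWM consistency indices,
# indexed by a_bw (the best-to-worst score); Table 2 of this paper
# is assumed to contain the same values.
CI = {1: 0.00, 2: 0.44, 3: 1.00, 4: 1.63, 5: 2.30,
      6: 3.00, 7: 3.73, 8: 4.47, 9: 5.23}

def consistency_ratio(theta_star, a_bw):
    """CR = theta* / CI; values close to 0 indicate high consistency."""
    if a_bw == 1:
        return 0.0  # fully consistent by definition
    return theta_star / CI[a_bw]

# Hypothetical example: theta* = 0.26 obtained with a_bw = 8.
cr = consistency_ratio(0.26, 8)
```

A CR near 0 (as reported later for all ten questionnaires) indicates that the expert's pairwise comparisons are reliable.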
As mentioned above, Model (2) can produce multiple optimal solutions. If, instead of the ratio-based terms, the differences |w_b − a_bj w_j| and |w_j − a_jw w_w| are minimized, the problem can be expressed as:

min max_j { |w_b − a_bj w_j|, |w_j − a_jw w_w| }
s.t. ∑_j w_j = 1, w_j ≥ 0 for all j. (4)

Model (4) can be converted into the following linear model:

min θ^L
s.t. |w_b − a_bj w_j| ≤ θ^L for all j,
|w_j − a_jw w_w| ≤ θ^L for all j,
∑_j w_j = 1, w_j ≥ 0 for all j. (5)

Model (5) is a linear problem with a unique solution. Solving it yields the optimal weights w*_1, w*_2, . . ., w*_n and θ^L*. For this model, the closer the value of θ^L* is to 0, the higher the consistency.
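The linear model (5) can be solved with any linear programming routine. The following is a minimal sketch using `scipy.optimize.linprog`; the BO/OW vectors in the example are illustrative, not taken from the paper's questionnaires:

```python
# Sketch of the linear BWM model, Model (5): minimize theta_L subject to
# |w_b - a_bj * w_j| <= theta_L, |w_j - a_jw * w_w| <= theta_L,
# sum_j w_j = 1, w_j >= 0.  Variables are [w_1, ..., w_n, theta_L].
import numpy as np
from scipy.optimize import linprog

def bwm_linear(bo, ow, best, worst):
    """Solve the linear BWM for BO vector `bo` and OW vector `ow`."""
    n = len(bo)
    c = np.zeros(n + 1)
    c[-1] = 1.0  # objective: minimize theta_L
    A_ub, b_ub = [], []
    for j in range(n):
        for sign in (1.0, -1.0):
            # sign * (w_best - a_bj * w_j) - theta_L <= 0
            row = np.zeros(n + 1)
            row[best] += sign
            row[j] -= sign * bo[j]
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)
            # sign * (w_j - a_jw * w_worst) - theta_L <= 0
            row = np.zeros(n + 1)
            row[j] += sign
            row[worst] -= sign * ow[j]
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.append(np.ones(n), 0.0)]  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]

# Fully consistent toy example (a_bj * a_jw = a_bw = 8 for all j),
# so theta_L* should be ~0 and the weights (8/13, 4/13, 1/13).
weights, theta = bwm_linear(bo=[1, 2, 8], ow=[8, 4, 1], best=0, worst=2)
```

Because the toy comparison vectors satisfy Definition 1 exactly, the solver recovers θ^L* ≈ 0; with inconsistent expert judgments, θ^L* grows and the CR check of Formula (3) becomes informative.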

VIKOR
The VIKOR method is an effective multiple criteria decision-making tool. It is suited to decision makers who cannot, or do not know how to, clearly express preferences, or to situations with inconsistency and conflict between the evaluation criteria. The VIKOR method addresses these problems by offering decision makers acceptable compromise solutions. Chitsaz and Banihabib [45] accordingly stated that VIKOR provides a compromise ranking to decision makers based on "proximity" to the "ideal" solution.
First, the ideal solution and the negative ideal solution are defined. The ideal solution takes the best value of each evaluation criterion, and the negative ideal solution takes the worst value of each evaluation criterion. All schemes are evaluated according to each criterion function, and the ranking is performed based on proximity to the ideal solution. This method uses the L_p-metric as the aggregation function:

L_p,j = { ∑_{i=1}^{n} [ w_i (h*_i − h_ij) / (h*_i − h−_i) ]^p }^{1/p}, 1 ≤ p ≤ ∞. (6)

In the above formula, j is the scheme number; i is the evaluation criterion number; h_ij represents the performance value of the jth alternative on the ith criterion; and h*_i and h−_i represent the best value and the worst value of the ith criterion over all schemes, respectively. p is the distance parameter of the aggregation function (generally 1, 2, or ∞; this paper takes 1), n is the number of criteria, w_i represents the weight of the ith criterion, and L_p,j represents the distance from scheme j to the ideal solution.
The positive ideal solution and the negative ideal solution are calculated as follows:

h*_i = max_j h_ij for i ∈ I_1, h*_i = min_j h_ij for i ∈ I_2, (7)
h−_i = min_j h_ij for i ∈ I_1, h−_i = max_j h_ij for i ∈ I_2, (8)

where I_1 represents the set of benefit-type criteria and I_2 represents the set of cost-type criteria.
The second step is to calculate the group benefit S_j (optimal solution) and the individual regret R_j (worst solution) of the comprehensive evaluation of each scheme:

S_j = ∑_{i=1}^{n} w_i (h*_i − h_ij) / (h*_i − h−_i), (9)
R_j = max_i [ w_i (h*_i − h_ij) / (h*_i − h−_i) ], (10)

where j = 1, 2, 3, . . ., J and w_i represents the weight of the ith indicator. S_j represents the group benefit of alternative j: the smaller the value of S_j, the greater the group benefit. R_j represents the individual regret: the smaller the value of R_j, the smaller the individual regret. The third step is to calculate the benefit ratio Q_j of each scheme:

Q_j = v (S_j − S*) / (S− − S*) + (1 − v) (R_j − R*) / (R− − R*), (11)

where S* = min_j S_j, S− = max_j S_j, R* = min_j R_j, R− = max_j R_j, and v represents the coefficient of the decision mechanism. If v > 0.5, the decision is made according to the principle of benefits first; if v ≈ 0.5, the decision is made according to the principle of balanced compromise; and if v < 0.5, the decision is made according to the principle of cost supremacy. In this paper, v = 0.5; that is, the tradeoff between benefits and costs is balanced. The fourth step is to rank the alternatives according to S_j, R_j, and Q_j. The fifth step is to rank according to the value of Q_j when the following two conditions are met, with the minimum Q_j winning.
Condition 1: Acceptable advantage. Q(a) − Q(b) ≥ DQ, where b is the first-ranked scheme by Q, a is the second-ranked scheme by Q, DQ = 1/(J − 1), and J is the number of schemes. The difference between the Q values of two adjacently ranked schemes must exceed 1/(J − 1) for the higher-ranked scheme to be judged optimal. If there are more than two schemes, the first-ranked scheme is compared with the other schemes in order to determine whether it meets Condition 1.
Condition 2: Acceptable stability in decision making. After ranking according to Q, the S value of the first-ranked scheme must be ranked better than the S value of the second-ranked scheme, or its R value must be ranked better than the R value of the second-ranked scheme. If there are more than two schemes, the first-ranked scheme is compared in order with the other schemes to determine whether it meets Condition 2.
If the first-ranked scheme satisfies both Condition 1 and Condition 2, it is the optimal scheme. If Condition 2 is satisfied but Condition 1 is not, then the first-ranked scheme and all schemes whose advantage over it fails Condition 1 together form the set of compromise optimal schemes.
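To make the five steps above concrete, here is a minimal sketch following Formulas (6) to (11) with p = 1 and v = 0.5. The decision matrix and weights are illustrative (not the paper's data), and all criteria are assumed to be benefit-type:

```python
# Minimal VIKOR sketch (p = 1, v = 0.5, benefit-type criteria only).
# H and w are illustrative; each criterion is assumed to have
# h_best != h_worst so the normalization is well defined.
import numpy as np

def vikor(H, w, v=0.5):
    """H: m x n performance matrix (rows = schemes); w: criterion weights."""
    h_best = H.max(axis=0)                 # positive ideal, Formula (7)
    h_worst = H.min(axis=0)                # negative ideal, Formula (8)
    D = (h_best - H) / (h_best - h_worst)  # normalized distance to the ideal
    S = (w * D).sum(axis=1)                # group benefit, Formula (9)
    R = (w * D).max(axis=1)                # individual regret, Formula (10)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))  # Formula (11)
    return S, R, Q

H = np.array([[9.0, 8.0],    # scheme 1
              [7.0, 7.0],    # scheme 2
              [5.0, 3.0]])   # scheme 3
w = np.array([0.5, 0.5])
S, R, Q = vikor(H, w)
order = np.argsort(Q)                      # best (smallest Q) first
DQ = 1.0 / (len(H) - 1)                    # acceptable-advantage threshold
cond1 = Q[order[1]] - Q[order[0]] >= DQ    # Condition 1
```

In this toy data set, scheme 1 dominates and gets S = R = Q = 0; Condition 1 fails (Q difference 0.425 < DQ = 0.5), so schemes 1 and 2 would form a compromise set, which mirrors the IMOOC/XuetangX situation analyzed later in the paper.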

Data Analysis
This paper completed the data collection process through expert interviews and questionnaires. The researchers designed two questionnaires: the first compared the importance of the various dimensions and secondary indicators of MOOC platform quality, and the second addressed the performance evaluation of MOOC website quality. The weighting questionnaire was designed based on the BWM. The best and worst dimensions or indicators were selected among the four dimensions and among the indicators within each dimension, and then the best and worst dimensions or indicators were compared with the others. Finally, the dimension and indicator weights were calculated. The questionnaire participants were all teachers or students who had a certain understanding of MOOCs. The teachers, from middle schools, and the professors, from universities, came from the MOOC Teaching Expert Database and were creators and providers of MOOC courses. Each expert hosted more than three courses on average, had participated in the construction of other courses, and had more than 8 years of MOOC teaching experience, with rich experience in MOOC course construction and teaching. We selected students from various schools with different professional backgrounds as research objects, including one on-the-job student. They had participated in several MOOC courses during the semester. The specific information of the interviewees is shown in Table 3.
Affected by the COVID-19 pandemic, the researchers interviewed ten experts through Tencent Conference and asked them to fill out the questionnaires. Questionnaire 1 was used to evaluate the importance of the four dimensions and 22 criteria, and questionnaire 2 was used to evaluate the performance of the five MOOC platforms. Contacting the 10 experts and collecting the questionnaires took about a month, from 20 March 2020 to 16 April 2020. The specific implementation process is as follows.

Weight Calculation
The BWM model is used to calculate the weights of the four dimensions that measure the quality of MOOC sites, namely, system function, teaching resources, teaching effect, and social interaction, as well as the weights of flexibility of use, functional diversity, and the other secondary indicators. The corresponding data were obtained by issuing and collecting questionnaires. The specific process is as follows:
Step 1: Let the set of dimensions that affect the quality evaluation of a MOOC website be X, with X_i = {X 1 , X 2 , X 3 , X 4 }, and let the set of indicators be Z, with Z_i = {Z 1 , Z 2 , Z 3 , . . ., Z n }, i = 1, 2, 3, . . ., n.
Step 2: Determine the best and worst standards. The ten experts selected the best dimension or indicator and the worst dimension or indicator from the dimensions and indicators.
Step 3: Establish the evaluation scale for the comparisons. The specific scoring scale is shown in Table 4.
Step 4: Determine the preference of the best criterion over all other criteria. This paper uses the evaluation scale in Table 4 to indicate the degree of preference. The results are the best-to-others (BO) vectors.
Step 5: Determine the preference of all criteria over the worst criterion. This paper uses the evaluation scale in Table 4 to indicate the degree of preference. The results are the others-to-worst (OW) vectors.
This paper issued 10 questionnaires. The best and worst dimensions and indicators were selected by the experts, and then the best and worst dimensions or indicators were compared with the others to obtain A_b = (a_b1, a_b2, . . ., a_bn) and A_w = (a_1w, a_2w, . . ., a_nw)^T. Table 5 shows the dimensional comparison of one expert and its results.

After comparing the dimensions, the paper compares the indicators under each dimension in the same way. Finally, according to the data of each expert, the weights and consistency ratios are calculated. Table 6 shows the optimal weight of each dimension for each expert and the CRs, together with the average weights over the 10 sets. To test the consistency of the pairwise comparisons mentioned in Section 3, the CR can be used as a measure of consistency. Using the θ values in Table 6, the consistency ratio CR was obtained according to Formula (3). Obviously, the larger θ is, the higher the consistency ratio and the lower the reliability. Since the CRs listed in Table 6 are close to zero for all experts, it can be concluded that all 10 questionnaires have good consistency. That is, the questionnaires are valid, and the data have high reliability.
Solving the linear model (5) with the questionnaire data yields the dimension weights W 1 = 0.104, W 2 = 0.520, W 3 = 0.296, and W 4 = 0.079. According to the formula Global Weight = Local Weight × Dimension Weight, the weight of each dimension and indicator is calculated and shown in Table 7. From Table 7, according to the dimension weights, the dimensions can be ranked as follows: X 2 > X 3 > X 1 > X 4 . In the same way, the table shows that, among the indicators, content accuracy is the most important, and user trust and learning navigation are the least important. After deriving the weights of each dimension and indicator, the results can be used as a basis for analyzing the performance of a MOOC platform. The following section uses the VIKOR method to evaluate MOOC website performance.
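The Global Weight = Local Weight × Dimension Weight step can be sketched as follows. The dimension weights are those reported above, while the local indicator weights are hypothetical placeholders rather than the actual values in Table 7:

```python
# Global Weight = Local Weight x Dimension Weight.  The dimension
# weights are from the text above; the local weights for the teaching
# resources indicators (X 21 - X 25) are hypothetical placeholders.
dimension_w = {"X1": 0.104, "X2": 0.520, "X3": 0.296, "X4": 0.079}
local_w_X2 = {"X21": 0.30, "X22": 0.25, "X23": 0.20, "X24": 0.15, "X25": 0.10}

global_w = {ind: lw * dimension_w["X2"] for ind, lw in local_w_X2.items()}
# The global weights of a dimension's indicators sum to that
# dimension's weight, so all 22 global weights sum to 1.
```

This is why a middling local weight inside the heavily weighted teaching resources dimension (W 2 = 0.520) can still dominate the strongest indicator of the lightly weighted social interaction dimension (W 4 = 0.079).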

Case Application
This paper takes as cases five MOOC websites with a high reputation in China: MOOC.CN (V 1 ), IMOOC (V 2 ), XuetangX (V 3 ), Study.163 (V 4 ), and Open Learning (V 5 ). The evaluation scale for the website quality evaluation is shown in Table 8.
The performance evaluation process for the five websites is as follows: (1) Establish a standardized evaluation matrix, derived from the returned MOOC platform quality evaluation performance questionnaires. The number of websites evaluated is m, with evaluation websites B = {B 1 , B 2 , B 3 , . . ., B m }^T; the number of indicators evaluated is n, with evaluation indicators C = {C 1 , C 2 , C 3 , . . ., C n }^T. Matrix D contains the ratings of all indicators for all sites. Then, Formula (6) is applied.
(2) Determine the positive ideal solution and the negative ideal solution for each indicator according to Formulas (7) and (8), yielding Table 9. Due to the limited length of this paper, Table 9 lists only the positive and negative ideal solutions for the secondary indicators under the system function dimension.
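For benefit-type indicators, Formulas (7) and (8) reduce to taking the column maximum and minimum of the evaluation matrix. A minimal sketch, using hypothetical scores rather than the questionnaire data:

```python
# Positive and negative ideal solutions (Formulas (7) and (8)).
# For benefit-type indicators, the positive ideal f*_j is the column
# maximum and the negative ideal f-_j is the column minimum.

# Hypothetical scores: rows = websites V1..V5, columns = indicators.
scores = [
    [4.2, 3.8, 4.0],
    [4.8, 4.5, 4.6],
    [4.5, 4.4, 4.3],
    [4.1, 4.0, 4.2],
    [3.9, 3.7, 3.8],
]

f_pos = [max(col) for col in zip(*scores)]  # positive ideal solution
f_neg = [min(col) for col in zip(*scores)]  # negative ideal solution

print(f_pos)  # [4.8, 4.5, 4.6]
print(f_neg)  # [3.9, 3.7, 3.8]
```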
(3) The group benefit S_i, the individual regret R_i, and the comprehensive performance ratio Q_i are calculated and ranked using Formulas (9)-(11) in Section 3; the results are shown in Table 10 (note that v = 0.5). As described in Section 3, the websites are sorted by the values of Q, S, and R. The best website with the minimum Q, IMOOC (V2), is denoted b, and the second-ranked website, XuetangX (V3), is denoted a. For the best website to be accepted as the single compromise solution, Condition 1 and Condition 2 should be met. Here, the value of Q(a) − Q(b) is 0.156, which is not greater than DQ = 1/(5 − 1) = 0.25; therefore, Condition 1 is not satisfied. In addition, IMOOC (V2) is ranked first in both S and R, so it is a stable choice in the decision process and satisfies Condition 2.
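The S_i, R_i, and Q_i computation of Formulas (9)-(11), together with the acceptable-advantage check DQ = 1/(m − 1), can be sketched as follows. The score matrix and indicator weights are hypothetical placeholders, not the paper's questionnaire data, so the numerical results will not match Table 10.

```python
# VIKOR ranking step: group benefit S_i, individual regret R_i, and
# compromise index Q_i, followed by the acceptable-advantage check.

def vikor(scores, weights, v=0.5):
    """Return (S, R, Q) lists for the alternatives in `scores`."""
    f_pos = [max(col) for col in zip(*scores)]  # positive ideals
    f_neg = [min(col) for col in zip(*scores)]  # negative ideals
    S, R = [], []
    for row in scores:
        terms = [w * (fp - f) / (fp - fn)
                 for f, fp, fn, w in zip(row, f_pos, f_neg, weights)]
        S.append(sum(terms))   # group benefit (weighted sum of gaps)
        R.append(max(terms))   # individual regret (worst single gap)
    s_min, s_max = min(S), max(S)
    r_min, r_max = min(R), max(R)
    Q = [v * (s - s_min) / (s_max - s_min)
         + (1 - v) * (r - r_min) / (r_max - r_min)
         for s, r in zip(S, R)]
    return S, R, Q

scores = [  # hypothetical scores for V1..V5 on three indicators
    [4.2, 3.8, 4.0],
    [4.8, 4.5, 4.6],
    [4.5, 4.4, 4.3],
    [4.1, 4.0, 4.2],
    [3.9, 3.7, 3.8],
]
weights = [0.4, 0.35, 0.25]  # hypothetical indicator weights

S, R, Q = vikor(scores, weights)
ranked = sorted(range(len(Q)), key=lambda i: Q[i])  # best first
dq = 1 / (len(scores) - 1)                          # DQ = 0.25 for m = 5
best, second = ranked[0], ranked[1]
# Condition 1 (acceptable advantage): Q(a) - Q(b) >= DQ.
print(f"best: V{best + 1}, Condition 1 met: {Q[second] - Q[best] >= dq}")
```

If Condition 1 fails, as in the paper's data, the top alternatives together form a set of compromise solutions rather than a single winner.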
When Condition 1 is not met, the compromise solution set also includes XuetangX: since Q(a) − Q(b) = 0.156 is less than DQ = 0.25, IMOOC (V2) and XuetangX (V3) together form the set of compromise solutions. Figure 1 is obtained from the ranking results in Table 13 and can be used to judge the relative merits of the websites. When a user chooses a MOOC website for learning, considering all aspects comprehensively, IMOOC (V2) and XuetangX (V3) are the first choices, and the order of the other three is Study.163 (V4), MOOC.CN (V1), and OpenLearning (V5). As seen from the figure, this ranking is reliable.

Discussion
Based on the data collected through the indicator and performance questionnaires, the results shown above were obtained using the BWM and VIKOR methods. The results show that, among the four dimensions, teaching resources (W2 = 0.520) has the largest weight, followed by the teaching effect (W3 = 0.296), the system function (W1 = 0.104), and finally social interaction (W4 = 0.079). As can be seen from the reasons why IMOOC (V2) ranks first in the performance analysis, the rich content, extensive resources, and accurate, authoritative material of the website are important factors that make users more confident in and trusting of its courses. The teachers on IMOOC are mainly industry experts and are authoritative, so users benefit considerably from the classes. Therefore, the courses of IMOOC are often described as "lively and interesting," which also increases its competitive advantage. XuetangX ranks second only to the best website: its scores across the various dimensions and indicators are relatively even, and its overall level is good. Therefore, users are very concerned about whether the content of a website is complete and accurate and whether the teaching skills of the instructors are at a professional level; these are considered important factors when evaluating a website.
The teaching effect is a comprehensive evaluation of the curriculum. The weight of instructional design ranks second among all of the indicator weights. Instructional design includes the teaching content, the design of video courseware, the design of teaching objectives, the characteristics of the subject, and other knowledge development, which are additional points that attract users. IMOOC (V2) employs famous teachers, so its performance in the teaching effect dimension is among the best. Due to the limitations of internet technology, the performance data for the system functions are distributed evenly among the five websites. With the constant innovation of information technology, system functions need to keep improving. Compared with the traditional teaching model, online education carries certain risks; platforms should pay attention to the reliability of their systems so that users can trust and continue to use them. By strengthening the confidentiality of the platforms and the diversity of functions and other services, user satisfaction can be improved.
The results from computing and analyzing the data show that the scores of the five indicators of social interaction (user supervision, incentives, classroom interaction, offline communication, and user trust) are significantly lower (the averages of the upper bounds of the scores given by the 10 experts are G41 = 4.7, G42 = 5, G43 = 5, G44 = 4.6, and G45 = 5). This shows that MOOC websites still lack social interaction. In the dimension weight analysis, the proportion of social interaction is small; after all, users visit MOOC platforms mainly for learning and are not greatly concerned about social interaction. However, with the rapid development of networks, the authors believe that the proportion of social interaction will slowly increase. Although online education classrooms are not as rich as traditional classes, they can be expanded through after-class interactions between teachers and students and among students themselves, and by designing a simple and convenient interface so that users are willing to participate in interactions, enhancing the user experience. The social interaction performance of Study.163 (V4) is the best among the five websites: its after-class interactions include course teachers answering questions on the BBS and assigning and grading homework. The other websites, however, lack a mechanism for effective communication with teachers.
In summary, by establishing a quality evaluation index system for MOOC platforms, this paper provides MOOC developers with quality indicators related to platform construction and course design to help them better plan, design, and organize implementation, which has a positive impact on improving the teaching quality of MOOC platforms. The numbers of online self-learners, course providers, and online MOOC platforms have grown significantly in recent years, so the quality evaluation of MOOC platforms has become very important. After referring to the relevant research of the aforementioned experts, this paper constructed a MOOC platform quality evaluation index system from the four dimensions of system function, teaching resources, teaching effect, and social interaction, as well as 22 secondary indicators beneath the dimension level. The BWM was then used to calculate the weight of each dimension affecting MOOC platform quality evaluation. Finally, using VIKOR analysis, this paper evaluated the indicators of five MOOC websites and sought a reasonable evaluation based on the index system.
The research results could provide the following advantages: (i) the appropriate quality evaluation criteria for MOOC platforms can be determined, (ii) advanced models can be applied to determine the weights of the dimensions and criteria of the evaluation system, (iii) a highly reliable evaluation of MOOC platform performance can be provided, and (iv) targeted suggestions for improving MOOC platform construction based on expert judgment can be provided.The evaluation system could provide guidance for the construction of MOOC platforms.

Conclusions
This paper constructed a MOOC platform quality evaluation index system. Through analysis of the four dimensions of teaching resources, teaching effect, system function, and social interaction, together with 20 indicators such as content accuracy and teaching resource richness, the results show that the most important dimension affecting the quality evaluation of MOOC platforms is teaching resources. An empirical analysis was then conducted on five MOOC websites, and their performance rankings were obtained.
Teaching resources are the most important dimension affecting the quality of a MOOC platform. The teaching content should be accurate, attention should be paid to the quality and source of each course, and more open resources should be provided to enhance user loyalty. While improving course quality, it is also necessary to pay attention to system functions: providing targeted services for users, meeting their needs, and further maintaining and upgrading the diversity of functions and the confidentiality of the system to enhance user satisfaction. Social interaction should also be emphasized: exchanges and interactions between teachers and students over the network, as well as the construction and maintenance of forums and discussion boards, solve student problems in a timely manner and enhance the student experience.
This paper used a hybrid BWM-VIKOR model to study the quality evaluation performance of domestic MOOC platforms. The BWM relies on expert analysis and selection; thus, the data obtained are somewhat subjective and may be slightly biased. The 10 experts who completed the questionnaire in this paper were mostly students and teachers; if scholars who have performed research in the field of MOOCs were interviewed instead, the data obtained would be more authoritative. Therefore, the questionnaire used in this paper has certain limitations. In addition, the five websites in the performance evaluation are limited to domestic platforms, and the constructed MOOC platform quality evaluation system may not be applicable to international platforms. Subsequent research can build a more complete dimension and indicator system and select a more comprehensive set of MOOC platforms for performance calculation; the results would then be more reliable and accurate. It is therefore recommended that follow-up scholars study in this direction.

Table 1. Dimensions and indicators of MOOC platform quality evaluation.

Table 3. Basic information of experts.

Table 6. Comparison of dimension weights.

Table 7. Weights and rankings of dimensions and indicators.

Table 9. Positive and negative ideal solutions for evaluation indicators.