Article

Confirmatory Factor Analysis of a Questionnaire for Evaluating Online Training in the Workplace

by Javier Rodríguez-Santero, Juan Jesús Torres-Gordillo * and Javier Gil-Flores

Department of Educational Research Methods and Diagnostics, Educational Sciences Faculty, University of Seville, 41013 Seville, Spain

* Author to whom correspondence should be addressed.
Sustainability 2020, 12(11), 4629; https://doi.org/10.3390/su12114629
Submission received: 5 May 2020 / Revised: 30 May 2020 / Accepted: 3 June 2020 / Published: 5 June 2020
(This article belongs to the Section Sustainable Education and Approaches)

Abstract

(1) Background: The objective of this research is to analyse the psychometric characteristics of a reduced version of the Questionnaire to Evaluate Online Training in the Workplace (CEFOAL), developed to evaluate the impact of online training processes in terms of satisfaction with the lived experience. (2) Methods: The instrument has a structure of five latent factors, obtained through exploratory factor analysis (EFA): pedagogical design, tutor performance, virtual environment design, timing, and transfer of learning. The questionnaire was administered to a sample of 471 participants several months after they took courses on occupational health and the environment, provided through the e-learning platform of ISTAS (Trade Union Institute for Labour, Environment and Health; Spain). Subsequently, confirmatory factor analysis (CFA) was performed using the maximum likelihood method. (3) Results: The five factors explained 71.58% of the total variance. Reliability, calculated with Cronbach’s alpha, reached an overall value greater than 0.90 (α = 0.95). (4) Conclusions: This valid and reliable questionnaire, which incorporates a dimension measuring the transfer of learning to the job, can be applied in the evaluation of online training processes.

1. Introduction

Today, e-learning is used in many educational contexts. The related scientific research has focused on establishing which factors are important for the effective management and implementation of online training [1,2,3,4,5,6,7]. One of the most fruitful lines of research on online learning focuses on evaluating the impact of training on satisfaction with the lived experience [3]. This approach to online training evaluation is employed in this study. We are aware that when considering student satisfaction with online learning, we are addressing a complex and uncertain set of circumstances, because online courses involve multiple commitments [8]. A negative perception can result in unfavourable learning outcomes, including decreased motivation and persistence [3], and, consequently, in dissatisfaction with online training.
Satisfaction can be determined by the degree of congruence between users’ previous expectations of their online training experiences and the results obtained at the end of the training [9]. Meta-analyses by Allen et al. [1], Williams [6], and Kauffman [3] clarify the factors that predict satisfaction with online training. Allen et al. [1] reviewed 450 studies, basing their analysis and subsequent publication on 24 studies published between 1989 and 1999. To be included in their meta-analysis, a study had to compare student satisfaction in a distance education course with student satisfaction in a course conducted using traditional classroom methods. The researchers omitted case studies on distance education that did not use a control group, that did not provide sufficient statistical information to calculate effect sizes, that focused on teacher and administrator attitudes, and that examined persistence, although the latter may be an indicator of satisfaction, as Croxton [10] and Kranzow [11] conclude. The interaction level in live courses and the communication channel employed (i.e., a preference for audiovisual interaction over written instructions) proved to be good indicators of satisfaction. Williams [6] compared 25 studies published between 1990 and 2003 that examined the performance of students in the health professions. The effectiveness of and satisfaction with distance education in healthcare courses were examined according to the types of students involved, the teaching models used, and the components of the implemented didactic design. Real-time, synchronous teaching models offered a greater degree of interaction and tutoring by teachers. The greatest influence on satisfaction levels was the type of interaction and information offered (i.e., components of the didactic design).
More recently, Kauffman [3] performed a narrative synthesis of over 25 studies published between 2001 and 2013. The analysis used three categories: learning outcomes, instructional design, and learner characteristics. Students were satisfied with online courses characterized as structured, interactive (i.e., using a constructivist instructional design), relevant (i.e., practically significant), and having tutors who facilitated interaction and feedback. The study concluded that the factors most favourable to student satisfaction with online courses were the suitability of didactic methods, tutor/student support, and course structure/design [12].
The results of these meta-analyses indicate that distance learning does not diminish student satisfaction compared with face-to-face classes [1] and that such satisfaction may even be greater [3]. There is even a small positive effect on the performance of distance students compared with classroom students [6]. The strongest explanatory factors concern what we term the pedagogical design of the course, tutorial performance, and the design of the virtual environment; we leave the timing indicator for more thorough future assessment [3].
In the scientific literature, three prototypical groups are recognized: a higher proportion of students who are satisfied with their experience, a smaller proportion who are genuinely dissatisfied, and ambivalent students who simultaneously express positive and negative feelings regarding their online experience [8]. An example of this ambivalence is when a student appreciates the convenience of online learning but misses face-to-face interaction with the tutor. Thus, when students provide evaluations that are closer to the extremes of the satisfaction scale, their course assessments tend to be more intuitive. As their level of ambivalence increases, they become more analytical and specific, making separate and independent judgements regarding course quality [8].
Online students who receive support from tutors and peers and who maintain a high degree of communication with both of these actors tend to display higher degrees of satisfaction with online courses [13,14,15,16,17]. Swan [13] found that clarity of design, interaction with the tutor, and active discussion among participants are three factors that significantly influence student satisfaction and perceived learning in online asynchronous courses. Higher levels of personal activity in the course and perceived interaction with tutors and peers result in greater satisfaction and perceived learning. The importance of interaction among participants is also a key factor in Arbaugh’s [18] research. In a study by Swan [13], most students believed that their levels of interaction with course materials, their peers, and the tutor were greater than in traditional face-to-face courses.
Khalid [19] used Garrison, Anderson, and Archer’s Community of Inquiry (CoI) model [20,21], which explains success in online teaching-learning processes, to analyse satisfaction with online courses. He found that only teaching presence and social presence were predictors of course satisfaction. Unlike in Rubin et al. [15] and Bulu [22], cognitive presence was not found to be a significant predictor of satisfaction. Khalid’s study was conducted with Malaysian students, while the other two studies were conducted with American and Turkish students, respectively. It appears that for Malaysian students, the level of cognitive presence does not influence course satisfaction.
The findings of Artino [23] indicate that the satisfaction of students with online courses can be explained in part by their beliefs and motivational attitudes towards learning. Artino’s regression model explained 54% of the variance with seven predictors: four control variables (including experience with technology and online learning) and three components of academic self-regulation (perception of instruction quality, self-efficacy, and the attributed value of the task).
Traditionally, the transfer of learning has been considered the ultimate goal of any training process [24]. By transfer of learning, we understand the productive application of new knowledge, skills, and attitudes to the field of work [25,26,27,28,29,30]. Consequently, the motivation to transfer is defined as the conscious desire to use what is learned in training in the workplace [31,32,33,34].
Transfer factors are classified into three major groups by most studies: personal factors, training design factors, and workplace and organizational factors [35]. Satisfaction and transfer are reinforced when online training is perceived by workers as a flexible and high-quality process [7] with adequate feedback channels that help define learning objectives, improve results, and self-regulate learning [36].
In studies conducted to evaluate satisfaction with online training, satisfaction with the training was correlated with transfer [37,38,39]. The educator’s challenge is not only to impart knowledge that the participants wish to learn but also to have participants react favourably to their training [40]. Singleton [41] evaluated a workplace training programme whose course design was based on the CoI model presented above. Her results also indicated that the CoI model (cognitive, social, and teaching presences) is a good basis for designing online courses for the workplace, with one variation: she divided the teaching presence in two, creating a design presence to distinguish instructional design from content delivery.
Much of what is learned through the activities performed in the workplace is learned unintentionally and falls under what is known as workplace learning [42]. Moore and Klein [42] concluded that practitioners use various methods to encourage workplace learning, such as sharing knowledge, chatting and asking questions, encouraging or promoting informal learning activities, and creating and curating materials and learning objects to support their teams’ informal learning. However, training in organizations mostly takes the form of formal training [42].
In addition, Riley [43] obtained a negative correlation between specific job satisfaction variables and the overall wellness of counsellor educators. She suggested that paying too much attention to one area, such as job satisfaction, could negatively affect overall wellness.
A variety of instruments are used to measure satisfaction with online training courses. Elliott and Shin [44] propose an instrument with 20 items. They start by evaluating satisfaction with online experiences through a single overall assessment item included at the end of a questionnaire. Such an assessment tool is of limited value because it does not always reflect satisfaction or dissatisfaction with all the elements involved in online training (e.g., tutoring, didactic design, communication, platform use, administrative management of the training). If a student experienced a problem with one aspect of the course (for example, an inability to access the course platform for two days due to a technical failure), he or she could evaluate the entire course as unsatisfactory based on that attribute alone. In presenting their questionnaire as an alternative to such single-item, end-of-questionnaire assessments, Elliott and Shin conclude that the five most critical factors affecting student satisfaction are the value of course content, the registration process, teaching excellence, the opportunity to take desired classes, and the student placement rate.
Arbaugh [18] used a Likert-type questionnaire with responses ranging from 1 (strongly disagree) to 7 (strongly agree). The questionnaire consisted of seven scales: usefulness, ease of use, course flexibility, programme flexibility, difficulty of interaction, performance of the tutor in the interaction, and use of the course’s website, with student satisfaction being the dependent variable. Similarly, Swan [13] applied a four-choice Likert-type response instrument with five scales: course satisfaction, perceived learning, personal activity in the course, perceived interaction with the tutor, and perceived interaction with peers. Lee, Srinivasan, Trail, Lewis, and López [14] designed a questionnaire on perceptions of support and satisfaction with an online course that included the following dimensions: instructional support, peer support, technical support, and course satisfaction. Three of the dimensions were positively correlated with satisfaction with the course.
One of the most prominent studies of student satisfaction with virtual courses is by Sun, Tsai, Finger, Chen, and Yeh [45]. This study revealed seven critical factors that affect students’ perceived satisfaction with e-learning: the student’s anxiety about the computer, the e-learning tutor’s attitude, course flexibility, course quality, perceived usefulness, perceived ease of use, and diversity in assessments. Zambrano [7] replicated the study by Sun et al. [45] in a Spanish-speaking context. He obtained similar results, noting that the factors most strongly related to student satisfaction were course flexibility and course quality, whereas anxiety did not correlate significantly with satisfaction. Similarly, Arbaugh [18] found that course flexibility and the ability to develop an interactive environment played stronger roles in student satisfaction than ease or frequency of use of the environment. However, satisfaction with the learning environment was found to affect satisfaction with the course [15].
Focusing on transfer to the workplace, the Questionnaire on Work and Learning Habits for Future Professionals, validated in the Spanish context [46], offers 47 items and four dimensions: self-perception, information management, learning process management, and communication. Its focus, however, is more on the trainees’ construction of Personal Learning Environments [47].
Torres-Gordillo and Cobos-Sanchiz [48] presented the Questionnaire to Evaluate Online Training in the Workplace (CEFOAL). The instrument is adapted to the socioeconomic and cultural index (ISEC) of the Spanish context. Its design was based on a literature review, from which a matrix of the main elements of the didactic process of online training was derived [49,50,51]. The results of an exploratory factor analysis (EFA) of the instrument by the authors of this study [5] revealed five factors: pedagogical design, tutor performance, virtual environment design, timing, and transfer of learning. Pedagogical design includes items related to the teaching-learning process. Tutor performance focuses on student-tutor interaction. Virtual environment design refers to the structure and resources offered by the platform for online training. Timing refers to the adequacy of the time provided during training. Finally, transfer of learning is understood as the application in the workplace of the knowledge acquired in the training.
In Spain, given the growth of online training over the last two decades, there is a need to measure satisfaction with and the impact of online training [5,48], and few instruments assess what the CEFOAL measures. Other instruments cover only partial aspects of what the CEFOAL intends; the CEFOAL integrates satisfaction, impact, and transfer of learning in the workplace for online courses. However, its factor structure had not yet been confirmed. Therefore, the aim of this study is to validate the factor structure of the CEFOAL through confirmatory factor analysis (CFA), providing evidence to support the validity of the dimensions established by its authors or, where appropriate, proposing an alternative structure. The problem is relevant because simplifying the instrument yields a version that is easier to apply and more likely to be completed by training-course participants, without excessive loss of information about the main aspects assessed. The added value of this research therefore lies in offering a validated and confirmed instrument that can be applied to any online training course when three important elements are to be determined: satisfaction, impact, and transfer of learning in the workplace. Furthermore, this research responds to the demand of most educational institutions today to evaluate their training processes and the results achieved.

2. Materials and Methods

2.1. Participants

The instrument, which was developed over a year through the ISTAS (Trade Union Institute for Labour, Environment and Health) e-learning platform, was administered to a sample of 471 participants in ISTAS-taught online courses. Under the assumptions of simple random sampling, this sample size allows for 95% confidence and an error within ±4%, assuming P = Q = 0.5 (the maximum-variance case), given that the student population that received the online training consisted of 1769 subjects.
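As a check, the reported margin of error can be reproduced from these figures; the following is a minimal sketch in Python (the finite population correction is the standard formula, and the numbers are those given above):

```python
# Margin of error under simple random sampling with a finite population:
# a check of the sampling figures reported in this section.
import math

N = 1769     # population of trainees
n = 471      # achieved sample
z = 1.96     # critical value for 95% confidence
P = Q = 0.5  # maximum-variance assumption (P = Q)

# Standard error with the finite population correction
e = z * math.sqrt((P * Q / n) * (N - n) / (N - 1))
print(f"margin of error = {e:.3%}")  # about 3.9%, within the stated 4%
```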
Since 2000, the ISTAS Foundation, based in Valencia (Spain), has provided online training courses on occupational health and the environment for the Workers Commission trade union throughout Spain. The ISTAS Foundation is an autonomous foundation with the general objective of promoting social progress to improve working conditions, protect the environment, and support worker health.
Regarding the demographic characteristics of the sample, 54.1% were men, and 45.9% were women, with a mean age of approximately 40 years (mean: 40.5; standard deviation: 7.9).

2.2. Instrument

The analysed instrument is the Questionnaire to Evaluate Online Training in the Workplace (CEFOAL), designed by Torres-Gordillo and Cobos-Sanchiz [48]. The CEFOAL consists of 24 items designed to gather information on student satisfaction with online courses. Each item offers a statement in response to which the participant expresses his or her degree of agreement, with options ranging from 1 (strongly disagree with the item content) to 4 (strongly agree with the item content). The 24 items of the questionnaire are grouped in five dimensions, which were obtained using EFA: pedagogical design, tutor performance, virtual environment design, timing, and transfer of learning (Table 1). The instrument was developed with the intention that it could be used to evaluate any online training course.

2.3. Process

The assessment scale was completed by 471 subjects, who received a link to the electronic version of the original instrument via email several months after the conclusion of their courses. All the research participants were adults and understood both the nature of the study and the conditions of their participation. Participation was voluntary and complied with the rules of informed consent. This study followed the internal regulations for the social sciences of the Ethical Committee on Experimentation of the University of Seville (Spain).

2.4. Data Analysis

We used Bartlett’s sphericity test to rule out the possibility that the item correlation matrix is an identity matrix, which would discourage the use of factor analysis, and the Kaiser-Meyer-Olkin (KMO) measure to verify sampling adequacy. The degree of deviation of the scores from a normal distribution was analysed by examining asymmetry and kurtosis.
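For illustration, both checks can be run with the factor_analyzer Python package; this is a sketch, and the data file name and item column names (item1..item24) are assumptions:

```python
# Feasibility checks for factor analysis: Bartlett's sphericity test and
# the KMO sampling adequacy measure, via the factor_analyzer package.
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical file: one row per respondent, columns item1..item24
responses = pd.read_csv("cefoal_items.csv")

chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)

print(f"Bartlett: chi2 = {chi_square:.3f}, p = {p_value:.4f}")
print(f"KMO = {kmo_overall:.3f}")  # values near 1 support factorability
```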
To confirm the construct validity of the instrument, i.e., the factor structure based on the five latent factors, CFA was performed using the maximum likelihood method. We first tested a unifactor model as a null hypothesis, according to which there is a single factor on which all the items load. Rejecting this model for lack of goodness of fit would justify investigating models with more factors. The second model includes the five components: each item loads onto only one latent variable, the factors covary, and the error terms are not correlated. In addition to the five common factors, a third model includes correlations between errors (EC), but only for the four item pairs with the highest modification indices (item2-item3, item5-item6, item6-item7, and item17-item18).
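For readers wishing to reproduce this step, the final model can be written in lavaan-style syntax; below is a sketch using the Python package semopy, where the data file name, column names, and factor labels are our own assumptions and the item-factor assignment follows Table 1:

```python
# CFA of the five-factor model with error covariances (EC), estimated by
# maximum likelihood with semopy (column names item1..item24 assumed).
import pandas as pd
import semopy

data = pd.read_csv("cefoal_items.csv")  # hypothetical data file

model_desc = """
PedagogicalDesign  =~ item1 + item2 + item3 + item4 + item5 + item6 + item7 + item9
TutorPerformance   =~ item13 + item14 + item15 + item16
VirtualEnvironment =~ item17 + item18 + item19 + item20
Timing             =~ item10 + item11 + item12
Transfer           =~ item8 + item21 + item22 + item23 + item24
# error covariances for the four item pairs with the highest
# modification indices
item2 ~~ item3
item5 ~~ item6
item6 ~~ item7
item17 ~~ item18
"""

model = semopy.Model(model_desc)
model.fit(data)                    # maximum likelihood estimation
print(semopy.calc_stats(model).T)  # chi2, df, CFI, GFI, RMSEA, ...
```

Dropping the four `~~` lines gives the second model, and a single-factor description gives the null model, so the three fits can be compared as in Table 3.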
Figure 1 graphically shows the structure of the final model (five EC factors).
Based on the modification indices, the significance of the parameters associated with covariances among unique factors (error terms) was also evaluated. To evaluate the goodness of fit of each of the three models, the ratio between χ2 and degrees of freedom, the comparative fit index (CFI), the goodness-of-fit index (GFI), the root mean square residual (RMR), and the root mean square error of approximation (RMSEA) were considered, following the criteria for goodness of fit proposed by Boomsma [52]. Standardized correlations between factors and between variables and factors were also studied to confirm, on the one hand, the construct validity of the instrument and, on the other hand, its discriminant validity. For the latter, discriminant validity was supported when each correlation between latent variables, allowing for measurement error (±2 times the standard error), remained below unity.
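These cutoffs amount to a simple decision rule; the sketch below encodes the thresholds just described and applies them to the values later reported for the final model in Table 3:

```python
# Decision rule for goodness of fit, following the cutoffs used in this
# study (chi2/df < 3, CFI and GFI > 0.900, RMR < 0.050, RMSEA < 0.080).
def acceptable_fit(chi2_df: float, cfi: float, gfi: float,
                   rmr: float, rmsea: float) -> bool:
    return (chi2_df < 3.0
            and cfi > 0.900
            and gfi > 0.900
            and rmr < 0.050
            and rmsea < 0.080)  # values near 0.050 are optimal

# Indices reported for the five-factor model with EC (Table 3)
print(acceptable_fit(2.272, 0.962, 0.911, 0.018, 0.052))  # True
```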
We used Cronbach’s alpha values to estimate the reliability of the overall instrument and of each of the considered dimensions.
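Cronbach’s alpha requires no specialized software; the following is a minimal sketch with pandas, grouping items by subscale as in Table 1 (column names are assumed):

```python
# Cronbach's alpha for the overall instrument and for each subscale
# (item groupings follow Table 1; column names item1..item24 assumed).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

responses = pd.read_csv("cefoal_items.csv")  # hypothetical data file
subscales = {
    "Pedagogical design": [1, 2, 3, 4, 5, 6, 7, 9],
    "Tutor performance": [13, 14, 15, 16],
    "Virtual environment design": [17, 18, 19, 20],
    "Timing": [10, 11, 12],
    "Transfer of learning": [8, 21, 22, 23, 24],
}
print(f"overall alpha = {cronbach_alpha(responses):.2f}")
for name, numbers in subscales.items():
    cols = [f"item{i}" for i in numbers]
    print(f"{name}: {cronbach_alpha(responses[cols]):.2f}")
```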

3. Results

3.1. Item Description

Table 2 presents the central tendency and distribution shape statistics for the items that compose the evaluation instrument. According to the obtained values, the asymmetry of the items is negative but in no case falls below −1; for 70% of the items, the absolute value of the asymmetry is less than 0.6. These results suggest the absence of significant deviations from a normal distribution. Regarding kurtosis, only one item exceeds the limits of the interval [−2, 2]. Multivariate kurtosis, calculated from Mardia’s coefficient, was 211.18. Based on the criterion proposed by Bollen [53], this value is compatible with the multinormality of the set of observed variables because it is less than p·(p + 2), where p is the number of variables.
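Mardia’s coefficient and Bollen’s criterion can be computed directly; the sketch below, with numpy, assumes X is the n × p matrix of item responses:

```python
# Mardia's multivariate kurtosis b_{2,p} and Bollen's criterion: the
# data are treated as compatible with multinormality when b_{2,p} is
# below p * (p + 2).
import numpy as np

def mardia_kurtosis(X: np.ndarray) -> float:
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    # Squared Mahalanobis distance of each observation
    d2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)
    return float((d2 ** 2).mean())

def bollen_compatible(b2p: float, p: int) -> bool:
    return b2p < p * (p + 2)

# With p = 24 items, p*(p+2) = 624; the reported 211.18 passes.
print(bollen_compatible(211.18, 24))  # True
```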

3.2. Factor Structure

To ensure the feasibility of the factor analysis, the correlation matrix was evaluated. Bartlett’s sphericity test (χ2 = 8186.023; df = 276; p < 0.001) confirmed that the matrix was not an identity matrix, while the Kaiser-Meyer-Olkin sampling adequacy measure (KMO = 0.945) attained a value close to 1. Both results indicate that it is possible to extract factors from the matrix of observed correlations.
The results (Table 3) indicate a lack of fit for the unifactor model, enabling the unidimensionality of the scale to be rejected and the existence of differentiated measures for different factors to be proposed. Based on criteria provided by various authors [54,55,56], goodness of fit is best for the final five-factor model with EC. In this model, the ratio between χ2 and degrees of freedom is 2.272, a value that does not exceed the limit of 3, indicating a good fit between the proposed model and the observed data. The CFI comfortably exceeds the 0.900 level, above which the fit can be considered acceptable. Among the three models, only the final model reaches a GFI above 0.9 (GFI = 0.911); the five-factor model without EC comes close (GFI = 0.873), while the unifactor model falls well below this threshold (GFI = 0.621). The RMR remains below 0.050, also indicating a better fit in the final model. Finally, an RMSEA below 0.080 is considered acceptable, whereas a value closer to 0.050 is considered optimal. According to these indices, the five-factor model with EC provides the best fit.
Table 4 presents the standardized coefficients of correlation between factors and between variables and factors. The resulting factor loadings for each variable reached high values. With the exception of three items from the transfer of learning subscale, the indicators have loadings that equal or exceed 0.70. All indicators were statistically significant, with p < 0.01. The lowest loading, 0.592, is for item 21 (The lessons learned in the course seemed easy to apply to my daily practice) in the transfer of learning subscale. The highest, 0.958, is recorded for item 11 (The time scheduled in the course for working on course content was adequate) in the timing subscale. The obtained results support the five-factor structure proposed for the instrument and thus provide strong evidence of the construct validity of the EC model.
The correlations between the factors (Table 5) reach moderate values, ranging between 0.481 (transfer of learning with timing) and 0.733 (virtual environment design with tutor performance). The absence of correlations between factors close to 1 rules out the possibility that two factors represent the same dimension. This outcome supports the discriminant validity of the instrument, whose dimensions are sufficiently distinct.
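The discriminant validity rule described in Section 2.4 reduces to checking that each inter-factor correlation, widened by twice its standard error, stays below unity; in the sketch below the standard error is purely illustrative, since the factor-correlation standard errors are not reported here:

```python
# Discriminant validity check: the correlation between two factors,
# plus two standard errors, should not reach 1.
def distinct_factors(r: float, se: float) -> bool:
    return r + 2.0 * se < 1.0

# Highest correlation in Table 5, with an illustrative standard error
print(distinct_factors(0.733, 0.04))  # True under this assumption
```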

3.3. Reliability

The reliability of the instrument (Table 6), estimated from its internal consistency, is excellent, with an overall Cronbach’s alpha greater than 0.90. The subscale values range from 0.89 to 0.93, with the exception of the transfer of learning scale, which, with α = 0.78, remains at an acceptable level and very close to what could be considered good [57].

4. Discussion and Conclusions

An important issue related to e-learning is how to verify whether the online training experienced by users meets their expectations. In addressing this issue, we confirmed the factor structure by CFA. Thus, the article provides a valid and reliable tool for evaluating online courses in the work environment.
In this paper, we reviewed different instruments used to evaluate student satisfaction with online training [7,13,14,15,16,17,18,19,23,29,35,39,46]. The instrument we present contributes to our scientific field by addressing relevant factors, such as pedagogical design, tutor performance, timing, virtual environment design, and transfer of learning, that are regarded in the scientific literature as determining levels of satisfaction with online courses. This instrument can be used to evaluate any online training process.
In comparison with the instruments reviewed in this article [7,13,14,15,16,17,18,19,23,29], and especially those focused on the Spanish context [35,39,46], the CEFOAL integrates new factors in a single instrument, with high loadings on each factor, which makes it an innovative instrument. Unlike the Questionnaire of Transfer Factors [35] and the FET (Factors for the Evaluation of Transfer) model [38,39], which focus only on the transfer of learning, the CEFOAL is a more complete instrument for the evaluation of online training in any context. The questionnaire by Feixas et al. [35] focuses exclusively on the university environment. With respect to pedagogical design, the most significant improvements concern the fulfilment of objectives, their correspondence with the contents, and the coordination between the different content sections.
Where the CEFOAL truly attains high values, however, is in tutor performance, virtual environment design, and timing, which the other instruments do not consider. Although we focus on e-learning and virtual environments in which independent learning (involving, e.g., self-regulation, self-motivation, time management, and multitasking) plays an important role [3], tutoring remains a primary element in online training. In addition, online feedback can be provided quickly and transmitted through agile channels [3,36]. The CEFOAL yields very high values for the items on communication channels and on the resources offered to manage learners’ self-learning.
Another contribution of our instrument is its incorporation of a factor related to timing. One of the learner complaints most often mentioned in the scientific literature relates to frustration with poor design [3], which demands extra hours of dedication beyond what the teachers or training designers planned. Measuring the time dedicated to the course through the timing factor is an added value of the instrument. Timing was not incorporated into the reviewed instruments, nor was it flagged as necessary to include for further study [3]. Thus, the inclusion of this factor in evaluations of online training is an important contribution.
In addition, the instruments analysed in our literature review were developed mainly with samples of students from different academic levels, particularly the bachelor’s and master’s degree levels. Our instrument also incorporates the factor of learning transfer to the job. This feature makes it valuable in the context of non-formal training, unlike other instruments aimed only at university training [35]. Our instrument can be used to evaluate the satisfaction of workers with continuous online training received in institutions and companies. The design of training and learning (pedagogical design) appears to be a factor related to transfer, a result also found by Feixas et al. [35], and the correlation between these factors was among the highest observed (0.727). Another important correlation is between transfer of learning and the design of the virtual environment, which reflects how sensitive students are to the online learning environment [15]. Therefore, the incorporation of transfer of learning, on the same level as instruments devoted specifically to it [38,39], is a prominent feature of an instrument that also assesses satisfaction and training impact with acceptable values.
Regarding this study’s limitations, the collected data were drawn from assessments provided by participants in the studied training processes. We were unable to test the transfer factor in situ, a limitation that Quesada-Pallarès [33] notes, observing the temporal and economic cost of studying learning transfer and the difficulties involved in gaining access to companies. The difficulty of collecting data in situ, a common problem for the scientific community, is an undesirable handicap that substantially affects socio-educational research in general.
With these results and with the current increase in online courses in companies and training institutions, the need for online evaluation is more visible than ever. What a few years ago was perceived as a limitation of being online is today being overcome by the need to incorporate online evaluation in all training institutions and companies. Companies themselves analyse their investment in training when they have evidence that learning is transferred to the workplace [58]. In such environments, this instrument would be very valuable for improving training actions and for strengthening companies’ confidence that putting training into practice is profitable, both for economic improvement and for improving products. Both universities and other institutions that offer on-the-job training can consider applying the CEFOAL to determine satisfaction, impact, and transfer of learning.
This research should continue with more studies using different participant samples (e.g., students training in different topics), implementing varied didactic designs, organizing training times in different ways, or involving more tutoring (thereby developing new tutoring methods). Additionally, how feedback quality affects satisfaction represents a further topic for consideration. Input from workers regarding their satisfaction with the transfer of learning to their jobs can also be assessed. More research is also required to determine whether the use of technology and online training meets the expectations of students with disabilities [3]. As the authors contend, an important challenge for institutions is to design courses to meet the needs and expectations of all students.

Author Contributions

Conceptualization, J.R.-S., J.J.T.-G. and J.G.-F.; data curation, J.G.-F.; formal analysis, J.R.-S., J.J.T.-G. and J.G.-F.; funding acquisition, J.J.T.-G.; investigation, J.R.-S. and J.J.T.-G.; methodology, J.R.-S., J.J.T.-G. and J.G.-F.; project administration, J.J.T.-G.; resources, J.J.T.-G.; software, J.R.-S.; supervision, J.J.T.-G.; validation, J.R.-S. and J.G.-F.; writing—original draft, J.R.-S., J.J.T.-G. and J.G.-F.; writing—review and editing, J.R.-S., J.J.T.-G. and J.G.-F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the ISTAS (Trade Union Institute for Labour, Environment and Health) Foundation by private grant. The APC was funded by the authors.

Acknowledgments

This work was supported by the ISTAS Foundation. Any opinions, findings, or conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of the foundation. ISTAS had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors thank ISTAS for providing access to the facilities in which the research was performed.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Allen, M.; Bourhis, J.; Burrell, N.; Mabry, E. Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. Am. J. Dist. Educ. 2002, 16, 83–97. [Google Scholar] [CrossRef]
  2. Joo, Y.J.; Lim, K.Y.; Kim, S.M. A model for predicting learning flow and achievement in corporate e-Learning. Educ. Technol. Soc. 2012, 15, 313–325. [Google Scholar]
  3. Kauffman, H. A review of predictive factors of student success in and satisfaction with online learning. Res. Learn. Technol. 2015, 23, 1–13. [Google Scholar] [CrossRef] [Green Version]
  4. Moore, M.G. (Ed.) Handbook of Distance Education, 3rd ed.; Routledge: New York, NY, USA, 2013. [Google Scholar]
  5. Rodríguez-Santero, J.; Torres-Gordillo, J.J. La evaluación de cursos de formación online: El caso ISTAS (Evaluation of Online Training. ISTAS Case). Rev. Educ. Distancia 2016, 49. [Google Scholar] [CrossRef]
  6. Williams, S.L. The effectiveness of distance education in allied health science programs: A meta-analysis of outcomes. Am. J. Dist. Educ. 2006, 20, 127–141. [Google Scholar] [CrossRef]
  7. Zambrano, J. Factores predictores de la satisfacción de estudiantes de cursos virtuales (Prediction factors of student satisfaction in online courses). Rev. Iberoam. Educ. Distancia 2016, 19. [Google Scholar] [CrossRef] [Green Version]
  8. Dziuban, C.; Moskal, P.; Kramer, L.; Thompson, J. Student satisfaction with online learning in the presence of ambivalence: Looking for the will-o’-the-wisp. Internet High. Educ. 2013, 17, 1–8. [Google Scholar] [CrossRef]
  9. Allen, M.; Omori, K.; Burrell, N.; Mabry, E.; Timmerman, E. Satisfaction with distance education. In Handbook of Distance Education, 3rd ed.; Moore, M.G., Ed.; Routledge: New York, NY, USA, 2013; pp. 143–154. [Google Scholar]
  10. Croxton, R.A. The role of interactivity in student satisfaction and persistence in online learning. MERLOT J. Online Learn. Teach. 2014, 10, 314–324. [Google Scholar]
  11. Kranzow, J. Faculty leadership in online education: Structuring courses to impact student satisfaction and persistence. MERLOT J. Online Learn. Teach. 2013, 9, 131–139. [Google Scholar]
  12. Dabbagh, N. The online learner: Characteristics and pedagogical implications. Contemp. Issues Technol. Teach. Educ. 2007, 7, 217–226. [Google Scholar]
  13. Swan, K. Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Educ. 2001, 22, 306–331. [Google Scholar] [CrossRef]
  14. Lee, S.J.; Srinivasan, S.; Trail, T.; Lewis, D.; Lopez, S. Examining the relationship among student perception of support, course satisfaction, and learning outcomes in online learning. Internet High. Educ. 2011, 14, 158–163. [Google Scholar] [CrossRef]
  15. Rubin, B.; Fernandes, R.; Avgerinou, M.D. The effects of technology on the Community of Inquiry and satisfaction with online courses. Internet High. Educ. 2013, 17, 48–57. [Google Scholar] [CrossRef]
  16. Khalid, M.; Quick, D. Teaching presence influencing online students’ course satisfaction at an institution of higher education. Int. Educ. Stud. 2016, 9, 62–70. [Google Scholar] [CrossRef]
  17. Keeler, L.C. Student Satisfaction and Types of Interaction in Distance Education Courses. Ph.D. Thesis, Colorado State University, Fort Collins, CO, USA, 2006. Retrieved from ProQuest Dissertations & Theses: Full text, Order No. 3233345. Available online: https://search.proquest.com/docview/305344216 (accessed on 4 March 2020).
  18. Arbaugh, J.B. Virtual classroom characteristics and student satisfaction with Internet-based MBA courses. J. Manag. Educ. 2000, 24, 32–54. [Google Scholar] [CrossRef]
  19. Khalid, M. Factors Affecting Course Satisfaction of Online Malaysian University Students. Ph.D. Thesis, Colorado State University, Fort Collins, CO, USA, 2014. Available online: https://dspace.library.colostate.edu/bitstream/handle/10217/88444/Khalid_colostate_0053A_12779.pdf (accessed on 4 March 2020).
  20. Garrison, D.R.; Anderson, T.; Archer, W. Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet High. Educ. 2000, 2, 87–105. [Google Scholar] [CrossRef] [Green Version]
  21. Garrison, D.R.; Anderson, T.; Archer, W. The first decade of the community of inquiry framework: A retrospective. Internet High. Educ. 2010, 13, 5–9. [Google Scholar] [CrossRef]
  22. Bulu, S.T. Place presence, social presence, co-presence, and satisfaction in virtual worlds. Comput. Educ. 2012, 58, 154–161. [Google Scholar] [CrossRef]
  23. Artino, A.R. Motivational beliefs and perceptions of instructional quality: Predicting satisfaction with online training. J. Comput. Assist. Learn. 2008, 24, 260–270. [Google Scholar] [CrossRef]
  24. McKeough, A.; Lupart, J.; Marini, A. (Eds.) Teaching for Transfer: Fostering Generalization in Learning; Erlbaum Associates: Mahwah, NJ, USA, 1995. [Google Scholar]
  25. Baldwin, T.T.; Ford, J.K. Transfer of training: A review and directions for future research. Pers. Psychol. 1988, 41, 63–105. [Google Scholar] [CrossRef]
  26. De Grip, A.; Sauermann, J. The effect of training on productivity: The transfer of on-the-job training from the perspective of economics. Educ. Res. Rev. 2013, 8, 28–36. [Google Scholar] [CrossRef]
  27. Gegenfurtner, A.; Festner, D.; Gallenberger, W.; Lehtinen, E.; Gruber, H. Predicting autonomous and controlled motivation to transfer training. Int. J. Train. Dev. 2009, 13, 124–138. [Google Scholar] [CrossRef]
  28. Gegenfurtner, A.; Veermans, K.; Vauras, M. Effects of computer support, collaboration, and time lag on performance self-efficacy and transfer of training: A longitudinal meta-analysis. Educ. Res. Rev. 2013, 8, 75–89. [Google Scholar] [CrossRef]
  29. Grover, V.K. Identification of best practices in transfer of training in teacher education as perceived by teacher trainees. IJMSS 2015, 3, 147–163. [Google Scholar]
  30. Olsen, J.H. The evaluation and enhancement of training transfer. Int. J. Train. Dev. 1998, 2, 61–75. [Google Scholar] [CrossRef]
  31. Gegenfurtner, A. Motivation and transfer in professional training: A meta-analysis of the moderating effects of knowledge type, instruction, and assessment conditions. Educ. Res. Rev. 2011, 6, 153–168. [Google Scholar] [CrossRef]
  32. Gegenfurtner, A. Dimensions of motivation to transfer: A longitudinal analysis of their influences on retention, transfer, and attitude change. Vocat. Learn. 2013, 6, 187–205. [Google Scholar] [CrossRef]
  33. Quesada-Pallarès, C. Training transfer evaluation in the Public Administration of Catalonia: The MEVIT factors model. Proc. Soc. Behav. Sci. 2012, 46, 1751–1755. [Google Scholar] [CrossRef] [Green Version]
  34. Quesada-Pallarès, C.; Ciraso-Calí, A.; Pineda-Herrero, P.; Janer-Hidalgo, Á. Training for innovation in Spain. Analysis of its effectiveness from the perspective of transfer of training. In Working and Learning in Times of Uncertainty; Bohlinger, S., Haake, U., Helms, C., Toiviainen, H., Wallo, A., Eds.; Sense Publishers: Rotterdam, The Netherlands, 2015; pp. 183–195. Available online: http://eprints.whiterose.ac.uk/89399/1/Book-WPL-2015_-_Training%20for%20innovation%20in%20Spain%20%5Bchapter%2014%5D.pdf (accessed on 4 March 2020).
  35. Feixas, M.; Durán, M.M.; Fernández, I.; Fernández, A.; García-San Pedro, M.J.; Márquez, M.D.; Pineda, P.; Quesada, C.; Sabaté, S.; Tomàs, M.; et al. ¿Cómo medir la transferencia de la formación en Educación Superior?: El Cuestionario de Factores de Transferencia (How to measure transfer of training in Higher Education: The questionnaire of transfer factors). Rev. Docencia Univ. 2013, 11, 219–248. Available online: http://red-u.net/redu/documentos/vol11_n3_completo.pdf (accessed on 4 March 2020). [CrossRef] [Green Version]
  36. García-Jiménez, E. La evaluación del aprendizaje: De la retroalimentación a la autorregulación. El papel de las tecnologías (Evaluation of learning: From feedback to self-regulation. The role of technologies). Rev. Electron. Investig. Eval. Educ. 2015, 21, M2. [Google Scholar] [CrossRef] [Green Version]
  37. Grohmann, A.; Beller, J.; Kauffeld, S. Exploring the critical role of motivation to transfer in the training transfer process. Int. J. Train. Dev. 2014, 18, 84–103. [Google Scholar] [CrossRef]
  38. Pineda-Herrero, P.; Ciraso-Calí, A.; Quesada-Pallarès, C. ¿Cómo saber si la formación genera resultados? El modelo FET de evaluación de la transferencia (How to know if training generates results? The FET model of transfer evaluation). Cap. Hum. 2014, 292, 74–80. Available online: http://factorhuma.org/attachments_secure/article/11261/c427_el_modelo_fet.pdf (accessed on 4 March 2020).
  39. Pineda-Herrero, P.; Quesada-Pallarès, C.; Ciraso-Calí, A. Evaluating training effectiveness: Results of the FET model in the public administration in Spain. In Proceedings of the 7th International Conference on Researching Work and Learning, Shanghai, China, 4–7 December 2011. [Google Scholar]
  40. Kirkpatrick, D.L.; Kirkpatrick, J.D. Evaluating Training Programs: The Four Levels, 3rd ed.; Berrett-Koehler Publishers: Oakland, CA, USA, 2006. [Google Scholar]
  41. Singleton, K.K. Reimagining the Community of Inquiry Model for a Workplace Learning Setting: A Program Evaluation. Ph.D. Thesis, University of South Florida, Tampa, FL, USA, 2019. Retrieved from ProQuest Dissertations & Theses: Full text, Order No. 13814361. Available online: https://scholarcommons.usf.edu/etd/7944 (accessed on 26 May 2020).
  42. Moore, A.L.; Klein, J.D. Facilitating Informal Learning at Work. TechTrends 2020, 64, 219–228. [Google Scholar] [CrossRef]
  43. Riley, J. The Relationship between Job Satisfaction and Overall Wellness in Counselor Educators. Ph.D. Thesis, Capella University, Minneapolis, MN, USA, 2017. Retrieved from ProQuest Dissertations & Theses: Full text, Order No. 10623122. Available online: https://search.proquest.com/docview/1973267475 (accessed on 26 May 2020).
  44. Elliott, K.M.; Shin, D. Student satisfaction: An alternative approach to assessing this important concept. J. High. Educ. Policy Manag. 2002, 24, 197–209. [Google Scholar] [CrossRef]
  45. Sun, P.C.; Tsai, R.J.; Finger, G.; Chen, Y.Y.; Yeh, D. What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Comput. Educ. 2008, 50, 1183–1202. [Google Scholar] [CrossRef]
  46. Prendes-Espinosa, M.P.; Castañeda-Quintero, L.; Solano-Fernández, I.M.; Roig-Vila, R.; Aguilar-Perera, M.V.; Serrano-Sánchez, J.L. Validation of a Questionnaire on Work and Learning Habits for Future Professionals: Exploring Personal Learning Environments. RELIEVE 2016, 22, 6. [Google Scholar] [CrossRef] [Green Version]
  47. Torres-Gordillo, J.J.; Herrero-Vázquez, E.A. PLE: Entorno personal de aprendizaje vs. entorno de aprendizaje personalizado (PLE: Personal Learning Environment vs. Customised Environment for Individualised Learning). REOP 2016, 27, 26–42. [Google Scholar] [CrossRef] [Green Version]
  48. Torres-Gordillo, J.J.; Cobos-Sanchiz, D. Evaluación de la satisfacción de los participantes en e-Learning. Un estudio sobre formación en prevención de riesgos y medio ambiente (Assessment of participants’ satisfaction with e-learning: A study on risk prevention and environment training). Cult. Educ. 2013, 25, 109–122. [Google Scholar] [CrossRef]
  49. Asare, S.; Ben-Kei, D. Factors influencing response rates in online student evaluation systems: A systematic review approach. J. Interact. Learn. Res. 2018, 29, 133–144. [Google Scholar]
  50. Rose, M. What are Some Key Attributes of Effective Online Teachers? J. Open Flex. Distance Learn. 2018, 22, 32–48. [Google Scholar]
  51. Balladares, J. Diseño pedagógico de la educación digital para la formación del profesorado (Instructional design of digital education for teacher training). Rev. Latinoamer. Tecnol. Educ. 2018, 17, 41–60. [Google Scholar]
  52. Boomsma, A. Reporting Analyses of Covariance Structures. Struct. Equ. Modeling 2000, 7, 461–483. [Google Scholar] [CrossRef]
  53. Bollen, K.A. Structural Equations with Latent Variables; Wiley: New York, NY, USA, 1989. [Google Scholar]
  54. Hu, L.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Modeling 1999, 6, 1–55. [Google Scholar] [CrossRef]
  55. Marsh, H.W.; Hau, K.T.; Wen, Z. In search of golden rules: Comment on hypothesis testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu & Bentler’s (1999) findings. Struct. Equ. Modeling 2004, 11, 320–341. [Google Scholar]
  56. Ntoumanis, N. A self-determination approach to the understanding of motivation in physical education. Brit. J. Educ. Psychol. 2001, 71, 225–242. [Google Scholar] [CrossRef]
  57. George, D.; Mallery, P. SPSS for Windows Step by Step: A Simple Guide and Reference; 11.0 update; Allyn & Bacon: Boston, MA, USA, 2003. [Google Scholar]
  58. Hutchins, H.M.; Burke, L.A.; Berthelsen, A.M. A missing link in the transfer problem? Examining how trainers learn about training transfer. Hum. Resour. Manag. 2010, 49, 599–618. [Google Scholar] [CrossRef]
Figure 1. Factor model for the online course evaluation questionnaire.
Table 1. Item distribution.

Factors | Items
Pedagogical design | 1, 2, 3, 4, 5, 6, 7, 9
Tutor performance | 13, 14, 15, 16
Virtual environment design | 17, 18, 19, 20
Timing | 10, 11, 12
Transfer of learning | 8, 21, 22, 23, 24
Table 2. Descriptive statistics for items.

Item | Mean | Standard Deviation | Asym. | Kurtosis
1. The objectives developed met my needs. | 3.27 | 0.61 | −0.789 | 2.212
2. The objectives of the course were fully met. | 3.25 | 0.60 | −0.637 | 1.839
3. The content of the course corresponded to the proposed objectives. | 3.26 | 0.57 | −0.470 | 1.789
4. There was clear coordination between the content sections of the course. | 3.27 | 0.57 | −0.269 | 0.656
5. The content was presented clearly. | 3.23 | 0.64 | −0.578 | 0.916
6. The method used in the course was adequate for acquiring the desired competences. | 3.20 | 0.63 | −0.541 | 1.046
7. The method used involved the application of the acquired knowledge. | 3.15 | 0.61 | −0.491 | 1.441
8. The method that was developed involved the resolution of real problems in my professional practice. | 3.17 | 0.61 | −0.434 | 0.696
9. The course materials were useful and provided adequate information. | 3.15 | 0.60 | −0.558 | 1.666
10. The duration of the course was adequate. | 3.12 | 0.68 | −0.394 | 0.769
11. The time scheduled in the course for working on course content was adequate. | 3.16 | 0.57 | −0.396 | 0.874
12. The time scheduled in the course to execute the activities/tasks was adequate. | 3.14 | 0.57 | −0.356 | 0.800
13. The course tutor used the results of the assessments to guide the learning of the course participants. | 3.13 | 0.57 | −0.591 | 0.209
14. The tutor guided me according to my specific needs. | 3.35 | 0.67 | −0.718 | 0.779
15. The tutor resolved existing doubts, using all available resources. | 3.34 | 0.70 | −0.955 | 1.350
16. Communication with the tutor was fluid. | 3.06 | 0.59 | −0.971 | 1.055
17. The structure of the platform was clear, logical, and well organized. | 2.93 | 0.65 | −0.618 | 0.463
18. I could easily access the different sites of the course. | 2.85 | 0.61 | −0.674 | 0.309
19. The communication channels available to course participants were adequate. | 3.01 | 0.54 | −0.605 | 0.815
20. The resources offered by the platform were useful and sufficient to manage my self-learning. | 3.11 | 0.58 | −0.401 | 0.312
21. The lessons learned in the course seemed easy to apply to my daily practice. | 3.29 | 0.64 | −0.344 | 0.566
22. The course presented us with potential applications of the learning to our professional field. | 3.35 | 0.64 | −0.316 | 1.651
23. I am applying or intend to apply partially/totally the content acquired in the course. | 3.32 | 0.61 | −0.208 | 0.711
24. The course contributed to my professional development. | 3.29 | 0.61 | −0.328 | 0.029
Table 3. Goodness-of-fit indices for factor models.

Model | χ2/d.f. | CFI | GFI | RMR | RMSEA (CI 90%)
Single factor | 10.826 | 0.693 | 0.621 | 0.035 | 0.145 (0.140–0.150)
Five factors | 3.130 | 0.936 | 0.873 | 0.019 | 0.067 (0.062–0.073)
Five factors with EC | 2.272 | 0.962 | 0.911 | 0.018 | 0.052 (0.046–0.058)
Note: CFI = comparative fit index; GFI = goodness-of-fit index; RMR = root mean square residual; RMSEA = root mean square error of approximation.
Table 4. Factor loadings for the five-factor model, including correlations between error terms.

Items | Loading | R2
Pedagogical design
1. The objectives developed met my needs. | 0.754 | 0.569
2. The objectives of the course have been fully met. | 0.823 | 0.677
3. The content of the course corresponded to the proposed objectives. | 0.825 | 0.681
4. There was clear coordination between the content sections of the course. | 0.843 | 0.711
5. The contents were presented clearly. | 0.753 | 0.567
6. The method used in the course was adequate for acquiring the desired competences. | 0.770 | 0.593
7. The method used involved the application of the acquired knowledge. | 0.780 | 0.608
9. The course materials were useful and provided adequate information. | 0.728 | 0.530
Tutor performance
13. The course tutor used the results of the assessments to guide the learning of the course participants. | 0.762 | 0.581
14. The tutor guided me according to my specific needs. | 0.888 | 0.789
15. The tutor resolved any existing doubts, using all available resources. | 0.919 | 0.845
16. Communication with the tutor was fluid. | 0.849 | 0.721
Virtual environment design
17. The structure of the platform was clear, logical, and well organized. | 0.767 | 0.588
18. I could easily access the different course sites. | 0.752 | 0.566
19. The communication channels available to course participants were adequate. | 0.858 | 0.736
20. The resources offered by the platform were useful and sufficient to manage my self-learning. | 0.879 | 0.773
Timing
10. The duration of the course was adequate. | 0.739 | 0.546
11. The time scheduled in the course to work on course content was adequate. | 0.958 | 0.918
12. The time scheduled in the course for the execution of activities/tasks was adequate. | 0.894 | 0.799
Transfer of learning
8. The method developed involved the resolution of real problems in my professional practice. | 0.715 | 0.511
21. The lessons learned in the course seemed easy to apply to my daily practice. | 0.592 | 0.350
22. The course presented us with potential applications of the learning to our professional field. | 0.700 | 0.490
23. I am applying or intend to apply partially/totally the content acquired in the course. | 0.630 | 0.397
24. The course contributed to my professional development. | 0.618 | 0.382
Note: R2 = proportion of variance explained.
Table 5. Matrix of correlations between factors.

Factor | Pedagogical Design | Tutor Performance | Virtual Environment Design | Timing
Tutor performance | 0.684
Virtual environment design | 0.728 | 0.733
Timing | 0.616 | 0.483 | 0.512
Transfer of learning | 0.727 | 0.562 | 0.606 | 0.481
Table 6. Cronbach’s alpha values for subscales.

Subscales | Number of Items | Cronbach’s Alpha
Pedagogical design | 8 | 0.93
Tutor performance | 4 | 0.91
Virtual environment design | 4 | 0.90
Timing | 3 | 0.89
Transfer of learning | 5 | 0.78
