Medicina

26 May 2017

Validation of the EFFECT questionnaire for competence-based clinical teaching in residency training in Lithuania

Affiliations:
1. Department of Preventive Medicine, Faculty of Public Health, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
2. Department of Neurology, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
3. Department of Surgery, Medical Academy, Lithuanian University of Health Sciences, Kaunas, Lithuania
4. Radboudumc Health Academy, Nijmegen, The Netherlands

Abstract

Background and aim: In 2013, all residency programs at the Lithuanian University of Health Sciences were renewed into a competency-based medical education curriculum. To assess the quality of clinical teaching in residency training, we chose the EFFECT (evaluation and feedback for effective clinical teaching) questionnaire, designed and validated at the Radboud University Medical Centre in the Netherlands. The aim of this study was to validate the EFFECT questionnaire for quality assessment of clinical teaching in residency training. Materials and methods: The research was conducted as an online survey using the questionnaire containing 58 items in 7 domains. The questionnaire was double-translated into Lithuanian. It was sent to 182 residents of 7 residency programs (anesthesiology and reanimatology, cardiology, dermatovenerology, emergency medicine, neurology, obstetrics and gynecology, and physical medicine and rehabilitation). Overall, 333 questionnaires about 146 clinical teachers were filled in. To determine the item characteristics and internal consistency (Cronbach’s α), item and reliability analyses were performed. Furthermore, confirmatory factor analysis (CFA) was performed using maximum-likelihood estimation. Results: Cronbach’s α within the different domains ranged between 0.91 and 0.97 and was comparable with that of the original version of the questionnaire. Confirmatory factor analysis demonstrated a satisfactory model fit, with a comparative fit index (CFI) of 0.841 and an absolute model fit (RMSEA) of 0.098. Conclusions: The results suggest that the Lithuanian version of EFFECT maintains its original validity and may serve as a valid instrument for quality assessment of clinical teaching in competency-based residency training in Lithuania.

1. Introduction

For the delivery of high-quality patient care, high-quality clinical teaching of residents is essential [1,2]. Clinical teaching takes place in real-life situations, through health care services provided to patients under the close supervision of a resident’s teacher. The quality of this process is crucially important for training young physicians who are able to provide evidence-based health care services and who acquire the necessary clinical skills, knowledge, and competencies [3,4,5].
Following the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG 2015), universities have to review their programs on a regular basis to ensure that they comply with international aims and meet learners’ and societal needs, especially with regard to quality assurance [6]. After the Lithuanian University of Health Sciences renewed its residency programs according to methodologies based on intended learning outcomes and competencies, i.e., competency-based medical education (CBME), an urgent need emerged to implement a quality evaluation system for clinical teaching based on scientific evidence [7]. There are many instruments available for the evaluation of clinical teaching [5,8,9]. One of these is the EFFECT (evaluation and feedback for effective clinical teaching) questionnaire, a theory-based, reliable, and valid instrument designed and validated by Fluit et al. from the Radboudumc Health Academy in the Netherlands [9].
This study aimed at validating the EFFECT questionnaire for quality assessment of clinical teaching in residency training.

2. Materials and Methods

The EFFECT questionnaire is based on theories of workplace learning and clinical teaching, and incorporates the Canadian Medical Education Directives for Specialists (CanMEDS) competencies [9]. The authors validated the questionnaire following the five sources of validity described by Downing [10,11]. Although EFFECT relies on an international literature study and on theory that is internationally recognized as highly relevant to medical education, the authors caution that extrapolating their findings to other countries with different residency training programs and different feedback cultures is one of its possible limitations [12]. The aim of our study was therefore to assess the validity of the Lithuanian version of EFFECT.
The EFFECT questionnaire consists of 58 items in 7 domains of clinical teaching: role modeling, task allocation, planning, providing feedback, teaching methodology, assessment, and personal support. The role modeling domain contains 4 subdomains: clinical skills, scholarship, general competencies, and professionalism. The items are scored on a 6-point Likert scale (1, very poor; 2, poor; 3, intermediate; 4, satisfactory; 5, good; 6, excellent), with an additional option, “not (yet) able to evaluate”, chosen if a specific item did not (yet) occur during clinical teaching. Having obtained the authors’ agreement to use the questionnaire, we had it double-translated from Dutch into Lithuanian by 2 professional translators. In addition to the original items, information on gender, residency program, and the year of training was collected.
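To make the scoring concrete, here is a minimal sketch in Python of how such responses might be pre-processed. It assumes a hypothetical raw export in which the “not (yet) able to evaluate” option is coded as 0; the paper does not describe the export format, and the item column names are made up:

```python
import numpy as np
import pandas as pd

# Hypothetical raw export: items scored 1-6 on the Likert scale,
# with "not (yet) able to evaluate" exported as 0 (assumed coding).
raw = pd.DataFrame({
    "item09": [5, 6, 0, 4],
    "item12": [0, 0, 3, 5],
})

scored = raw.replace(0, np.nan)        # treat "not able to evaluate" as missing
item_means = scored.mean()             # item means on the 1-6 scale
item_sds = scored.std(ddof=1)          # item standard deviations
pct_not_able = raw.eq(0).mean() * 100  # % of "not (yet) able to evaluate" answers
```

Separating the two codings lets item means and standard deviations be computed on the 1–6 scale only, while the share of “not (yet) able to evaluate” answers is tracked per item, as reported in Section 3.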
To determine item characteristics, item means and standard deviations were calculated. For the assessment of internal consistency and reliability, Cronbach’s alpha was calculated. Finally, structural equation modeling was applied to determine the amount of interdependency between items and constructs, using the existing factorial solution as the model for maximum-likelihood estimation. In addition, common incremental measures of scale fit in structural equation modeling, the comparative fit index (CFI) and the root mean square error of approximation (RMSEA), were calculated [13,14]. Correlations between the dimensions were determined by the correlation coefficients from the estimated covariance matrix; coefficients with a magnitude of 0.7–1.0 indicated interdependency of the factors. All calculations were run with SPSS 20 and AMOS 20.
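As an illustration of the reliability analysis, here is a minimal sketch of Cronbach’s alpha in Python; the study itself used SPSS 20, and the subscale and column names below are hypothetical:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha over complete cases of one (sub)domain:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(sum score))."""
    items = items.dropna()
    k = items.shape[1]                         # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_var_sum / total_var)

# e.g., reliability of a hypothetical "planning" subscale,
# using the `scored` DataFrame from the previous sketch:
# alpha = cronbach_alpha(scored[["item13", "item14", "item15"]])
```

Values of alpha above 0.9, as reported in Section 3, indicate high internal consistency of a subscale.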
The study was approved by the Bioethics Centre of the Lithuanian University of Health Sciences and was performed as an online survey.

3. Results

The survey data were collected during 2015–2016. A total of 182 residents (48 men and 134 women) were asked to fill in the EFFECT questionnaire about the teachers who were their supervisors within a residency program. The residents could decide how many teachers they wanted to evaluate; they did not necessarily fill in the questionnaire for every teacher they worked with. We received a total of 333 questionnaires: 67.9% (n = 226) were completed by women and 32.1% (n = 107) by men. A description of the study population and the number of questionnaires filled in per residency program are presented in Table 1.
Table 1. Description of the study population and number of questionnaires filled in per residency program.
The largest proportion (36.9%) of the questionnaires was filled in by first-year residents, followed by third-year (25.2%), second-year (24.3%), and fourth-year (13.5%) residents.
The results of the item characteristics are provided in Table 2. The items were rated on a 6-point Likert scale. The mean scores ranged from 4.58 (item 29, “reminds me of previously given feedback”, and item 50, “helps and advises me on how to maintain a good work-home balance”) up to 5.40 (item 9, “applies guidelines and protocols”). More than 20% of the answers to item 12 (“have a bad news conversation”), item 40 (“reviews my reports”), and item 50 (“helps and advises me on how to maintain a good work-home balance”) were scored as “not (yet) able to evaluate”, while this proportion was over 70% for all items of the assessment domain (items 51–58). Factor loadings varied from 0.788 (item 30) to 0.957 (item 74).
Table 2. Item characteristics.
The Cronbach’s alpha coefficients ranged from 0.91 to 0.97 indicating a high internal consistency of all subdomains (Table 3).
Table 3. Cronbach’s alpha of the domains.
The “role modeling scholarship” subdomain was not included in the confirmatory factor analysis because it has only one item, “apply academic research results.” The items of the “assessment” domain were not included due to the high proportion of “not (yet) able to evaluate” answers. Therefore, only 9 subdomains of the questionnaire were used in the analysis.
The examination of the factorial structure of the questionnaire using confirmatory factor analysis resulted in a satisfactory model fit. The comparative fit index (CFI) of 0.841 reached the range of a permissible model fit. The absolute model fit, an RMSEA of 0.098, indicated a moderate match between the postulated factorial structure and the empirical data. The correlations between the factors varied from 0.699 to 0.916 (Table 4).
Table 4. Correlations between factors of the questionnaire.
Confirmatory factor analysis demonstrated that the 9 subscales of the Lithuanian version of EFFECT corresponded to 9 different factors, which correlated strongly with one another.
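For readers who wish to reproduce this kind of analysis outside AMOS, the sketch below runs a confirmatory factor analysis with the open-source Python package semopy. The two-factor model fragment, data file name, and item names are all hypothetical; a full replication would list every retained subdomain with its items:

```python
import pandas as pd
import semopy

# Hypothetical two-factor fragment of the EFFECT structure
# (lavaan-style syntax; factor covariances are estimated by default).
desc = """
planning =~ item13 + item14 + item15
feedback =~ item25 + item26 + item27
"""

df = pd.read_csv("effect_responses.csv")  # hypothetical data file
model = semopy.Model(desc)
model.fit(df)                             # maximum-likelihood estimation by default
stats = semopy.calc_stats(model)          # fit indices: CFI, RMSEA, chi-square, ...
print(stats[["CFI", "RMSEA"]])
```

Against commonly cited cut-offs (CFI close to 0.95 and RMSEA below 0.06 for a good fit [14]), the reported CFI of 0.841 and RMSEA of 0.098 correspond to the permissible and moderate fit described above.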

4. Discussion

The results of this study indicate that the Lithuanian version of the EFFECT questionnaire has acceptable psychometric properties and can be used for the evaluation of clinical teaching within residency training.
All the domains demonstrated a satisfactory reliability coefficient. However, residents indicated that some of the items could not be judged. This especially holds true for the “assessment” domain and some items of the role modeling domain (items 11 and 12). As portfolio assessment has not yet been implemented, residents could not rate the assessment items.
As for “role modeling” items 11 and 12, it is possible that residents do not have many opportunities to observe their supervisors, for instance, when they bring bad news to their patients or handle complaints and incidents. The same results were found in the original studies conducted in the Netherlands; this could mean that residents simply perform these complex tasks without having good examples in mind [9].
Compared with the original research [9], there are some differences in our survey that could have influenced the results.
First, the present replication study received fewer questionnaires (333 vs. 407). Nevertheless, the sample size exceeded the lower bound of 5 respondents per item (58 items × 5 = 290, below the 333 questionnaires received) and, therefore, was regarded as large enough for a confirmatory factor analysis [15,16].
The original sample recruited its participants based on the department where they worked at the moment of the survey. In total, 6 departments in 4 different hospitals, representing 3 specialties (pediatrics, pulmonary diseases, and surgery), were involved. This approach allowed the residents to evaluate all teachers of a specific department, thereby minimizing memory bias. The residents of our study were recruited based on the residency program, i.e., anesthesiology and reanimatology, cardiology, dermatovenerology, emergency medicine, etc. They possibly had to evaluate clinical teachers from different clinical departments whom they had met at different times; this could have resulted in memory bias.
Another difference was the smaller number of domains included in the confirmatory factor analysis. As mentioned in Section 3, the “role modeling scholarship” subdomain was not included in the confirmatory factor analysis because it has only 1 item. However, we decided to keep it within the questionnaire, as the application of research results in training is one of the most important requirements [17].
A limitation of our study is the exclusion of the “assessment” domain from the confirmatory factor analysis (due to the high proportion of “not (yet) able to evaluate” answers, ranging from 71.9% to 75.7% for the different items), which did not allow us to validate the full factorial structure of the EFFECT questionnaire. However, we need to keep this domain within the questionnaire, as assessment is the weakest part of current clinical teaching and has to be improved. It should be taken into consideration that CBME requires multifaceted assessment that embraces formative and summative approaches [18] through processes that are continuous and frequent [19]. It should also be noted that systematic training of clinical teachers on how to supervise residents in a competency-based curriculum has started at the Lithuanian University of Health Sciences only recently. Therefore, in order to assess the quality of clinical teaching using EFFECT, we will need to revalidate the questionnaire in its full structure once the “assessment” domain becomes daily practice in residency studies.

5. Conclusions

The results of this study indicate that the Lithuanian version of EFFECT has acceptable psychometric properties for evaluation of clinical teachers. Further research should be undertaken to examine the full factorial structure of EFFECT.

Conflicts of interest

The authors declare no conflict of interest.

References

  1. Leach, DC. Changing education to improve patient care. Qual Saf Health Care 2001, 10(Suppl. 2), ii54–8.
  2. Engbers, R; de Caluwé, LIA; Stuyt, PMJ; Fluit, CRMG; Bolhuis, S. Towards organizational development for sustainable high-quality medical teaching. Perspect Med Educ 2013, 2(1), 28–40.
  3. Da Dalt, L; Callegaro, S; Mazzi, A; Scipioni, A; Lago, P; Chiozza, ML; et al. A model of quality assurance and quality improvement for post-graduate medical education in Europe. Med Teach 2010, 32(2), e57–64.
  4. Beckman, TJ; Cook, DA; Mandrekar, JN. What is the validity evidence for assessments of clinical teaching? J Gen Intern Med 2005, 20(12), 1159–64.
  5. Fluit, CRMG; Bolhuis, S; Grol, R; Laan, R; Wensing, M. Assessing the quality of clinical teachers. J Gen Intern Med 2010, 25(12), 1337–45.
  6. Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG 2015). Available from: http://www.enqa.eu/wp-content/uploads/2015/11/ESG_2015.pdf [cited 30.06.16].
  7. Scheele, F; Teunissen, P; Van Luijk, S; Heineman, E; Fluit, L; Mulder, H; et al. Introducing competency-based postgraduate medical education in the Netherlands. Med Teach 2008, 30(3), 248–53.
  8. Beckman, TJ. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ 2006. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2929.2006.02632.x/full [cited 06.10.16].
  9. Fluit, C; Bolhuis, S; Grol, R; Ham, M; Feskens, R; Laan, R; et al. Evaluation and feedback for effective clinical teaching in postgraduate medical education: validation of an assessment instrument incorporating the CanMEDS roles. Med Teach 2012, 34(11), 893–901.
  10. Downing, SM. Validity: on the meaningful interpretation of assessment data. Med Educ 2003, 37(9), 830–7.
  11. Downing, SM. Reliability: on the reproducibility of assessment data. Med Educ 2004, 38(9), 1006–12.
  12. Fluit, CRMG; Feskens, R; Bolhuis, S; Grol, R; Wensing, M; Laan, R. Understanding resident ratings of teaching in the workplace: a multi-centre study. Adv Health Sci Educ 2014, 20(3), 691–707.
  13. Tyrimo ir įvertinimo priemonių patikimumo ir validumo nustatymas [Determining the reliability and validity of research and assessment instruments; in Lithuanian]. Available from: http://www.vu.lt/site_files/LD/Tyrimo_ir_%C4%AFvertinimo_priemoni%C5%B3_patikimumo_ir_validumo_nustatymas.pdf [cited 20.08.16].
  14. Brown, TA. Confirmatory Factor Analysis for Applied Research. Google Books. Available from: https://books.google.lt/books?hl=lt&lrid=tTL2BQAAQBAJ&oi=fnd&pg=PP1&dq=Evaluating+the+use+of+confirmatory+factor+analysis&ots=ajVssP_Q5C&sig=ib_e56X4At8-fXNqqf8QOjSP3G4&redir_esc=y#v=onepage&q=RMSEA&f=false [cited 20.08.16].
  15. Terwee, CB; Bot, SDM; de Boer, MR; van der Windt, DAWM; Knol, DL; Dekker, J; et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol 2007, 60(1), 34–42.
  16. Iblher, P; Zupanic, M; Ostermann, T. The questionnaire “D-RECT” German: adaptation and test-theoretical properties of an instrument for evaluation of the learning climate in medical specialist training. GMS Z Med Ausbild 2015, 32(5), Doc55. http://dx.doi.org/10.3205/zma000997.
  17. Bourgeois, JA; Hategan, A; Azzam, A. Competency-based medical education and scholarship: creating an active academic culture during residency. Perspect Med Educ 2015, 4(5), 254–8.
  18. Hawkins, RE; Welcher, CM; Holmboe, ES; Kirk, LM; Norcini, JJ; Simons, KB; et al. Implementation of competency-based medical education: are we addressing the concerns and challenges? Med Educ 2015, 49(11), 1086–102.
  19. Holmboe, ES; Sherbino, J; Long, DM; Swing, SR; Frank, JR; International CBME Collaborators. The role of assessment in competency-based medical education. Med Teach 2010, 32(8), 676–82.
