1. Introduction
The importance of integrating research into pre-service teacher education is increasingly recognised, as there is growing evidence that research is a crucial factor in the teaching profession. This growing recognition has been highlighted by many authors (Matjašič & Vogrinc, 2025; Saqipi & Vogrinc, 2016; Štemberger, 2020; Molina-Torres, 2022), who have categorised the benefits of pre-service teachers’ research work as follows: (1) research enables pre-service teachers to critically and reflectively examine their own practice; (2) through research, pre-service teachers advance their professional development; (3) when pre-service teachers know how to conduct and use research, they can apply the results of educational research to improve their own practice. These benefits were also confirmed by Van Katwijk et al. (2023) and the authors of the BERA-RSA (2014) report, who found that research has a positive impact on learning outcomes and showed that pre-service teachers who actively engage in research are better able to apply evidence-based practices, critically evaluate their teaching methods and make informed decisions that benefit student learning. In addition, Visser-Wijnveen et al. (2012) found that linking research and teaching can enhance students’ awareness of authentic research processes and help them to adopt a critical academic disposition.
In educational research, a distinction can be made between perceived and actual research competence. Perceived research competence refers to self-assessed confidence and ability in conducting research, which is usually measured using self-report instruments. In contrast, actual research competence comprises objectively assessed skills and knowledge, which are often measured using knowledge tests (Matjašič & Vogrinc, 2024). Moreover, recent studies show that the two are only weakly correlated and that students tend to overestimate their abilities (Bauer et al., 2024; Mamolo & Sugano, 2020). Despite this discrepancy, most empirical studies are cross-sectional and rely almost exclusively on self-reporting (Matos et al., 2023), which means that little is known in the literature about how the perceived and actual research competence of pre-service teachers develop during the master’s programme.
Given this context, this study aims to fill the gap in the literature regarding the development of perceived and actual research competence in pre-service teachers over time. In particular, the study investigates how these competencies develop during the first semester of a master’s programme. To guide this investigation, the following research questions were formulated:
RQ1: How do pre-service teachers rate their perceived research competence at the beginning of the master’s programme and at the end of the first semester?
RQ2: How does the actual research competence of pre-service teachers develop over the course of the master’s programme, especially from the beginning of the master’s programme to the end of the first semester?
RQ3: Are there significant differences between the perceived and actual research competence of pre-service teachers at the beginning of the master’s programme and at the end of the first semester?
3. Materials and Methods
3.1. Context of the Study
All participants in this study had previously completed an introductory research-based course during their undergraduate studies, usually taken in the second year of a four-year undergraduate degree programme. At the undergraduate level, they were introduced to the basics of educational research, including the role of theory, the formulation of basic research questions, and the preliminary design of simple empirical studies. They also learned basic statistical concepts relevant to education, such as data organisation, descriptive statistics (e.g., mean, median, and variance), correlation analysis (e.g., Pearson correlation) and basic non-parametric tests (e.g., Chi-square test for independence). In addition to these basics, students familiarised themselves with common methods of data collection (e.g., surveys, interviews, observations) and ethical considerations (e.g., participant consent, confidentiality). Although this gave them a broad overview of research methodology at the undergraduate level, the emphasis was on basic analytical techniques rather than in-depth application.
Building on this background, the compulsory research-based course offered during the master’s programme was designed to significantly enhance this knowledge and these skills by enabling students to apply them in concrete contexts. Students went beyond the introductory undergraduate framework and explored more advanced research questions and statistical techniques. They formulated focused research problems and hypotheses, created questionnaires tailored to their own research interests, and learnt about survey methods, gaining hands-on experience with online survey tools and sampling methods suitable for different educational settings. They also learnt to apply both non-parametric and parametric statistical tests (e.g., t-tests, one-way ANOVA) using the Statistical Package for the Social Sciences (SPSS) and to critically interpret the results of these analyses. In addition, they dealt with different types of educational research such as action research and evaluation research and deepened their knowledge of the quality criteria for qualitative and quantitative research, as well as the ethical principles that apply to these different methodological approaches. By the end of the semester, they had not only practised identifying relevant theoretical frameworks, planning empirical research designs and conducting data analyses, but also applied this knowledge and these skills to concrete examples, making their methodological and statistical understanding more practice-oriented.
3.2. Participants
The participants of this study were master’s students enrolled in pre-service teacher education at the Faculty of Education, University of Ljubljana, in the academic year 2023/2024.
A total of 110 students participated in the initial data collection by completing the questionnaire on perceived research competence at the beginning of the semester. However, when the answers from the questionnaire before and after the first semester were combined, only 73 students provided valid identification codes that allowed for comparison across both time points. When the results of the knowledge tests before and after the first semester were combined, 64 students gave a valid identification code. Finally, comparing the students’ perceived research competence with their actual research competence reduced the sample size even further. A total of 58 students provided valid identification codes in both the questionnaires and the knowledge tests, so a complete analysis of perceived and actual research competence was possible. The discrepancies in sample size were mainly due to two factors: first, some participants chose not to participate in the entire research process, and second, there were inconsistencies in the identification codes, as several participants provided incomplete or incorrect codes, so their responses could not be linked across time points.
In order to maintain confidentiality and still match students’ responses across the different stages of the study, students created a unique identification code based on their parents’ personal details and home address. This anonymised coding procedure was used consistently for both the surveys and the knowledge tests.
3.3. Operationalising Perceived Research Competence
Perceived research competence was assessed using the validated questionnaire developed by Matjašič and Vogrinc (2025), which measures three key dimensions of research competence: research knowledge, research skills and research attitudes. Each dimension was measured using a series of items on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The questionnaire was conducted as a web-based survey at two points in time: once at the beginning of the master’s programme and again at the end of the first semester. Both surveys were conducted during the lecture period to ensure the use of a consistent framework and reduce the likelihood of external influences. Students were also informed of the aims of the study, that their participation was voluntary and that their anonymity was guaranteed.
Given the complexity of research competence as a multifaceted construct, an exploratory factor analysis (EFA) using Varimax rotation and the minimum residual method (Minres) was used to analyse the latent structure of the questionnaire. Due to the ordinal nature of the Likert scale items used in the questionnaire, we also used polychoric correlations to estimate the relationships between the variables (Holgado-Tello et al., 2010).
Before conducting the EFA, we assessed the suitability of our data using the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity for each dimension. The KMO measure for research knowledge was 0.71, above the recommended threshold of 0.60 (Field, 2013), indicating good sampling adequacy. In addition, Bartlett’s test of sphericity was significant (p < 0.001), confirming that the items were sufficiently correlated for factor analysis. For research skills, the KMO value was 0.76 and Bartlett’s test of sphericity showed a significant result (p < 0.001). The same applies to research attitudes (KMO = 0.76, Bartlett’s test of sphericity p < 0.001), which confirms the applicability of the EFA to our data.
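The suitability checks described above are straightforward to compute from a correlation matrix. The analyses in this study were run in R; the following is only an illustrative Python sketch with an invented toy correlation matrix and sample size, showing how Bartlett's chi-square statistic and the KMO measure are derived.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: H0 = the correlation matrix is an identity matrix."""
    p_items = R.shape[0]
    stat = -(n - 1 - (2 * p_items + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p_items * (p_items - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(R):
    """Kaiser-Meyer-Olkin measure: squared correlations relative to
    squared correlations plus squared partial (anti-image) correlations."""
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                       # partial correlations
    np.fill_diagonal(partial, 0)
    off = R.copy()
    np.fill_diagonal(off, 0)
    return (off**2).sum() / ((off**2).sum() + (partial**2).sum())

# toy correlation matrix for four moderately correlated items (invented)
R = np.array([[1.0, 0.5, 0.4, 0.3],
              [0.5, 1.0, 0.5, 0.4],
              [0.4, 0.5, 1.0, 0.5],
              [0.3, 0.4, 0.5, 1.0]])
stat, p = bartlett_sphericity(R, n=110)
print(f"Bartlett chi2 = {stat:.1f}, p = {p:.4f}, KMO = {kmo(R):.2f}")
```

With correlated items like these, Bartlett's test rejects sphericity and the KMO value lands above the 0.60 threshold cited in the text.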
In the next step, the EFA was used to determine whether the questionnaire items could be grouped into different factors corresponding to the theoretical dimensions of research competence, namely research knowledge, skills and attitudes. To determine the appropriate number of factors to be retained, we first conducted several analyses, such as the Scree test, parallel analysis, eigenvalue inspection and the Velicer minimum average partial test. Based on these analyses, we decided to retain three factors in each dimension for further analysis. In addition, for each factor in each dimension, only items with standardised factor loadings of 0.4 or higher were retained, which is in line with established guidelines (Field, 2013; Stevens, 2002). Finally, the reliability of each factor was assessed using Cronbach’s α, resulting in satisfactory internal consistency (Cronbach’s α > 0.6) (see Appendix A).
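The reliability check is a simple computation on the item-score matrix of each factor. As a minimal sketch (Python with invented toy Likert responses; the actual analyses were run in R), Cronbach's α can be obtained from the item variances and the variance of the total score:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# toy responses (rows = respondents, columns = items of one factor)
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 3, 2],
                   [4, 4, 5]])
alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```

For these strongly covarying toy items the value comfortably exceeds the 0.6 threshold used in the study.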
While a factor analysis was conducted to address RQ1, for the purposes of addressing RQ3, we created a composite variable by averaging the scores of all items of the questionnaire, which represents the overall perceived research competence. Having a single variable to measure perceived research competence allowed us to compare it with actual research competence. A similar strategy was used by Magnaye (2022); perceived research competence was measured as a composite variable for the purpose of regression analysis. From a practical perspective, comparing perceived and actual research competence using a single composite variable reduces the complexity of the analysis. Field (2013) explained that working with multiple dimensions in comparison can lead to statistical complications, such as inflated error rates or multicollinearity.
3.4. Operationalising Actual Research Competence
One method with which to measure actual research competence more objectively is the use of knowledge tests (Bauer et al., 2024; Böttcher-Oschmann et al., 2021). Therefore, we developed a knowledge test that was administered at the beginning of the master’s programme (to establish a baseline of pre-service teachers’ actual research competence) and again at the end of the first semester to measure changes over time. In designing this test, we explicitly included items that reflected the content that had been introduced in the bachelor’s programme (e.g., basic statistical techniques and basic methodological concepts) and further developed in the master’s programme during the research-based course. The aim was to measure not only whether students remember basic concepts, but also whether they can apply them in specific educational contexts.
The knowledge test was designed to measure actual research competence through a combination of multiple-choice questions, short-answer questions and research-related tasks, focusing primarily on the dimension of “establishing research”. The test included the following components:
A total of 14 true/false statements about statistics and research methodology, which measured participants’ understanding of research principles such as reliability, sampling, statistical knowledge and ethical considerations.
Three brief research scenarios, for each of which participants had to select the most appropriate research method from a range of given options.
A task requiring participants to select the most appropriate research design for given research objectives, distinguishing between quantitative and qualitative approaches and specific types of research such as case studies or action research.
A prompt to formulate a research question for a given topic.
A prompt to formulate a hypothesis based on the formulated research question.
A task requiring participants to select the most appropriate research method and data collection technique to investigate the formulated research question and hypothesis.
Before administering the knowledge test, we validated it. We sent the test to three experts in the field of research methodology and research competence, who assessed its content and face validity. This step was taken to ensure reliability and accuracy in the assessment of actual research competence. A pilot study was then conducted with a small group of respondents to identify and clarify unclear or ambiguous questions, adjust the difficulty of the items and refine the test structure. Finally, we conducted an item analysis to assess the difficulty of the questions and their discriminatory power (D > 0.30, good discriminatory power) to ensure that the test could distinguish between different levels of research competence.
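The item-analysis step can be sketched briefly. The paper does not state how D was computed; a common convention, assumed here, is the difference in proportion correct between the top and bottom 27% of scorers. The sketch below uses simulated 0/1 responses in Python (the study's own analyses were run in R):

```python
import numpy as np

def item_analysis(responses, frac=0.27):
    """Per-item difficulty (proportion correct) and discrimination index D
    (upper-group minus lower-group proportion correct, groups by total score)."""
    responses = np.asarray(responses)            # (n_students, n_items), 0/1
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    g = max(1, int(round(frac * len(totals))))
    lower, upper = responses[order[:g]], responses[order[-g:]]
    difficulty = responses.mean(axis=0)
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

# simulate 200 students whose success probability rises with a latent ability
rng = np.random.default_rng(0)
ability = rng.normal(size=200)
items = (rng.random((200, 5)) < 1 / (1 + np.exp(-(ability[:, None] - 0.2)))).astype(int)
difficulty, D = item_analysis(items)
print("difficulty:", difficulty.round(2))
print("discrimination:", D.round(2))
```

Items that track the latent ability show clearly positive D values, which is the property the D > 0.30 criterion screens for.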
The test was scored quantitatively (maximum 22 points) so that we could calculate the total score for both time points. To answer RQ2, a composite variable representing actual research competence was created by summing the scores across all test items. Similar methods for assessing actual competence, in which composite variables were constructed from individual test components, have already been used in educational research (Baartman & Ruijs, 2011).
Finally, as the knowledge test was administered at two different time points, we assessed its test–retest reliability by analysing the correlation between the composite variables, which confirmed the stability of the test over time.
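Test–retest reliability here amounts to correlating the two composite scores for the matched students. A minimal illustrative sketch (invented scores, Python rather than the R tooling used in the study):

```python
import numpy as np
from scipy.stats import pearsonr

# invented knowledge-test composites (out of 22) for the same eight students
time1 = np.array([12, 14, 10, 15, 13, 9, 16, 11])
time2 = np.array([14, 17, 11, 17, 15, 10, 18, 14])
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")
```

A high positive correlation, as in this toy example, is what supports the claim of stability over time.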
3.5. Data Analysis
All analyses were performed in R using the packages lavaan, psych, EFA.dimensions and Rnest.
Before addressing RQ1, we tested the data for normality using the Kolmogorov–Smirnov test and examined Q–Q plots. The results indicated that the data for perceived research competence were not normally distributed (p < 0.001), necessitating the use of non-parametric tests. Specifically, we used the Wilcoxon signed-rank test to assess changes in perceived research competence across the three dimensions (research knowledge, research skills, and research attitudes) at two time points. We also calculated the Hodges–Lehmann estimator to quantify the median increase for each dimension of perceived research competence (Rosenkranz, 2010) and the effect size (rank-biserial correlation) to understand the practical significance of the observed changes (van Doorn et al., 2020).
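These three statistics can be computed in a few lines. The sketch below is an illustrative Python analogue (the study used R) with invented paired composites: the Hodges–Lehmann estimator is taken as the median of the Walsh averages of the paired differences, and the matched-pairs rank-biserial correlation, under one common convention, as the difference between positive and negative signed-rank sums divided by the total rank sum.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.stats import wilcoxon, rankdata

def hodges_lehmann(diff):
    """Median of the Walsh averages (d_i + d_j)/2 for i <= j."""
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(diff, 2)]
    return float(np.median(walsh))

def rank_biserial(diff):
    """Matched-pairs rank-biserial correlation from signed ranks."""
    diff = np.asarray(diff, dtype=float)
    diff = diff[diff != 0]                 # drop zero differences
    ranks = rankdata(np.abs(diff))
    pos = ranks[diff > 0].sum()
    neg = ranks[diff < 0].sum()
    return (pos - neg) / ranks.sum()

# invented perceived-competence composites for eight students, two time points
pre  = np.array([3.1, 2.8, 3.5, 3.0, 2.6, 3.2, 2.9, 3.4])
post = np.array([3.6, 3.0, 3.9, 3.4, 2.5, 3.8, 3.3, 3.7])
diff = post - pre
stat, p = wilcoxon(post, pre)
hl = hodges_lehmann(diff)
r_rb = rank_biserial(diff)
print(f"W = {stat}, p = {p:.4f}, HL = {hl:.2f}, r_rb = {r_rb:.2f}")
```

With mostly positive differences, the test is significant, the Hodges–Lehmann estimate is a positive median shift, and the rank-biserial correlation approaches 1.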
As for the actual research competence, the Kolmogorov–Smirnov test (p > 0.05) and the Q–Q plots showed a normal distribution, so RQ2 was addressed using a paired t-test to compare the results of the knowledge test at two time points. In addition, Cohen’s d was calculated to measure the practical significance of the results obtained.
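For the normally distributed knowledge-test scores, the paired t-test and paired-samples Cohen's d (mean difference divided by the standard deviation of the differences) can be sketched as follows, again with invented scores in Python:

```python
import numpy as np
from scipy.stats import ttest_rel

# invented knowledge-test scores (out of 22) for ten students, two time points
pre  = np.array([12, 14, 10, 15, 13, 9, 16, 11, 14, 12])
post = np.array([15, 16, 13, 18, 15, 12, 19, 14, 17, 15])
t, p = ttest_rel(post, pre)
diff = post - pre
d = diff.mean() / diff.std(ddof=1)   # Cohen's d for paired samples
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```

Note that other conventions (e.g., dividing by the pooled pre/post standard deviation) give different d values; the choice should be reported alongside the result.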
Finally, in order to directly compare perceived and actual research competence (RQ3) measured on different scales, we first normalised the composite scores using a min–max transformation that rescales the scores to a common range (0–1). This approach preserves the relative differences between the observations and enables meaningful comparisons across measures. After normalisation, a Wilcoxon signed-rank test (due to the non-normality of the composite scores, as shown by the Kolmogorov–Smirnov test and the Q–Q plots) was calculated with the Hodges–Lehmann estimator and the rank-biserial correlation as the effect size.
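The min–max step itself is a one-line transformation. The sketch below uses the observed minimum and maximum of each composite (the paper does not state whether observed or theoretical scale bounds were used; observed bounds are assumed here, with invented scores):

```python
import numpy as np

def min_max(x):
    """Rescale scores to [0, 1] using the observed minimum and maximum."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# invented composite scores on their native scales
perceived = np.array([2.4, 3.1, 3.8, 4.5, 2.9])   # Likert composite, 1-5
actual    = np.array([10, 14, 18, 22, 12])        # knowledge test, 0-22
perceived_n = min_max(perceived)
actual_n = min_max(actual)
print(perceived_n.round(2), actual_n.round(2))
```

After rescaling, both composites lie on a common 0–1 range, so paired differences between them become meaningful.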
At this point, it should be noted that although the Wilcoxon signed-rank test is based on the ranks of the differences between paired observations, reporting both the median and the interquartile range (IQR) provides a clear and interpretable measure of central tendency and variability, especially given the non-normal distribution of the data. In addition, the inclusion of the Hodges–Lehmann estimator supports the interpretation as it provides a non-parametric estimate of the median difference between paired observations.
5. Discussion
The present study was designed to address the gaps in the literature related to the development of research competence in pre-service teachers through a pre-test–post-test research design that utilised web surveys and knowledge tests to measure both perceived and actual research competence over time. The results of this study offer several original insights into the development of research competence among pre-service teachers and are presented below.
RQ1: How do pre-service teachers rate their perceived research competence at the beginning of the master’s programme and at the end of the first semester?
The results showed a significant increase in almost all measured dimensions of perceived research competence. We found that pre-service teachers rated their perceived research competence significantly higher at the end of the first semester than at the beginning of the master’s programme. This suggests that their engagement with research tasks and assignments during the research-based course (and the master’s programme itself) contributed to a higher rating of their perceived research competence. In terms of research knowledge, the greatest changes were observed in the understanding of different research approaches. This improvement can be attributed to the comprehensive coverage of both qualitative and quantitative research methods in the course. For example, students were involved in designing, administering and analysing questionnaires, and applying their knowledge of research methods and statistical techniques. They developed research plans outlining their study objectives, sampling strategies and data collection methods, demonstrating their ability to translate theoretical knowledge into practical applications. We believe that these activities led to an improvement in their perceived research knowledge. In addition, pre-service teachers also rated their perceived research skills better at the end of the first semester. In particular, the ability to conduct statistical analyses and interpret data showed one of the largest increases in perceived competence. This could be due to the research activities pre-service teachers were exposed to during the course (e.g., using research findings to interpret and present data using the Statistical Package for the Social Sciences (SPSS)).
Similarly, students reported greater confidence in research planning and in the preparation and presentation of research findings, which could be attributed to their active engagement in research-based activities and independent research assignments. In terms of attitudes towards research, the results were mixed. While positive attitudes towards research improved moderately and statistically significantly, changes in other aspects of students’ attitudes towards research were minimal, resulting in a modest overall change. This contrasts with the findings of previous studies by Sizemore and Lewandowski (2009), who reported that while students’ knowledge of research methods improved significantly after completing a research methods course, their attitudes towards research did not, and in some cases actually decreased, particularly in relation to the perceived benefits of research. Similarly, Wessels et al. (2018) emphasised that the demanding nature of research requires additional affective–motivational dispositions beyond cognitive competence and that students often face affective–motivational challenges during the research process. Our findings suggest that practical research experiences can foster positive research attitudes even if they do not affect other dimensions of research attitudes (e.g., evaluation of research). This suggests that while the cognitive benefits are evident, changing students’ attitudes towards research remains complex and may require pedagogical approaches that more directly address affective–motivational factors.
Overall, the results of our study are consistent with the findings of Gussen et al. (2023), who found that pre-service teachers experienced a significant increase in their self-assessed research competence in the cognitive domain at the end of the semester. Specifically, students felt that their perceived methodological skills and ability to reflect on research had improved, similarly to the observed increase in perceived research knowledge and skills in our study. In addition, they also found a decrease in affective–motivational aspects such as interest and enjoyment in research, which contrasts with our findings regarding students’ research attitudes. Furthermore, Van der Linden et al. (2012) found that students perceive conducting research and applying research findings to be more important (cognitive aspect) than appealing (emotional aspect); that they are more likely to express how important research is than to actually conduct it (behavioural aspect); and that they feel more competent in conducting research (self-efficacy) than they are enthusiastic about carrying it out or applying it. They suggest various approaches that can positively influence students’ attitudes towards research, such as using authentic research examples (showing students how research can directly impact teaching practice, and using research tasks that are directly related to the practical challenges faced by teachers) and collaborative learning.
RQ2: How does the actual research competence of pre-service teachers develop over the course of the master’s programme, especially from the beginning of the master’s programme to the end of the first semester?
Our results show a significant improvement in knowledge test scores, both statistically (p < 0.001) and practically (d = 0.98). In addition, students scored an average of 60% at the beginning of the semester, while they scored 72% at the end of the first semester. The improvement in knowledge test scores suggests that the research-based course in which the pre-service teachers participated successfully supported and increased their actual research competence. The research-based course structure gave students the opportunity to explore research methods and statistical techniques. This allowed them to gradually make a connection between theory and practical application as they learnt statistical methods for educational research, such as parametric and non-parametric tests, using SPSS. We believe that the course structure promoted critical thinking and a deeper understanding of research designs and methods, especially when critically analysing previously published master’s theses and scientific articles. Our findings are consistent with those of Böttcher-Oschmann et al. (2021), who showed that engaging students in authentic research activities where they go through the entire research process, apply statistical methods and critically analyse research designs leads to measurable gains in actual research competence. Furthermore, Magnaye (2022) found a significant relationship between pedagogical competence (classroom management and assessment) and research competence among pre-service teachers. According to him, practical skills and the ability to apply knowledge in authentic contexts are crucial for the development of research competence. Furthermore, Albareda-Tiana et al. (2018) found that pre-service teachers can develop and demonstrate actual research competence when they have the opportunity to engage in meaningful research activities (e.g., conducting research projects). Our findings are also supported by those of Aspfors and Eklund (2017), who found that when teacher education includes explicit research activities (e.g., research seminars and workshops in which students critically analyse existing studies and discuss methodological approaches), it strengthens pre-service teachers’ ability to effectively apply research methods in the educational context.
RQ3: Are there significant differences between the perceived and actual research competence of pre-service teachers at the beginning of the master’s programme and at the end of the first semester?
We found that the pre-service teachers tended to overestimate their actual research competence at the beginning of the master’s programme. More specifically, they rated their research competence higher than it actually was according to the knowledge test. This could be due to less practical experience or a limited understanding of how complex conducting educational research can be. However, as the semester progressed and they became more actively involved in the research-based course, as well as other master’s courses, their actual research competence improved, reflecting the skills and knowledge they had gained through the research-based teaching. This may suggest that as students became more research competent, they also developed a more realistic self-assessment of their abilities. This aligns with the findings of Saqipi and Vogrinc (2016), who found that pre-service teachers often focus more on the processes of research than on understanding the underlying objectives. Without adequate practical experience, students may have only a superficial idea of research activities, leading to an exaggerated self-perception. However, when they engage in authentic research experiences, their awareness of the challenges increases, leading to a re-evaluation of their research competence. In addition, Böttcher-Oschmann et al. (2021) observed that students initially overestimated their research competence, that their actual research competence improved the more they conducted research as part of their research-based projects and that their perceived research competence more closely matched their actual research competence. Böttcher-Oschmann et al. (2021) also discussed the phenomenon of response shift, where individuals adjust their internal standards and understanding when they have new experiences. This shift can lead to changes in self-evaluation that do not necessarily reflect actual changes in competence, but rather a deeper awareness of what competence means. In our study, the decrease in perceived competence despite an increase in actual research competence indicates such a response shift. As students became more immersed in research activities, they were able to develop a more nuanced understanding of the skills required, leading to a re-evaluation of their own skills.
6. Conclusions
This study makes several important contributions. One of the most important is the research design, which allowed us to measure and assess both perceived and actual research competence over time, thus providing a more comprehensive picture of the development of pre-service teachers’ research competence. First, we were able to measure and quantify the improvement in perceived and actual research competence over the course of the master’s programme. Our results show that pre-service teachers’ self-perceived competence improved significantly, and more importantly, that they improved their performance on the knowledge test by almost one standard deviation after participating in the research-based course. This shows that research-based teaching can significantly support the development of research competence. In addition, the study made it possible to uncover discrepancies between perceived and actual research competence, track how these constructs evolve with experience, and identify the phenomenon of response shift, in which students recalibrate their self-assessment standards as they gain practical experience. This finding suggests that practical experience causes pre-service teachers to better align their self-perceptions with their actual performance. By tracking these constructs over time, our study demonstrates that research-based teaching fosters both the development of research competence and a more accurate self-assessment process. The integration of research-based teaching into the course structure is another strength as it encouraged students to engage in research activities such as statistical analysis using SPSS, conducting literature reviews, working in teams, critically analysing published research, designing research projects and presenting their findings. 
This practical approach emphasises the importance of providing pre-service teachers with hands-on opportunities to engage with research, bridging the gap between academic knowledge and its practical application in an educational context. Furthermore, the use of validated measurement tools, including web-based questionnaires and knowledge tests, ensured the reliability and validity of the data collected, thus strengthening the methodological rigour of the study. Because the study captures both perceived and actual research competence, it provides a nuanced understanding of how these skills develop together, which has important implications for the design of pre-service teacher education curricula. For example, integrating objective assessments in conjunction with self-reflective practices can help pre-service teachers gain a more accurate understanding of their skills while promoting both competence and confidence in research. Finally, the study offers practical recommendations for the design of research-based teaching that prioritise the integration of research activities to ensure that students are both confident in their research skills and able to apply them effectively in educational contexts. These findings are not only valuable for improving pre-service teacher education, but also contribute to a wider discussion about promoting research competence in education professionals.
Study Limitations and Future Work
As with any research endeavour, some possible limitations of the present study must be taken into account. First, the sample size of the study decreased considerably from 110 to 58 participants when comparing perceived and actual research competence. To determine whether attrition from the initial 110 participants to the final 58 participants resulted in systematic bias, we compared available academic and study-related characteristics such as gender, age, field of study, and prior research experience between the initial and final samples using Chi-square tests. These analyses revealed no significant differences (p > 0.05), suggesting that the final sample was not systematically biased by dropouts. Nevertheless, the reduction in sample size may have affected the statistical power and limited the generalisability of the results. Future research should adopt strategies to improve participant retention (such as predefined identification codes) and consider employing statistical techniques, such as multiple imputation or sensitivity analysis, to address missing data. Second, the study only examined perceived and actual research competence at two points in time, at the beginning and end of the first semester, which limits the ability to assess long-term changes in competence development. In future studies, a more comprehensive longitudinal design with additional measurement time points and, ideally, a control group could help to clarify the causal effects of research-based teaching on the development of research competence. Third, this study focussed on only one faculty. Although the findings may be transferable to other faculties offering pre-service teacher education programmes (at least in Slovenia and other countries with a similar system of teacher education), future studies should extend this line of research to multiple institutions to validate and generalise the findings.
Fourth, external variables, such as motivation to study, which could influence both perceived and actual research competence, were not controlled for. Fifth, future studies should incorporate qualitative methods (e.g., interviews) to capture the cognitive and affective processes behind students’ evolving self-evaluations to further improve our understanding of response shift. These qualitative findings could demonstrate how exposure to authentic research experiences prompts students to critically reflect on and adjust their self-assessment standards. Finally, while the measurement instruments showed acceptable internal consistency and supported the factor structures through the EFA, the knowledge test primarily captured “establishing research”. Future studies should consider incorporating additional performance-based assessments to capture the multifaceted nature of research competence more comprehensively.