Article

Psychometric Analysis of a Scale to Assess Education Degree Students’ Satisfaction with Their Studies

by
Verónica Guilarte
1,*,
Alicia Benarroch-Benarroch
1 and
Carmen Enrique-Mirón
2
1
Department of Science Education, University of Granada, 52071 Melilla, Spain
2
Department of Inorganic Chemistry, University of Granada, 52071 Melilla, Spain
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(6), 660; https://doi.org/10.3390/educsci15060660
Submission received: 26 February 2025 / Revised: 15 May 2025 / Accepted: 19 May 2025 / Published: 27 May 2025
(This article belongs to the Section Teacher Education)

Abstract

This study arises from the need for brief, valid, reliable, and specific instruments to assess education degree students’ satisfaction (EDSS) with their studies. For this purpose, an EDSS Scale, consisting of 19 Likert-type items, was administered to a sample of 97 students from the Faculty of Education and Sport Science in Melilla (University of Granada, Spain). The scale was validated through an exploratory factor analysis, which identified four factors: (1) Theoretical training and teaching competences, (2) Academic support, (3) Practical training, and (4) Resources and facilities. These factors account for 63% of the total explained variance and present Cronbach’s alpha values higher than 0.77, showing the good internal consistency of the instrument. Additionally, a confirmatory factor analysis (CFA) of the model was performed using AMOS V24, which confirmed the instrument’s structure. The CFA results indicate an acceptable model fit (CMIN/DF = 1.70, RMSEA = 0.068, CFI = 0.878), and internal consistency was high (α and ω ≈ 1.00), further supporting the instrument’s validity and reliability. According to these results, the EDSS Scale appears to be a concise instrument with strong psychometric properties for empirically assessing students’ satisfaction with their studies in the field of education. This highlights its potential usefulness not only for Spanish higher education institutions but also for other European institutions.

1. Introduction

At present, higher education programmes in teacher training are expected to prepare educators who are fully equipped to meet the demands and needs of today’s schools and to positively influence student achievement (Appel, 2020).
Initial teacher education is the first stage in a teacher’s career; it lays the foundations of a professional mindset, giving the trainee teacher a set of tools and knowledge for future teaching (Caena, 2014). As Darling-Hammond (2010) points out, we may be living through both the best and the worst of times for teacher education: never before have so many opportunities, hopes, and responsibilities been placed on it. The latest Teaching and Learning International Survey (TALIS) report (OECD, 2019, 2020) also emphasises the importance of preservice teacher education as the basis for teacher training and professional development. In this context, it is of the utmost importance to know the opinions and perceptions of education degree students regarding the theoretical and practical training received during their studies, as well as to investigate the training required to become a competent teacher in the 21st century. It is therefore essential to have appropriate instruments that allow for dynamic application both during and at the end of the studies. In this paper, we validate a short scale with good psychometric properties to measure students’ satisfaction with the academic and practical training received during their education degree studies.

2. Theoretical Framework

2.1. Initial Teacher Training

In 2022, the Spanish Ministry of Education and Vocational Training presented a document containing 24 proposals for the improvement of the teaching profession. They focused, among other things, on preservice teacher training. The idea behind the proposals was learning by doing, within the framework of lifelong learning, as a way of improving performance in any profession. The document highlighted the current and critical social demand for improvement in the teaching profession.
The requirements for becoming a teacher, as well as the professional pressure on teachers (Edling & Simmie, 2020; Lubienski & Brewer, 2019), have increased significantly in recent years (Ministerio de Educación, Cultura y Deporte, 2014). Cinque and Rodríguez-Mantilla (2020) describe the new demands that characterise 21st-century society, including interculturality, growing student diversity, the importance of multilingualism, increased difficulty in learning scientific subjects, and the introduction of new technologies as teaching resources.
Furthermore, the results of international assessments of education systems’ performance (e.g., the Programme for International Student Assessment, PISA, and TALIS), along with research aimed at identifying the most influential variables in quality teaching (Hanushek et al., 2019), underscore the critical role of teachers in the development of student learning (Rasheed & Rashid, 2024). Moreover, the impact of teachers is shown to be more significant than that of other factors, such as material working conditions, lower student–teacher ratios, and financial resources.
Having competent teachers who are equipped to address emerging social challenges is one of the most significant challenges facing contemporary education systems (Fernández et al., 2016). Therefore, the initial training of preservice teachers should be designed to provide them with the fundamental knowledge and competencies necessary to perform their professional duties effectively (Caena, 2014). There is broad consensus that initial teacher education, along with the professional induction period, is among the most important factors in ensuring effective teachers in classrooms.
Studies on initial teacher education have addressed a wide range of topics. Our study focuses on one of them: students’ satisfaction with their university experiences.
It is important to note that studies on student satisfaction have several limitations. First, they do not ensure an accurate measurement of learning quality, as they often reflect perceptions rather than actual educational outcomes. Furthermore, they tend to be subjective and susceptible to personal biases. These assessments may also be influenced by factors unrelated to pedagogy—such as workload or the student–teacher relationship. Their usefulness in driving substantive improvements is limited unless complemented by additional sources of information. Additionally, there is a risk of unreflective responses due to respondent fatigue, as well as the inappropriate use of such instruments as the sole basis for evaluating teaching performance (Langan & Harris, 2024; Marques et al., 2025), hence the importance of using brief and validated scales.

2.2. Students’ Satisfaction in Education

According to Pérez-Juste (2000), satisfaction is one of the most widely accepted quality dimensions across the different models proposed. It concerns those who are involved, that is, both those who design and improve the product or service offered and those who are its users or recipients.
In the specific context of education, student satisfaction, given that students are the primary beneficiaries of the services offered during their initial training, serves as a key indicator used by universities to assess educational quality. From this perspective, students are regarded as the principal recipients or “clients” of the university and, therefore, their expectations as well as their perceptions become primary targets for improvement (Rasheed & Rashid, 2024). Student satisfaction can be defined as students’ sense of well-being when their academic expectations are fulfilled through the activities carried out by the institution (Surdez et al., 2018). In other words, it refers to the evaluation students make by comparing aspirations with their actual achievements. Its importance lies in the fact that student satisfaction enhances academic performance, promotes learning, and reduces student dropout rates (Lorenzo-Quiles et al., 2023; Medrano & Pérez, 2010). Additionally, it also strengthens the institution’s image and prestige.
Teaching quality, information quality, and service efficiency are among the factors that have a significant impact on student satisfaction in higher education institutions (Rasheed & Rashid, 2024). Academic support, including teacher behaviour, also influences the student experience (Bell, 2021; Geier, 2020) as well as students’ social experience (Ergun & Adıgüzel, 2023). Therefore, service quality and student satisfaction are closely linked and collectively contribute to fostering student loyalty to educational institutions (Borishade et al., 2021).
Gento-Palacios (2002) studied the importance of every component within the quality model for education institutions. He demonstrated that in most of the countries analysed, student satisfaction is the most important factor in determining quality. Understanding students’ satisfaction allows for the analysis of institutional effectiveness, as it identifies the positive and negative aspects, the latter being essential for devising strategies to improve education teaching (Kanwar & Sanjeeva, 2022).
For these reasons, research on student satisfaction is highly valuable, both during their studies and upon completion. The objectives of such research can be divided into those that investigate students’ general expectations with the university, and those that focus on specific components of the teaching–learning process. The latter aspect is of greater interest, as it provides information about preservice teachers’ perceptions of their training, both practical and theoretical.
In this context, it is necessary to highlight the importance of finding reliable ways to measure students’ satisfaction because it will allow universities to know their reality, compare it with other institutions, and analyse it in the long term.

2.3. Expectations of Trainee Teachers in the 21st Century

Several studies analyse students’ perceptions regarding the training received in education degrees. However, they mainly focus on the aspects they consider most relevant to learn, the competencies and skills to be developed, and the resources and tools to be used (Clark & Byrnes, 2015; Bozu & Aránega, 2017).
These studies aim to understand how training programmes can be improved to prepare future teachers and meet the present and future training demands of early childhood and primary school children. There is no doubt about the importance of considering students’ beliefs, values, and motivations when designing learning experiences (Buehl & Alexander, 2001).
According to students, some areas requiring more attention in preservice teacher training include classroom management, particularly the classroom environment and student behaviour; addressing the needs of students with special educational needs (Manso & Garrido-Martos, 2021); the acquisition of effective teaching techniques and strategies; and the ethical, legal, and professional responsibility of a teacher in the classroom (Clark & Byrnes, 2015). In this context, emotional and interpersonal variables, such as understanding, kindness, and compassion, are aspects highlighted by preservice teachers when describing the qualities of a ‘good teacher’ (Weinstein, 1989). Additionally, preservice teachers stress the need for higher education institutions to integrate theory and practice in a continuous and cohesive manner (Stavridis & Papadopoulou, 2022). Conversely, the areas least valued by preservice teachers, or in which they report feeling adequately prepared, include using reflection on teaching practice as a strategy for improvement, understanding children’s development, and working effectively with other teachers and school staff (Clark & Byrnes, 2015).
Regarding technological literacy, preservice teachers consider themselves to have sufficient knowledge and skills to use technology in teaching activities (Dinçer, 2018; Boyraz & Rüzgar, 2024). However, several studies suggest that, while they are familiar with information and communication technologies (ICT), they show a low level of technological literacy, that is, of the knowledge and skills needed to integrate ICT into education in ways that generate meaningful learning in school children (Bueno-Alastuey & Villarreal, 2021).
On the other hand, the COVID-19 health crisis revealed numerous shortcomings within the traditional face-to-face education system, including the scarcity of materials, limited access to ICT equipment, and the insufficient preparation of both university faculty and students for virtual learning environments (Mohammed et al., 2022; Moorhouse, 2024; Quispe-Prieto et al., 2021). This situation compelled higher education institutions to implement measures and support programmes aimed at assisting both educators and learners in meeting the evolving demands of the educational landscape. A similar phenomenon has emerged in recent years with the adoption of artificial intelligence, requiring preservice teachers to adapt to its use in their practical training as well as in the development of educational materials and resources (Moral-Sánchez et al., 2023). In this context, some studies have identified essential factors to be considered when designing a teacher preparatory programme focused on artificial intelligence (Sanusi et al., 2024).

2.4. Studies and Instruments to Measure Students’ Satisfaction

Most studies on the perception of undergraduate students in Spain are institutional studies carried out by universities and regional or national public institutions. A quantitative methodology, mainly based on surveys, is the most widely used method for data collection (López-Roldán & Fachelli, 2015). This technique offers the advantage of collecting information from a large sample. The objectives of the studies carried out by higher education institutions are diverse, but some subjects stand out in most of them: incorporation into the labour market (Agencia de Calidad y Prospectiva Universitaria de Aragón, 2018; Luque et al., 2015; Ministerio de Ciencia, Innovación y Universidades, 2019), students’ and graduates’ competencies and their suitability for the labour market (Agencia de Calidad y Prospectiva Universitaria de Aragón, 2018), or students’ satisfaction with the university and its facilities (Luque et al., 2015).
Unlike the extensive body of literature on employability, studies focused on students’ opinions on their training are scarce. Furthermore, most of these studies additionally address many other factors, such as university degree choice, scholarships or grants, participation in mobility programmes, and simultaneous pursuit of studies and work, among other aspects (Servicio Integrado de Empleo de la Universidad Politécnica de Valencia, 2008).
Different studies have been conducted to measure university students’ satisfaction in different countries. The review carried out by Mireles-Vázquez and García-García (2022) concludes that students’ satisfaction is a complex concept that includes academic aspects, facilities, and social relationships. The authors note that most instruments focus on teaching practice but neglect facilities and social relationships. However, it is necessary to analyse every parameter involved in students’ satisfaction. Other works, such as the study conducted by Kanwar and Sanjeeva (2022), focus on teacher effectiveness, the academic environment, and the quality of facilities but overlook satisfaction with practical training, which is a crucial component of higher education programmes.
According to Gil (2018), instruments designed to assess students’ perception of their university studies mainly focus on general satisfaction with theoretical and practical training, either during their studies or after graduation. This lack of specificity limits the ability to conduct a comprehensive analysis of various aspects related to student training, such as their perception of teaching quality, teaching methodologies and activities, assessment criteria and instruments, course offerings and specialisation areas, or communication with teaching staff, among others.
Moreover, the analysis of these questionnaires is mainly descriptive, often aggregating results from a large number of students without examining in depth the relationships between different variables. There are only a few examples of instruments for assessing students’ perception or satisfaction, such as the work by Ruiz-Gutiérrez et al. (2021), who developed and validated an instrument to assess the satisfaction of education trainees during practical training in primary and secondary schools.
Regarding university students’ satisfaction, Gento and Vivas (2003) designed and validated an instrument (SEUE survey). It consisted of 93 items, distributed into 10 groups, and included very different aspects, such as basic needs, offered services, teaching methods, personal achievements and self-fulfilment. Also noteworthy is the work of Mejías and Martínez (2009), who developed an instrument to measure student satisfaction with the Industrial Engineering degree programme at the University of Carabobo (Venezuela). For the instrument design, they conducted a content analysis of the models used by universities in Mexico, Puerto Rico, Peru, and Venezuela, identifying 52 items for student satisfaction in higher education. These items were distributed across four conceptual dimensions: teaching, academic organisation, university life, and university facilities and services. Additionally, Douglas et al. (2006) developed an instrument to measure student satisfaction at the Faculty of Business and Law at John Moores University, UK. The questionnaire (Likert-type) consisted of two sections, covering 57 questions, and focused on the university facilities and overall student satisfaction.
The Students’ Satisfaction Inventory, developed by Elliott and Shin (2002), is another comprehensive instrument, consisting of 116 items (Likert-type statements) that cover a wide range of college experiences grouped into 11 dimensions.
Some studies on satisfaction, particularly in higher education, have drawn from customer relations research in service enterprises. For example, the instrument known as SERVQUAL has been applied in many countries (e.g., Candelas et al., 2013). Candelas et al. (2013) identified six satisfaction dimensions, including academic, administrative, and complementary aspects, academic content, environment, and relationships.
While these instruments are extensive (i.e., containing more than 50 items), their length can limit completion rates among students, thereby affecting data quality. Hence, one of the main objectives of this study is to validate an instrument with a reduced number of items.
Furthermore, regarding student satisfaction in education degrees, it is key to assess satisfaction with the initial teacher training (i.e., satisfaction with practical training at early childhood and primary schools). This practical training constitutes a significant portion of the total hours in education degrees and represents students’ first work experience. As discussed in the previous section, this aspect is essential for students to acquire the necessary competencies to become teachers. Despite the importance of external practice, this aspect is not included in institutional satisfaction surveys as they do not consider the particularities of the different degrees.
In this context, and due to the scarcity of validated instruments to identify undergraduate students’ perceptions of education degrees, this paper addresses the validation of a short scale with good psychometric properties to measure students’ satisfaction with the training received during undergraduate education studies. The initial hypothesis is that a shorter scale (i.e., one with a reduced number of items) can yield good psychometric results in evaluating education degree students’ satisfaction.

3. Materials and Methods

3.1. Sample Characteristics

This study is designed as instrumental research, as it addresses problems aimed at verifying the psychometric properties of an instrument (Montero & León, 2007). A non-probabilistic, purposive sample of 97 students from the Faculty of Education and Sport Science of Melilla (University of Granada) was selected according to the following criteria: (1) all students were enrolled in an education degree (early childhood education, primary education, or social education); and (2) students were in the third or fourth year of their studies, to guarantee that they had a broader view of the degree (i.e., first- and second-year students were excluded). This sample represents 22% of the total of 437 students (107 in the early childhood education degree, 207 in the primary education degree, and 123 in the social education degree) who were studying at the faculty during the 2018–2019 academic year. Ages ranged from 19 to 59 years, with an average of 25.04 years (SD = 5.911). The main sample characteristics in terms of gender, age range, cultural origin, city of birth, city of residence, university entrance route, and education degree choice are shown in Table 1.

3.2. Instrument

The scale for assessing education degree students’ satisfaction with their studies (EDSS) was taken from the questionnaire developed by Herrera et al. (2021). This questionnaire, comprising 45 items, is organised into four sections: the first section (19 items) addresses various aspects of the undergraduate university experience; the second section (6 items) gathers information on students’ expectations of pursuing postgraduate studies; the third section (16 items) concerns work experience; and the fourth section (4 items) focuses on the alignment between employment and education studies. Additionally, the questionnaire includes several questions that collect socio-demographic and academic data. The content of this detailed questionnaire was validated by expert judgement, and the overall reliability of the instrument was 0.75 (Cronbach’s alpha). The questionnaire uses a variety of response formats, including four-point Likert scales, dichotomous items, multiple-choice questions, and open-ended responses.
For the research presented in this study, only question 13 from the first section of the previously described questionnaire was selected, as it directly pertains to the object of study: students’ satisfaction with their undergraduate studies. The options presented in this question (a total of 19) thus constitute the EDSS Scale. The EDSS Scale is therefore composed of 19 items, each with four Likert-type response options (ranging from 1: inadequate to 4: very adequate), where higher scores indicate a more positive evaluation of academic training (Appendix A). The internal consistency of the 19 items of the EDSS Scale was estimated through the Cronbach’s alpha and McDonald’s omega coefficients, obtaining acceptable values in both cases (0.917 and 0.914, respectively).
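For readers unfamiliar with the coefficient, the sketch below shows how Cronbach’s alpha is computed from raw Likert responses. This is illustrative pure-Python code with made-up data; the study itself obtained its coefficients with SPSS:

```python
def variance(xs):
    """Sample variance (n - 1 denominator), as used by SPSS."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(data):
    """Cronbach's alpha for `data`: one row per respondent,
    one column per item (e.g. 1-4 Likert scores)."""
    k = len(data[0])                                # number of items
    items = list(zip(*data))                        # transpose to item columns
    item_vars = sum(variance(list(col)) for col in items)
    totals = [sum(row) for row in data]             # each respondent's total
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Made-up responses from four respondents to two items:
example = [[1, 2], [2, 2], [3, 4], [4, 4]]
print(round(cronbach_alpha(example), 3))  # → 0.941
```

Alpha approaches 1 as items covary more strongly relative to their individual variances, which is why the scale’s overall value of 0.917 indicates high internal consistency.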

3.3. Procedure

The list of students for the 2018–2019 academic year was requested from the General Secretary’s Office of the University of Granada to ascertain the total population and thus estimate the sample size. The data were collected using an online form: all students were sent the questionnaire via Google Docs. Students completed it voluntarily, after giving informed consent, along with their socio-demographic and academic data.
While applying the EDSS Scale, the ethical guidelines established in the Declaration of Helsinki were followed, along with the protocol approved by the Ethics Committee at the University of Granada (approval number 1556/CEIH/2020).

3.4. Data Analysis

For the data analyses, a preliminary tabulation and screening were carried out, and invalid surveys were removed. The SPSS v.24 software was used for this task, as well as for the subsequent normality tests, factor analysis, and analysis of statistically significant differences.
Normality was assessed using the Kolmogorov–Smirnov test, with all variables yielding p < 0.001, indicating that they did not follow a normal distribution. An exploratory factor analysis (EFA) was then conducted using Principal Component Analysis (PCA) with Varimax rotation and Kaiser normalisation, retaining items with a communality higher than 0.40. The suitability of the data for factor analysis was determined with the Kaiser–Meyer–Olkin (KMO) statistic and Bartlett’s test of sphericity, using SPSS v.24. Subsequently, a confirmatory factor analysis (CFA) was carried out using AMOS version 24.
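As a sketch of one of these suitability checks, Bartlett’s test of sphericity compares the item correlation matrix R against an identity matrix; its statistic can be computed from the determinant of R, assumed here to have been obtained beforehand (e.g., from SPSS). The code and numbers below are illustrative only:

```python
import math

def bartlett_sphericity(n, p, det_r):
    """Bartlett's test of sphericity.
    n: sample size, p: number of items,
    det_r: determinant of the p x p item correlation matrix.
    Returns the chi-square statistic and its degrees of freedom."""
    chi2 = -((n - 1) - (2 * p + 5) / 6.0) * math.log(det_r)
    df = p * (p - 1) // 2
    return chi2, df

# With n = 97 and p = 19 (as in this study), an identity correlation
# matrix (det = 1) yields a statistic of zero: no shared variance to
# factor. The smaller the determinant, the larger the statistic.
chi2, df = bartlett_sphericity(97, 19, 1.0)
```

A significant statistic (as reported in Section 4.1) rejects the hypothesis that R is an identity matrix, supporting the use of factor analysis.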

4. Results

4.1. Exploratory Factor Analysis (EFA)

The KMO sample adequacy showed a value of 0.850, and Bartlett’s test of sphericity was found to be significant (χ2 = 924.768; p ≤ 0.001). Therefore, the necessary conditions were met to perform the EFA and to obtain distinct and reliable factors (Glick et al., 2018; Duarte et al., 2019).
The reliability of the scale was then calculated using Cronbach’s alpha and McDonald’s omega coefficients; the overall values were 0.917 and 0.914, respectively. Thus, the instrument showed acceptable internal consistency and an appropriate factor distribution for measurements (Asare & Manoj, 2014). The total number of factors was determined based on eigenvalues higher than one (Kaiser, 1991). The PCA extraction method and the Varimax rotation method with Kaiser normalisation were used.
The rotated factor matrix regrouped the 19 items into four factors, representing 62.78% of the total explained variance, with factor loadings between 0.406 and 0.877. Likewise, the four factors presented eigenvalues higher than 1, with Cronbach’s alpha values for each dimension ranging from 0.772 to 0.864 (Table 2). These values are above the minimum required by Nunnally and Bernstein (1994). The McDonald’s omega coefficient values ranged from 0.777 to 0.868 (Table 2), showing good internal consistency (McDonald, 1970). Table 3 shows the item loadings for each factor or dimension, along with the eigenvalues, the explained variance of each factor, and the cumulative explained variance.
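In a PCA of standardised items, the total variance equals the number of items, so each factor’s eigenvalue maps directly to a percentage of explained variance (eigenvalue / 19 × 100 here). The sketch below applies the Kaiser criterion; the eigenvalues are back-calculated from the percentages reported in this section and are therefore approximate:

```python
def kaiser_retain(eigenvalues, n_items):
    """Retain components with eigenvalue > 1 (Kaiser criterion) and
    convert each retained eigenvalue to % of total variance explained.
    Assumes PCA on a correlation matrix (total variance = n_items)."""
    retained = [e for e in eigenvalues if e > 1]
    pct = [100 * e / n_items for e in retained]
    return retained, pct

# Approximate eigenvalues implied by the reported variance percentages
# (41.4%, 8.3%, 7.4%, 5.6% of 19 items), plus two sub-threshold values:
eigs = [7.87, 1.58, 1.41, 1.06, 0.94, 0.71]
retained, pct = kaiser_retain(eigs, 19)
print(len(retained))  # → 4
```

With these values, four factors are retained and their percentages sum to roughly the 62.8% of cumulative variance reported above.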
The first factor comprises eight items and has been called ‘Theoretical training and teaching competence’ because the included items relate to theoretical aspects of teaching (Table 3). It is the most significant factor, as it explains 41.4% of the total variance of the students’ perceptions of the training received during their undergraduate studies. This factor is also the one with the highest reliability value (Cronbach’s alpha: 0.864; McDonald’s omega: 0.868).
The second factor is composed of five items and has been titled ‘Academic support’. The items included in this dimension refer to aspects related to counselling and guidance, courses and expertise, and teacher communication with students. This dimension explains 8.3% of the variance of the students’ training experiences. It also presents an acceptable reliability, exceeding the minimum reference value (Cronbach’s alpha: 0.772; McDonald’s omega: 0.777). These first two dimensions, together, contribute to explaining almost 50% of the total variance.
The third dimension or factor is composed of four indicators. This is ‘Practical training’, as it includes the more pragmatic aspects, such as practical training in educational institutions and preparation for future professional integration. This dimension contributes 7.4% to the explained variance and shows good reliability (Cronbach’s alpha: 0.802; McDonald’s omega: 0.810).
Finally, the fourth dimension, ‘Resources and facilities’, includes two items and contributes to explaining 5.6% of the variance in students’ training experiences. The reliability of this dimension is high (Cronbach’s alpha: 0.862; McDonald’s omega: 0.850).
Overall, the four dimensions together explain 62.7% of the variance in students’ perceptions of the training received during undergraduate education.
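McDonald’s omega, reported alongside alpha for each dimension, can be obtained from the standardised factor loadings of a congeneric (single-factor) model. A minimal sketch follows, using hypothetical loadings rather than the study’s actual values:

```python
def mcdonald_omega(loadings):
    """McDonald's omega for one dimension, given standardised factor
    loadings; each item's error variance is 1 - loading**2."""
    s = sum(loadings)
    error = sum(1 - l * l for l in loadings)
    return s * s / (s * s + error)

# Hypothetical loadings for a four-item dimension:
print(round(mcdonald_omega([0.7, 0.7, 0.7, 0.7]), 3))  # → 0.794
```

Unlike alpha, omega does not assume equal loadings across items, which is why reporting both coefficients, as done here, is generally recommended.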

4.2. Confirmatory Factor Analysis (CFA)

Subsequently, a confirmatory factor analysis (CFA) was carried out to validate the hierarchical structure of the instrument’s items and to verify the relationships between the defined variables. This analysis is effective for checking the suitability and adequacy of a questionnaire as a measurement instrument. The CFA is shown in Figure 1, where circles symbolise latent variables and squares identify observed variables (Wong et al., 2017). Single-headed arrows imply an assumed direction of influence, and double-headed arrows show the covariances between the four latent variables.
For the CFA, the CMIN statistic (the minimum value of the discrepancy function, which follows a χ2 distribution) was computed (Table 4). Subsequently, following the guidelines offered by Lecerf and Canivez (2018), the comparative fit index (CFI), the Tucker–Lewis index (TLI), and the root mean square error of approximation (RMSEA) were analysed to assess the fit between the observed covariance matrix and the covariance matrix implied by the model, according to the goodness-of-fit indices.
According to the results obtained, the CFI, TLI, and RMSEA indices confirm that the latent factors analysed exhibit favourable internal consistency (Murphy et al., 2019). Following Morin et al. (2016), CFI and TLI values are acceptable when they are close to 1 (Table 5). Although they do not reach the excellent threshold of 0.95, both values exceed the minimum acceptable range of 0.80–0.85, indicating a reasonable model fit. The IFI (0.884) also supports these results. However, the NFI (0.759) and RFI (0.686) values are moderate, suggesting that the model could be improved, particularly if a stronger fit is expected in psychometric studies. As for the RMSEA, following Rigdon (1996), a value lower than 0.08 is considered acceptable, and a value close to 0.06 is considered relatively good (Table 6).
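For reference, the two headline indices follow standard closed-form definitions. The sketch below shows how they are computed from the model and baseline chi-square values; the inputs are illustrative, not the AMOS output of this study:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (Steiger-Lind):
    per-degree-of-freedom model discrepancy, corrected for sample size."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_base, df_base):
    """Comparative fit index: improvement of the model over the
    baseline (independence) model, on the non-centrality scale."""
    d_model = max(chi2_model - df_model, 0)
    d_base = max(chi2_base - df_base, d_model, 0)
    return 1.0 if d_base == 0 else 1 - d_model / d_base

# Illustrative values: a model chi2 close to its df gives a small RMSEA.
print(round(rmsea(100, 50, 101), 3))  # → 0.1
```

Both indices depend on the excess of χ2 over its degrees of freedom, which is why a CMIN/DF ratio near 1, as reported for this model, goes hand in hand with acceptable RMSEA and CFI values.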
Finally, once the questionnaire and its items had been validated and the CFA performed, the Cronbach’s alpha and McDonald’s omega coefficients were calculated again to assess the internal consistency of the instrument. Given that there were no modifications to its structure, the same values were obtained as specified above (Table 2). The results for the different dimensions are close to 1.00, indicating that the instrument has good reliability. The CFA results reaffirm the structure of the instrument.

5. Discussion and Conclusions

The main objective of this research was to validate an instrument designed to measure students’ satisfaction with their initial academic training in education degree programmes, called the EDSS Scale. To this end, the psychometric properties of this scale were analysed through EFA and CFA.
The EDSS Scale was extracted from a broader questionnaire, the content of which was validated through expert judgement involving seven researchers from the University of Granada.
Most studies on students’ satisfaction are carried out at the institutional level, generally overseen by university quality assurance departments. These studies typically adopt a general approach and aim to generate data for monitoring and accreditation processes. However, few studies specifically examine students’ satisfaction with their initial university training. Therefore, the instruments used are generally lengthy (exceeding 50 items) and broad in scope (covering aspects such as scholarships, employability, or degree selection), which often leads to respondent fatigue and neglects the specific characteristics of individual degree programmes.
In contrast, the EDSS Scale is a concise instrument (19 items) that focuses on the key aspects of education degrees, such as satisfaction with practical training in early childhood and primary education settings, among others.
The main theoretical categories of the EDSS Scale were identified through EFA, which revealed four factors: (1) Theoretical training and teaching competence, (2) Academic support, (3) Practical training, and (4) Resources and facilities. These factors explain 62.8% of the total variance, and the results are consistent with those reported in previous studies. The model aligns with the comprehensive approach to student satisfaction proposed by Mireles-Vázquez and García-García (2022) and with the study by Hok et al. (2021), as it encompasses academic, infrastructural, and relational dimensions.
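The four retained factors are consistent with the Kaiser–Guttman criterion (eigenvalues of the item correlation matrix above 1; see the eigenvalues reported in Table 3, all of which exceed 1). A minimal sketch of that retention rule, offered purely as an illustration and not as the study’s actual analysis pipeline:

```python
import numpy as np

def kaiser_factor_count(X):
    """Count factors retained by the Kaiser-Guttman rule:
    eigenvalues of the item correlation matrix greater than 1."""
    corr = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())

# Toy data: two independent pairs of duplicated items -> two factors retained.
X = np.array([[1, 1, 2, 2],
              [2, 2, 5, 5],
              [3, 3, 3, 3],
              [4, 4, 1, 1],
              [5, 5, 4, 4]])
print(kaiser_factor_count(X))  # 2
```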
Furthermore, the instrument largely coincides with the framework proposed by Mejías and Martínez (2009), although two key differences were observed: (a) the practical training dimension in education degrees emerges as an independent factor, and (b) university life did not emerge as a significant dimension within education degrees. Other factors, such as ‘Academic support’ and ‘Resources and facilities’, are also regarded as dimensions by Mejías and Martínez (2009); they highlight the importance of personal and caring attention for students in education degrees, as well as the availability of facilities and resources.
The importance of practical training has been highlighted in numerous studies, both national (Ruiz-Gutiérrez et al., 2021; Zabalza, 2011) and international (Clark & Byrnes, 2015; Darling-Hammond, 2010). This would explain its presence as an independent indicator of satisfaction.
Considering our results, the EFA demonstrated sound psychometric performance across all items, while the CFA indicated an adequate model fit (CFI = 0.878; TLI = 0.841; RMSEA = 0.068). Additionally, the instrument showed good internal reliability (α and ω above 0.77 for every factor), supporting its use as a valid and reliable tool.
In conclusion, the EDSS Scale represents a brief, valid, and reliable instrument for empirically assessing student satisfaction in education degree programmes. It offers a valuable tool for the continuous improvement of educational quality. As noted by Evers et al. (2016), initial teacher education is the first step toward professional practice, and the availability of appropriate assessment instruments is essential. The EDSS Scale contributes to this aim by providing concise and structured feedback, helping teacher education institutions gather specific insights into the effectiveness of their training programmes.

6. Limitations and Perspectives

The main limitations can be summarised as follows:
  • Sample size: it would be desirable to enlarge the sample, which would strengthen the statistical analyses of the study. The small number of education students enrolled in each academic year at the Faculty of Education and Sport Science on the Melilla campus (University of Granada) is a hindrance in this respect. In addition, when small samples are used, care must be taken not to draw general or hasty conclusions.
  • Type of study (cross-sectional): a longitudinal study, with repeated measurements, would allow the instrument’s stability over time to be assessed. Moreover, a longitudinal design would yield interesting and valuable outcomes, because satisfaction with academic training is an ongoing process that requires satisfaction data throughout the entire training period.
Regarding future perspectives, it would also be of interest to validate the factor structure at other education faculties and universities, as well as in other undergraduate and even postgraduate programmes, in order to increase the generalisability of the instrument’s results. In future work, we aim to test the instrument’s applicability in the nursery degree and the master’s degree in teacher training for secondary education at the University of Granada (Melilla campus).

7. Implications for Practice

The EDSS Scale could help degree programme coordinators and teaching staff gain insight into the key dimensions that should be considered to enhance students’ satisfaction with their studies. In addition, the instrument allows professors, lecturers, and academic leaders to take into account the diverse aspects that shape students’ training processes, such as theory, practice, interpersonal relations, resources, and facilities. Based on the EDSS Scale results, university leaders will be able to identify the strengths and weaknesses of degree programmes and adopt specific measures to improve their quality.

Author Contributions

Conceptualization, V.G. and A.B.-B.; methodology, C.E.-M.; formal analysis, C.E.-M.; investigation, V.G., A.B.-B. and C.E.-M.; discussion, V.G. and A.B.-B.; writing—original draft preparation, V.G., A.B.-B. and C.E.-M.; writing—review and editing, V.G. and A.B.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This study was conducted as part of the project ICT Innovation for the Analysis of the Training and Satisfaction of Students and Graduates of Degrees in Early Childhood and Primary Education, and the Assessment of Their Employers: A Transnational Perspective (reference: B-SEJ-554-UGR20, 2021–2023), FEDER Andalusia 2014–2020 Operational Programme (R+D+I Projects). Funding was provided by the Unit of Excellence of the University Campus of Melilla, University of Granada, Spain (reference: UCE-PP2024-02).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee at the University of Granada. The approval number is 1556/CEIH/2020 (approval date: 4 June 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy reasons, as they are part of a transnational project.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ACPUA: Agencia de Calidad y Prospectiva Universitaria de Aragón
CFA: Confirmatory factor analysis
CFI: Comparative fit index
CMIN: Minimum value of discrepancy (chi-square)
DF: Degrees of freedom
EDSS Scale: Education Degree Students’ Satisfaction with their Studies Scale
EFA: Exploratory factor analysis
GFI: Goodness-of-fit index
ICT: Information and communication technologies
IFI: Incremental fit index
KMO: Kaiser–Meyer–Olkin
MCIU: Ministerio de Ciencia, Innovación y Universidades
MECD: Ministerio de Educación, Cultura y Deporte
NFI: Normed fit index
NPAR: Number of parameters for each model
PCA: Principal component analysis
PCK: Pedagogical content knowledge
PISA: Programme for International Student Assessment
RFI: Relative fit index
RMSEA: Root mean square error of approximation
SD: Standard deviation
TALIS: Teaching and Learning International Survey
TLI: Tucker–Lewis index

Appendix A. Instrument for the Assessment of Education Degree Student Satisfaction with Their Studies (EDSS)

Instructions: Please rate each of the following statements with the following values: 1 (inadequate); 2 (not very adequate); 3 (somewhat adequate); and 4 (very adequate).
  • Academic counselling and guidance
  • Degree programme
  • Theoretical training
  • Practical training
  • Mastery of teaching staff
  • Skills and competences development
  • Teaching methodologies and activities
  • Continuous evaluation of student work
  • Assessment criteria and tools
  • Quality of teaching, in general
  • Facilities (classrooms, seminars, laboratories, etc.)
  • Technological resources for teaching (audio-visual, multimedia, etc.)
  • Courses and expertise
  • Teacher communication in classroom
  • Academic standards
  • Teacher attention outside the classroom (e.g., in tutorials)
  • Educational environment with other undergraduates
  • Practical work in other institutions (e.g., schools or other educational and social institutions)
  • Preparation for future professional integration

References

  1. Agencia de Calidad y Prospectiva Universitaria de Aragón. (2018). Encuesta sobre inserción laboral de egresados del Sistema Universitario de Aragón, avance de resultados. ACPUA.
  2. Appel, M. (2020). Performativity and the demise of the teaching profession: The need for rebalancing in Australia. Asia-Pacific Journal of Teacher Education, 48(3), 301–315.
  3. Asare, M., & Manoj, S. (2014). Establishing validity and reliability of a health belief model and acculturation scale for measuring safe-sex and sexual communication behaviors among African immigrants for protecting against HIV/AIDS. Journal of Immigrant & Refugee Studies, 12, 191–209.
  4. Bell, K. (2021). Increasing undergraduate student satisfaction in Higher Education: The importance of relational pedagogy. Journal of Further and Higher Education, 46(4), 490–503.
  5. Borishade, T. T., Ogunnaike, O. O., Salau, O., Motilewa, B. D., & Dirisu, J. (2021). Assessing the relationship among service quality, student satisfaction and loyalty: The Nigerian higher education experience. Heliyon, 7(7), e07590.
  6. Boyraz, S., & Rüzgar, M. (2024). What digital competency tells us about E-learning satisfaction of pre-service teachers. European Journal of Education, 59, e12766.
  7. Bozu, Z., & Aránega, S. (2017). La formación inicial de maestros y maestras a debate: ¿qué nos dicen sus protagonistas? Profesorado Revista de Currículum y Formación del Profesorado, 21(1), 143–163.
  8. Buehl, M. M., & Alexander, P. A. (2001). Beliefs about academic knowledge. Educational Psychology Review, 13(4), 385–418.
  9. Bueno-Alastuey, M. C., & Villarreal, I. (2021). Pre-service teachers’ perceptions and training contributions towards ICT use. Estudios Sobre Educación, 41, 107–129.
  10. Caena, F. (2014). Initial teacher education in Europe: An overview of policy issues. European Commission. Available online: https://www.cedefop.europa.eu/files/2_13_4_teacher_careers_en.pdf (accessed on 18 May 2025).
  11. Candelas, C., Gurruchaga, M., Mejías, A., & Flores, L. (2013). Medición de la satisfacción estudiantil universitaria: Un estudio de caso en una institución mexicana. Iberoamerican Journal of Industrial Engineering, 9(5), 261–274.
  12. Cantón-Mayo, I., Cañón-Rodríguez, R., & Arias-Gago, A. R. (2013). La formación universitaria de los maestros de Educación Primaria. Revista Interuniversitaria de Formación del Profesorado, 27(1), 45–63.
  13. Cinque, S., & Rodríguez-Mantilla, J. M. (2020). Las competencias instrumentales en los futuros Maestros de Educación Primaria: Autopercepción y satisfacción con la formación recibida en estudiantes de la UCM. Profesorado Revista de Currículum y Formación de Profesorado, 24(3), 309–333.
  14. Clark, S. K., & Byrnes, D. (2015). What millennial preservice teachers want to learn in their training. Journal of Early Childhood Teacher Education, 36(4), 379–395.
  15. Clark, S. K., & Newberry, M. (2019). Are we building preservice teacher self-efficacy? A large-scale study examining teacher education experiences. Asia-Pacific Journal of Teacher Education, 47(1), 32–47.
  16. Darling-Hammond, L. (2010). Teacher education and the American future. Journal of Teacher Education, 61(1–2), 35–47.
  17. De Juanas Oliva, Á., del Pozo, R. M., & Ballesteros, M. G. (2016). Competencias docentes para desarrollar la competencia científica en educación primaria. Bordón Revista de Pedagogía, 68(2), 103–120.
  18. Dinçer, S. (2018). Are preservice teachers really literate enough to integrate technology in their classroom practice? Determining the technology literacy level of preservice teachers. Education and Information Technologies, 23(6), 2699–2718.
  19. Douglas, J., Douglas, A., & Barnes, B. (2006). Measuring students satisfaction at a UK university. Quality Assurance in Education, 4(3), 251–267.
  20. Duarte, E., Gouveia-Pereira, M., Gomes, H., & Sampaio, D. (2019). Social representations about the functions of deliberate self-harm: Construction and validation of a questionnaire for Portuguese adults. Behaviour Change, 36, 12–28.
  21. Edling, S., & Simmie, G. M. (2020). Democracy and teacher education: Dilemmas, challenges and possibilities. Routledge.
  22. Elliott, K., & Shin, D. (2002). Student satisfaction: An alternative approach to assessing this important concept. Journal of Higher Education Policy and Management, 24(2), 197–206.
  23. Ergun, E., & Adıgüzel, A. (2023). Recognizing predictors of students’ emergency remote online learning satisfaction during COVID-19. Education Sciences, 13(3), 269.
  24. Evagorou, M., Dillon, J., Viiri, J., & Albe, V. (2015). Pre-service science teacher preparation in Europe: Comparing pre-service teacher preparation programs in England, France, Finland and Cyprus. Journal of Science Teacher Education, 26, 99–115.
  25. Evers, A. T., Kreijns, K., & Van der Heijden, B. I. (2016). The design and validation of an instrument to measure teachers’ professional development at work. Studies in Continuing Education, 38(2), 162–178.
  26. Fernández, M., Rodríguez, J., & Cruz, F. (2016). Evaluación de competencias docentes del profesorado para la detección de necesidades formativas. Bordón Revista de Pedagogía, 68(2), 85–101.
  27. Fray, L., & Gore, J. (2018). Why people choose teaching: A scoping review of empirical studies, 2007–2016. Teaching and Teacher Education, 75, 153–163.
  28. Geier, M. T. (2020). Students’ expectations and students’ satisfaction: The mediating role of excellent teacher behaviors. Teaching of Psychology, 48(1), 9–17.
  29. Gento, S., & Vivas, M. (2003). El SEUE: Un instrumento para conocer la satisfacción de los estudiantes universitarios con su educación. Acción Pedagógica, 12(2), 16–27.
  30. Gento-Palacios, S. (2002). La evaluación de la satisfacción educativa en un enfoque de calidad institucional. In S. Castillo (Ed.), Compromisos de la Evaluación Educativa (pp. 353–391). Prentice Hall.
  31. Gil, P. (2018). Estudio sobre la inserción laboral de los egresados de la Universidad de Cantabria. Egresados del curso 2015–2016. Vicerrectorado de Ordenación Académica y Profesorado de la Universidad de Cantabria.
  32. Glick, P., Berdahl, J., & Alonso, A. (2018). Development and validation of the masculinity contest culture scale. Journal of Social Issues, 74, 449–476.
  33. Han, S. W. (2018). Who expects to become a teacher? The role of educational accountability policies in international perspective. Teaching and Teacher Education, 75, 141–152.
  34. Hanushek, E. A., Piopiunik, M., & Wiederhold, S. (2019). Do smarter teachers make smarter students? International evidence on teacher cognitive skills and student performance. Education Next, 19(2), 57–64.
  35. Herrera, L., Tomé, M., Perandones, T. M., & Sánchez-Sánchez, L. C. (2021). Estudio de seguimiento del alumnado universitario de grado y satisfacción de los empleadores. Análisis cuantitativo. In O. Lorenzo (Ed.), Inserción profesional y seguimiento de egresados. Una perspectiva multicultural (pp. 137–195). Síntesis.
  36. Hok, T., Daengdej, J., & Vongurai, R. (2021). Determinants of student satisfaction on continuing education intention: A case study of private university in Cambodia. AU-GSB e-Journal, 14(2), 40–50.
  37. Kaiser, H. F. (1991). Coefficient alpha for a principal component and the Kaiser-Guttman rule. Psychological Reports, 68(3), 855–858.
  38. Kanwar, A., & Sanjeeva, M. (2022). Student satisfaction survey: A key for quality improvement in the higher education institution. Journal of Innovation and Entrepreneurship, 11, 27.
  39. Landon-Hays, M., Peterson-Ahmad, M. B., & Frazier, A. D. (2020). Learning to teach. Education Sciences, 10(7), 184.
  40. Langan, A. M., & Harris, W. E. (2024). Metrics of student dissatisfaction and disagreement: Longitudinal explorations of a national survey instrument. Higher Education, 87, 249–269.
  41. Lecerf, T., & Canivez, G. L. (2018). Complementary exploratory and confirmatory factor analyses of the French WISC-V: Analyses based on the standardization sample: Correction to Lecerf and Canivez (2018). Psychological Assessment, 30(8), 1009.
  42. Llanes-Ordóñez, J., Méndez-Ulrich, J. L., & Montané-López, A. (2021). Motivación y satisfacción académica de los estudiantes de educación: Una visión internacional. Educación XX1, 24(1), 45–68.
  43. Lorenzo-Quiles, O., Galdón-López, S., & Lendínez-Turón, A. (2023). Factors contributing to university dropout: A review. Frontiers in Education, 8, 1159864.
  44. López-Roldán, P., & Fachelli, S. (2015). Metodología de la investigación social cuantitativa. Universidad Autónoma de Barcelona.
  45. Lubienski, C. A., & Brewer, T. J. (Eds.). (2019). Learning to teach in an era of privatization: Global trends in teacher preparation. Teachers College Press.
  46. Luque, T., del Barrio, S., Sánchez, J., Ibáñez, J., & Doña, L. (2015). Estudio de opinión de los Titulados de la Universidad de Granada. La inserción laboral en el Campus de Excelencia Internacional. BioTic. Año 2012. Fundación General UGR-Empresa CEI BioTic Granada.
  47. Manso, J., & Garrido-Martos, R. (2021). Los que empiezan: El reto pendiente de acompañar a docentes nóveles. Profesorado: Revista de Currículum y Formación del Profesorado, 25(2), 145–163.
  48. Marques, F., Hernández-Leo, D., & Castillo, C. (2025). Beyond bias in student satisfaction surveys: Exploring the role of grades and satisfaction with the learning design. Journal of New Approaches in Educational Research, 14, 9.
  49. McDonald, R. P. (1970). The theoretical foundations of Principal Factor Analysis and Alpha Factor Analysis. British Journal of Mathematical and Statistical Psychology, 23, 1–21.
  50. Medrano, L. A., & Pérez, D. (2010). Adaptación de la escala de satisfacción académica a la población universitaria de Córdoba. Summa Psicológica UST, 7(2), 5–14.
  51. Mejías, A., & Martínez, D. (2009). Desarrollo de un instrumento para medir la satisfacción estudiantil en educación superior. Docencia Universitaria, 10(2), 29–47.
  52. Ministerio de Ciencia, Innovación y Universidades. (2019). Inserción laboral de los egresados universitarios. Curso 2013–2014 (Análisis hasta 2018). Secretaría General Técnica del MCIU.
  53. Ministerio de Educación, Cultura y Deporte. (2014). TALIS: 2013. Estudio internacional de la enseñanza y el aprendizaje. Informe español. Available online: https://www.educacionfpydeportes.gob.es/inee/evaluaciones-internacionales/talis/talis-2013.html (accessed on 18 May 2025).
  54. Mireles-Vázquez, M. G., & García-García, J. A. (2022). Satisfacción estudiantil en universitarios: Una revisión sistemática de la literatura. Revista Educación, 46(2), 610–626.
  55. Mohammed, L. A., Aljaberi, M. A., Amidi, A., Abdulsalam, R., Lin, C.-Y., Hamat, R. A., & Abdallah, A. M. (2022). Exploring factors affecting graduate students’ satisfaction toward e-learning in the era of the COVID-19 crisis. European Journal of Investigation in Health Psychology and Education, 12, 1121–1142.
  56. Montero, I., & León, O. G. (2007). A guide for naming research studies in psychology. International Journal of Clinical and Health Psychology, 7, 847–862.
  57. Moorhouse, B. L. (2024). Beginning teaching during COVID-19: Newly qualified Hong Kong teachers’ preparedness for online teaching. Educational Studies, 50(4), 499–515.
  58. Moral-Sánchez, S. N., Ruiz-Rey, F. J., & Cebrián-de-la-Serna, M. (2023). Análisis de chatbots de inteligencia artificial y satisfacción en el aprendizaje en educación. International Journal of Educational Research and Innovation, 20, 1–14.
  59. Morin, A. J. S., Arens, A. K., Tran, A., & Caci, H. (2016). Exploring sources of construct-relevant multidimensionality in psychiatric measurement: A tutorial and illustration using the composite scale of morningness. International Journal of Methods in Psychiatric Research, 25(4), 277–288.
  60. Murphy, K. L., Liu, M., & Herzog, T. A. (2019). Confirmatory factor analysis and structural equation modeling of socio-cultural constructs among Chamorro and non-Chamorro micronesian betel nut chewers. Ethnicity & Health, 24(6), 724–735.
  61. Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. McGraw-Hill.
  62. OECD. (2019). TALIS 2018 results. Teachers and school leaders as lifelong learners. OECD.
  63. OECD. (2020). TALIS 2018 results. Teachers and school leaders as valued professionals (Vol. II). OECD.
  64. Olmeda, G. J., Guillén, M. T. F., & González, R. (2016). La formación inicial de los maestros de educación primaria en el contexto de la enseñanza bilingüe en lengua extranjera. Bordón Revista de Pedagogía, 68(2), 121–135.
  65. Orland-Barak, L., & Wang, J. (2021). Teacher mentoring in service of preservice teachers’ learning to teach: Conceptual bases, characteristics, and challenges for teacher education reform. Journal of Teacher Education, 72(1), 86–99.
  66. Pérez, E., Serrano, R., & Pontes, A. (2019). Analysis of science and technology pre-service teachers’ beliefs on the construction of the teachers’ professional identity during the initial training process. Eurasia Journal of Mathematics, Science and Technology Education, 15(10), em1756.
  67. Pérez-Juste, R. (2000). La calidad de la educación. In R. Pérez, F. López, M. Peralta, & P. Municio (Eds.), Hacia una educación de calidad. Gestión, instrumentos y evaluación (pp. 13–44). Narcea.
  68. Pinto-Santos, A. R., Pérez-Garcías, A., & Darder-Mesquida, A. (2022). Development of teaching digital competence in initial teacher training: A systematic review. World Journal on Educational Technology: Current Issues, 14(1), 1–15.
  69. Quispe-Prieto, S., Cavalcanti-Bandos, M. F., Caipa-Ramos, M., Paucar-Caceres, A., & Rojas-Jiménez, H. H. (2021). A systemic framework to evaluate student satisfaction in Latin American universities under the COVID-19 pandemic. Systems, 9(1), 15.
  70. Rasheed, R., & Rashid, A. (2024). Role of service quality factors in word of mouth through student satisfaction. Kybernetes, 53(9), 1854–2870.
  71. Rigdon, E. (1996). CFI versus RMSEA: A comparison of two fit indexes for structural equation modelling. Structural Equation Modeling: A Multidisciplinary Journal, 3(4), 369–379.
  72. Ruiz-Gutiérrez, B., Basilota, V., & González, P. (2021). Diseño y validación de un instrumento para la evaluación de la satisfacción de los estudiantes de prácticas en educación (ESEPE). Bordón, 73(1), 145–159.
  73. Sanusi, I. T., Ayanwale, M. A., & Tolorunleke, A. E. (2024). Investigating pre-service teachers’ artificial intelligence perception from the perspective of planned behavior theory. Computers and Education: Artificial Intelligence, 6, 100202.
  74. Servicio Integrado de Empleo de la Universidad Politécnica de Valencia. (2008). Formación y Empleo de los Titulados de la Universidad Politécnica de Valencia. Universidad Politécnica de Valencia.
  75. Stavridis, P., & Papadopoulou, V. (2022). The contribution of teaching practice to preservice teachers’ training—Empirical research of the department of primary education of western Macedonia University students’ evaluation. Educational Process: International Journal, 11(4), 92–111.
  76. Surdez, E. G., Sandoval, M. del C., & Lamoyi, C. L. (2018). Satisfacción estudiantil en la valoración de la calidad educativa universitaria. Educación y Educadores, 21(1), 9–26.
  77. Weinstein, C. S. (1989). Teacher education students’ preconceptions of teaching. Journal of Teacher Education, 40(2), 53–60.
  78. Wong, P. K. S., Wong, D. F. K., Zhuang, X. Y., & Liu, Y. (2017). Psychometric properties of the AIR self-determination scale: The Chinese version (AIR SDS-C) for Chinese people with intellectual disabilities. Journal of Intellectual Disability Research, 61(3), 233–244.
  79. Zabalza, M. A. (2011). El prácticum en la formación universitaria: Estado de la cuestión. Revista de Educación, 354, 21–43.
Figure 1. Confirmatory factor analysis diagram.
Table 1. Socio-demographic characteristics of the participants.

Gender (n = 97)
  Man: 9 (19.6%)
  Woman: 78 (80.4%)
Age (n = 96)
  25 years or less: 70 (72.9%)
  Between 26 and 30 years old: 16 (16.6%)
  31 years and over: 10 (10.5%)
Cultural origin (n = 95)
  European: 64 (67.4%)
  Amazight: 30 (31.5%)
  Sephardic: 1 (1.1%)
City of birth (n = 97)
  Melilla: 88 (90.7%)
  Another: 9 (9.3%)
City of residence (n = 97)
  Melilla: 94 (96.9%)
  Another: 3 (3.1%)
University entrance route (n = 97)
  From baccalaureate (PAU): 76 (78.4%)
  From COU (University Orientation Course) and other previous systems: 1 (1.0%)
  Entrance exams for over 25s: 1 (1.0%)
  From university degrees: 2 (2.1%)
  From foreign education systems: 1 (1.0%)
  From vocational training studies: 16 (16.5%)
Undergraduate degree (n = 97)
  Degree in early childhood education: 22 (22.7%)
  Degree in primary education: 42 (43.3%)
  Degree in social education: 33 (34.0%)
Table 2. Reliability of the factors extracted from the scale.

Coefficient | F1 | F2 | F3 | F4
Cronbach’s alpha | 0.864 | 0.772 | 0.802 | 0.862
McDonald’s omega | 0.868 | 0.777 | 0.810 | 0.850
Table 3. Results of the exploratory factor analysis (factor loadings and percentage of explained variance).

Item | F1 | F2 | F3 | F4 | Dimension
Quality of teaching in general | 0.754 | 0.100 | 0.153 | 0.302 | Theoretical training and teaching competence
Theoretical training | 0.729 | 0.231 | −0.104 | 0.190 |
Assessment criteria and tools | 0.694 | 0.366 | 0.226 | 0.107 |
Teacher attention outside the classroom (e.g., tutorials) | 0.669 | 0.132 | 0.219 | 0.188 |
Degree programme | 0.638 | 0.332 | 0.279 | 0.020 |
Skills and competences development | 0.591 | 0.481 | 0.219 | 0.058 |
Mastery of teaching staff | 0.535 | 0.395 | 0.098 | 0.183 |
Educational environment with other undergraduates | 0.510 | −0.073 | 0.482 | −0.127 |
Teacher communication in classroom | 0.294 | 0.687 | −0.005 | 0.103 | Academic support
Academic counselling and guidance | 0.349 | 0.655 | 0.296 | 0.070 |
Continuous evaluation of student work | 0.543 | 0.600 | 0.156 | 0.073 |
Courses and expertise | −0.075 | 0.586 | 0.178 | 0.387 |
Academic standards | 0.292 | 0.491 | 0.312 | 0.120 |
Practical work in other institutions (educational institutions, etc.) | 0.116 | 0.128 | 0.867 | 0.041 | Practical training
Practical training | 0.036 | 0.351 | 0.724 | 0.273 |
Preparation for future professional integration | 0.318 | 0.191 | 0.604 | 0.315 |
Teaching methodologies and activities | 0.389 | 0.388 | 0.406 | 0.224 |
Technological resources for teaching (audio-visual, multimedia, etc.) | 0.274 | 0.078 | 0.161 | 0.877 | Resources and facilities
Facilities (classrooms, seminars, etc.) | 0.178 | 0.146 | 0.109 | 0.848 |
Eigenvalues | 7.875 | 1.579 | 1.414 | 1.062 |
% variance explained | 41.448 | 8.310 | 7.441 | 5.588 |
% cumulative explained variance | 41.448 | 49.758 | 57.199 | 62.787 |
Table 4. Fit statistics for the EDSS Scale: NPAR, CMIN, DF, P, CMIN/DF, and GFI.

Model | NPAR | CMIN | DF | P | CMIN/DF | GFI
Default model | 63 | 248.698 | 146 | 0.000 | 1.703 | 0.847
Saturated model | 209 | 0.000 | 0 | — | — | 1.000
Independence model | — | 1031.075 | 190 | 0.000 | 5.427 | 0.000

NPAR: number of parameters for each model; CMIN: chi-square value; DF: degrees of freedom; P: the probability of obtaining a discrepancy as large as the CMIN value if the respective model is correct; GFI: goodness-of-fit index.
Table 5. Baseline comparison of the factor model: NFI, RFI, IFI, TLI, and CFI.

Model | NFI (Delta1) | RFI (rho1) | IFI (Delta2) | TLI (rho2) | CFI
Default model | 0.759 | 0.686 | 0.884 | 0.841 | 0.878
Saturated model | 1.000 | — | 1.000 | — | 1.000
Independence model | 0.000 | 0.000 | 0.000 | 0.000 | 0.000

NFI: normed fit index; RFI: relative fit index; IFI: incremental fit index; TLI: Tucker–Lewis index; CFI: comparative fit index.
Table 6. Root mean square error of approximation.

Model | RMSEA
Default model | 0.068
Independence model | 0.215