Consistency Analysis of Assessment Boards in University Entrance Examinations in Spain
Abstract
1. Introduction
1.1. Antecedents
1.2. The Present Study
2. Materials and Methods
2.1. Participants
2.2. Instruments
2.3. Procedure
2.4. Data Analysis
3. Results
4. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43, 561–573. [Google Scholar] [CrossRef]
- Baird, J.-A., Cresswell, M., & Newton, P. (2000). Would the real gold standard please step forward? Research Papers in Education, 15(2), 213–229. [Google Scholar] [CrossRef]
- Baird, J.-A., Godfrey-Faussett, T., Allan, S., Macintosh, E., Hutchinson, C., & Wiseman-Orr, L. (2024). Standards as a social contract in curriculum-based qualifications: Stakeholder views in Scotland. Cambridge Journal of Education, 54(4), 455–474. [Google Scholar] [CrossRef]
- Barkaoui, K. (2010). Variability in ESL essay rating processes: The role of the rating scale and rater experience. Language Assessment Quarterly, 7(1), 54–74. [Google Scholar] [CrossRef]
- Bejar, I. I. (2012). Rater cognition: Implications for validity. Educational Measurement: Issues and Practice, 31(3), 2–9. [Google Scholar] [CrossRef]
- Benton, T., & Elliott, G. (2016). The reliability of setting grade boundaries using comparative judgement. Research Papers in Education, 31(3), 352–376. [Google Scholar] [CrossRef]
- Benton, T., & Sutch, T. (2014). Analysis of the use of Key Stage 2 Data in GCSE predictions. Published Report. Ofqual. [Google Scholar]
- Blyth, K. (2014). Selection methods for undergraduate admissions in Australia: Does the Australian predominate entry scheme the Australian Tertiary Admission Rank (ATAR) have a future? Journal of Higher Education Policy and Management, 36, 268–278. [Google Scholar] [CrossRef]
- Bond, T. G., & Fox, C. M. (2007). Applying the Rasch model: Fundamental measurement in the human sciences (2nd ed.). Erlbaum. [Google Scholar]
- Coe, R. (2007). Common examinee methods for monitoring the comparability of examination standards. In P. Newton, J. Baird, H. Goldstein, H. Patrick, & P. Tymms (Eds.), Techniques for monitoring the comparability of examination standards. Qualifications and Curriculum Authority. [Google Scholar]
- Coe, R. (2008). Comparability of GCSE examinations in different subjects: An application of the Rasch model. Oxford Review of Education, 34(5), 609–636. [Google Scholar] [CrossRef]
- Coe, R. (2010). Understanding comparability of examination standards. Research Papers in Education, 25(3), 271–284. [Google Scholar] [CrossRef]
- Congdon, P. J., & McQueen, J. (2000). The stability of rater severity in large-scale assessment programs. Journal of Educational Measurement, 37(2), 163–178. [Google Scholar] [CrossRef]
- Cuxart-Jardí, A. (1996). Los sistemas de corrección de las pruebas de Selectividad en España [The grading systems of the university entrance examinations in Spain] [Unpublished draft report]. Universitat Pompeu Fabra. [Google Scholar]
- Cuxart-Jardí, A. (2000). Statistical models and assessment: Three studies in education. Revista de Educación, 323, 369–394. [Google Scholar]
- Cuxart-Jardí, A., & Longford, N. T. (1998). Monitoring the university admission process in Spain. Higher Education in Europe, 23(3), 385–396. [Google Scholar] [CrossRef]
- Cuxart-Jardí, A., Martí-Recober, M., & Ferrer-Julià, F. (1997). Algunos factores que inciden en el rendimiento y la evaluación en los alumnos de las pruebas de acceso a la universidad [Some factors related to students’ achievement and assessment in the university entrance examinations]. Revista de Educación, 314, 63–68. [Google Scholar]
- Eckes, T. (2005). Examining rater effects in TestDaF writing and speaking performance assessment: A multi-faceted Rasch analysis. Language Assessment Quarterly, 2(3), 197–221. [Google Scholar] [CrossRef]
- Eckes, T. (2009). Many-facet Rasch measurement. In S. Takala (Ed.), Reference supplement to the manual for relating language examinations to the Common European Framework of Reference for Languages: Learning, teaching, assessment (Section H). Council of Europe/Language Policy. Available online: https://rm.coe.int/1680667a23#search=eckes (accessed on 10 October 2024).
- Ehren, M. (2023). A conceptual framework for trust in standardised assessment: Commercial, quasi-market and national systems. European Journal of Education, 58(1), 11–22. [Google Scholar] [CrossRef]
- Engelhard, G., Jr. (1992). The measurement of writing ability with a many-facet Rasch model. Applied Measurement in Education, 5, 171–191. [Google Scholar] [CrossRef]
- Engelhard, G., Jr., & Wind, S. A. (2018). Invariant measurement with raters and rating scales. Routledge. [Google Scholar]
- Escudero-Escorza, T., & Bueno-García, C. (1994). Examen de selectividad. El estudio del tribunal paralelo [University entrance examination. The study of a parallel board]. Revista de Educación, 304, 281–298. [Google Scholar]
- European Commission/EACEA/Eurydice. (2009). National testing of pupils in Europe: Objectives, organization and use of results. Publications Office of the European Union. [Google Scholar] [CrossRef]
- Faura-Martínez, U., Lafuente-Lechuga, M., & Cifuentes-Faura, J. (2022). Territorial inequality in Selectivity? Analysing Mathematics in Social Sciences. Revista de Investigación Educativa, 40(1), 69–87. [Google Scholar] [CrossRef]
- Gaviria, J. L. (2005). La equiparación del expediente de Bachillerato en el proceso de selección de alumnos para el acceso a la universidad [The equivalence of the high school academic record in the university admission selection process]. Revista de Educación, 337, 351–387. [Google Scholar]
- Goberna, M. A., López, M. A., & Pastor, J. T. (1987). La predicción del rendimiento como criterio para el ingreso en la universidad [The prediction of achievement as a criterion to university access]. Revista de Educación, 283, 235–248. [Google Scholar]
- Guo, W., & Wind, S. A. (2021). Examining the impact of ignoring rater effects in mixed-format tests. Journal of Educational Measurement, 58(3), 364–387. [Google Scholar] [CrossRef]
- He, Q., Stockford, I., & Meadows, M. (2018). Inter-subject comparability of examination standards in GCSE and GCE in England. Oxford Review of Education, 44(4), 494–513. [Google Scholar] [CrossRef]
- Holmes, S. D., Meadows, M., Stockford, I., & He, Q. (2018). Investigating the comparability of examination difficulty using comparative judgment and Rasch modelling. International Journal of Testing, 18(4), 366–391. [Google Scholar] [CrossRef]
- Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. [Google Scholar] [CrossRef]
- Lamprianou, I. (2009). Comparability of examination standards between subjects: An international perspective. Oxford Review of Education, 35(2), 205–226. [Google Scholar] [CrossRef]
- Leckie, G., & Baird, J. A. (2011). Rater effects on essay scoring: A multilevel analysis of severity drift, central tendency, and rater experience. Journal of Educational Measurement, 48(4), 399–418. [Google Scholar] [CrossRef]
- Linacre, J. M. (1993). Many-facet Rasch measurement. MESA Press. [Google Scholar]
- Meadows, M., Baird, J.-A., Stringer, N., & Godfrey-Faussett, T. G. (2025). England’s qualifications system: The politics of resilience. Educational Assessment, Evaluation and Accountability, 37, 127–162. [Google Scholar] [CrossRef]
- Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). Focus article: On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62. [Google Scholar] [CrossRef]
- Myford, C. M., & Wolfe, E. W. (2009). Monitoring rater performance over time: A framework for detecting differential accuracy and differential scale category use. Journal of Educational Measurement, 46(4), 371–389. [Google Scholar] [CrossRef]
- Newton, P. (1997). Examining standards over time. Research Papers in Education, 12(3), 227–247. [Google Scholar] [CrossRef]
- Poskitt, J. (2023). Improve education provision in Aotearoa New Zealand by building assessment and learning capability. New Zealand Annual Review of Education, 28, 49–61. [Google Scholar] [CrossRef]
- Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Danish Institute for Educational Research. [Google Scholar]
- Robinson, C. (2007). Awarding examination grades: Current processes and their evolution. In P. Newton, J. Baird, H. Goldstein, H. Patrick, & P. Tymms (Eds.), Techniques for monitoring the comparability of examination standards (pp. 97–123). QCA. [Google Scholar]
- Rodríguez-Menéndez, M. C., Inda, M. M., & Peña-Calvo, J. V. (2014). Rendimiento en la PAU y elección de estudios científico-tecnológicos en razón de género [Achievement in the PAU and choice of scientific-technological studies by gender]. Revista Española de Orientación y Psicopedagogía, 25(1), 111–127. [Google Scholar] [CrossRef]
- Ruiz, J., Dávila, P., Etxeberría, J., & Sarasua, J. (2011). Pruebas de selectividad en matemáticas en la UPV-EHU. Resultados y opiniones de los profesores [University entrance examinations in mathematics at the UPV-EHU: Results and opinions from professors]. Revista de Educación, 362, 217–246. [Google Scholar]
- Stringer, N. S. (2012). Setting and maintaining GCSE and GCE grading standards: The case for contextualized cohort-referencing. Research Papers in Education, 27(5), 535–554. [Google Scholar] [CrossRef]
- Tomas, C., Whitt, E., Lavelle-Hill, R., & Severn, K. (2019). Modeling holistic marks with analytic rubrics. Frontiers in Education, 4, 89. [Google Scholar] [CrossRef]
- Veas, A., Benítez, I., Navas, L., & Gilar-Corbí, R. (2020). A comparative analysis of University Entrance Examinations using the construct comparability approach. Revista de Educación, 388, 65–83. [Google Scholar] [CrossRef]
- Volante, L., DeLuca, C., Barnes, N., Birenbaum, M., Kimber, M., Koch, M., Looney, A., Poskitt, J., Smith, K., & Wyatt-Smith, C. (2024). International trends in the implementation of assessment for learning revisited: Implications for policy and practice in a post-COVID world. Policy Futures in Education, 23(1), 224–242. [Google Scholar] [CrossRef]
- Watts, F., & García-Carbonell, A. (1999). Control de calidad en la calificación de la prueba de inglés de selectividad [Quality control in the scoring of the English university entrance exam]. Aula Abierta, 73, 173–190. [Google Scholar]
- Wolfe, E. W., & McVay, A. (2012). Application of latent trait models to identifying substantively interesting raters. Educational Measurement: Issues and Practice, 31(3), 31–37. [Google Scholar] [CrossRef]
- Wolfe, E. W., & Moulder, B. C. (1999, April 19–23). Examining differential reader functioning over time in rating data: An application of the multifaceted Rasch rating scale model. Annual Meeting of the American Educational Research Association, Montreal, QC, Canada. [Google Scholar]
- Wolfe, E. W., Moulder, B. C., & Myford, C. M. (2001). Detecting differential rater functioning over time (DRIFT) using a Rasch multi-faceted rating scale model. Journal of Applied Measurement, 2, 256–280. [Google Scholar]
- Wright, B. D., & Linacre, J. M. (1994). Reasonable mean-square fit values. Rasch Measurement Transactions, 8, 370. [Google Scholar]
- Wu, M. L., Adams, R. J., Wilson, M. R., & Haldane, S. A. (2007). ACER ConQuest, version 2.0: Generalized item response modelling software. Australian Council for Educational Research. [Google Scholar]
- Yan, X., & Chuang, P.-L. (2023). How do raters learn to rate? Many-facet Rasch modeling of rater performance over the course of a rater certification program. Language Testing, 40(1), 153–179. [Google Scholar] [CrossRef]
- Zabala-Delgado, J. (2021). A mixed-methods approach to study the effects of rater training on the scoring validity of local university high-stakes writing tests in Spain. In B. Lanteigne, C. Coombe, & J. D. Brown (Eds.), Challenges in language testing around the world (pp. 383–406). Springer. [Google Scholar]
Province | University | Number of Boards | Number of Students | Number of Students Included in This Study
---|---|---|---|---
Alicante | (1) | 11 | 3306 | 522
Alicante | (2) | 10 | 3403 | 509
Valencia | (3) | 15 | 5848 | 883
Valencia | (4) | 12 | 4885 | 752
Castellón | (5) | 6 | 2248 | 334
Total | | 54 | 19,690 | 3000
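The included-student counts run at roughly 15% of each university’s cohort (e.g., 522/3306 ≈ 15.8%; 883/5848 ≈ 15.1%), consistent with a stratified draw from the 19,690 candidates. The study does not publish its sampling procedure, and the counts are not exactly proportional, so the following is only a minimal sketch of proportional stratified sampling under that assumption; the dictionary keys, seed, and synthetic student IDs are illustrative.

```python
import random

# Candidate counts per university, taken from the table above;
# student IDs are synthetic stand-ins for the real records.
strata = {1: 3306, 2: 3403, 3: 5848, 4: 4885, 5: 2248}
total = sum(strata.values())  # 19,690
sample_size = 3000

random.seed(2025)  # illustrative seed
allocation = {u: round(sample_size * n / total) for u, n in strata.items()}
sample = {u: random.sample(range(n), allocation[u]) for u, n in strata.items()}

print(allocation)
# {1: 504, 2: 518, 3: 891, 4: 744, 5: 343} -- close to, but not equal to,
# the table's 522/509/883/752/334, so the study's allocation rule differed.
```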
Subject | U. Alicante Freq (%) | U. Miguel-Hernández Freq (%) | U. Valencia Freq (%) | Polytechnic School of Valencia Freq (%) | U. Jaume I Freq (%)
---|---|---|---|---|---
CAS | 470 (17.5) | 469 (17.5) | 796 (29.7) | 637 (23.8) | 308 (11.5)
HES | 470 (17.5) | 470 (17.5) | 797 (29.7) | 638 (23.8) | 308 (11.5)
ING | 463 (17.1) | 460 (17.5) | 778 (29.7) | 627 (23.9) | 294 (11.2)
MAT | 250 (17.8) | 234 (16.7) | 395 (28.1) | 352 (25.1) | 173 (12.3)
MCS | 170 (16.1) | 170 (16.1) | 350 (33.1) | 243 (22.9) | 126 (11.9)
VAL | 409 (17.2) | 320 (13.5) | 732 (30.8) | 617 (26.0) | 296 (12.5)
Board | Severity | Standard Error | Unweighted MNSQ | Unweighted CI | Unweighted T | Weighted MNSQ | Weighted CI | Weighted T
---|---|---|---|---|---|---|---|---
1-1 | 0.253 | 0.054 | 1.05 | (0.62–1.38) | 0.3 | 1.02 | (0.59–1.41) | 0.1 |
1-2 | −0.179 | 0.055 | 0.92 | (0.60–1.40) | −0.3 | 0.93 | (0.58–1.42) | −0.3 |
1-3 | 0.476 | 0.052 | 1.13 | (0.64–1.36) | 0.7 | 1.13 | (0.62–1.38) | 0.6 |
1-4 | 0.032 | 0.052 | 1.01 | (0.61–1.39) | 0.1 | 1.04 | (0.59–1.41) | 0.2 |
1-5 | −0.345 | 0.053 | 1.04 | (0.61–1.39) | 0.3 | 0.99 | (0.59–1.41) | −0.1 |
1-6 | 0.478 | 0.055 | 0.95 | (0.60–1.40) | −0.2 | 0.92 | (0.59–1.41) | −0.4 |
1-7 | 0.373 | 0.057 | 1.08 | (0.59–1.41) | 0.4 | 0.98 | (0.55–1.45) | 0.0 |
1-8 | 0.459 | 0.052 | 1.02 | (0.60–1.40) | 0.2 | 0.99 | (0.58–1.42) | 0.0 |
1-9 | 0.555 | 0.053 | 0.97 | (0.63–1.37) | −0.1 | 0.99 | (0.60–1.40) | −0.1 |
1-10 | 0.624 | 0.061 | 0.92 | (0.47–1.53) | −0.2 | 0.91 | (0.44–1.54) | −0.3 |
1-11 | −0.944 | 0.059 | 1.11 | (0.54–1.46) | 0.5 | 1.09 | (0.52–1.48) | 0.3 |
2-1 | −0.576 | 0.054 | 1.03 | (0.62–1.38) | 0.2 | 1.07 | (0.60–1.40) | 0.3 |
2-2 | 0.363 | 0.058 | 1.03 | (0.52–1.48) | 0.2 | 1.06 | (0.51–1.49) | 0.2 |
2-3 | −0.187 | 0.060 | 1.01 | (0.54–1.46) | 0.1 | 1.02 | (0.48–1.52) | 0.0
2-4 | −0.102 | 0.059 | 0.99 | (0.55–1.45) | 0.1 | 0.98 | (0.53–1.47) | −0.1 |
2-5 | −0.593 | 0.055 | 1.14 | (0.61–1.39) | 0.7 | 1.06 | (0.58–1.42) | 0.3 |
2-6 | 0.005 | 0.054 | 0.99 | (0.63–1.37) | 0.0 | 0.99 | (0.59–1.41) | 0.0 |
2-7 | −0.313 | 0.053 | 1.07 | (0.65–1.35) | 0.4 | 0.98 | (0.63–1.37) | −0.1 |
2-8 | 0.163 | 0.052 | 0.98 | (0.65–1.35) | −0.1 | 0.93 | (0.63–1.37) | −0.4 |
2-9 | −0.418 | 0.054 | 1.04 | (0.64–1.37) | 0.3 | 1.05 | (0.61–1.39) | 0.2 |
2-10 | 0.047 | 0.053 | 1.07 | (0.64–1.36) | 0.4 | 1.10 | (0.62–1.38) | 0.5 |
3-1 | −0.472 | 0.055 | 1.13 | (0.50–1.40) | 0.7 | 1.09 | (0.58–1.42) | 0.5 |
3-2 | 0.244 | 0.052 | 0.99 | (0.64–1.36) | 0.0 | 0.96 | (0.62–1.38) | −0.2 |
3-3 | −0.089 | 0.055 | 0.84 | (0.61–1.39) | −0.8 | 0.87 | (0.58–1.42) | −0.7 |
3-4 | −0.178 | 0.050 | 0.88 | (0.66–1.34) | −0.6 | 0.87 | (0.65–1.35) | −0.7 |
3-5 | 0.126 | 0.056 | 1.16 | (0.62–1.38) | 0.9 | 1.11 | (0.59–1.41) | 0.6 |
3-6 | 0.068 | 0.050 | 1.08 | (0.67–1.33) | 0.5 | 1.06 | (0.64–1.36) | 0.4 |
3-7 | −0.235 | 0.048 | 1.13 | (0.68–1.32) | 0.8 | 1.10 | (0.66–1.34) | 0.6
3-8 | 0.153 | 0.052 | 1.10 | (0.63–1.37) | 0.6 | 1.06 | (0.60–1.40) | 0.2 |
3-9 | −0.537 | 0.050 | 1.04 | (0.67–1.33) | 0.3 | 1.04 | (0.65–1.35) | 0.2 |
3-10 | −0.049 | 0.053 | 0.95 | (0.63–1.37) | −0.2 | 0.89 | (0.61–1.39) | −0.5 |
3-11 | 0.276 | 0.051 | 0.95 | (0.65–1.35) | −0.2 | 0.95 | (0.64–1.36) | −0.2 |
3-12 | 0.187 | 0.053 | 1.06 | (0.62–1.38) | 0.4 | 0.96 | (0.59–1.41) | −0.2 |
3-13 | −0.185 | 0.054 | 0.94 | (0.64–1.36) | −0.3 | 0.97 | (0.62–1.38) | −0.1 |
3-14 | 0.661 | 0.056 | 1.03 | (0.60–1.40) | 0.2 | 1.00 | (0.58–1.42) | 0.0 |
3-15 | −0.107 | 0.054 | 1.04 | (0.63–1.37) | 0.3 | 1.03 | (0.62–1.38) | 0.2 |
4-1 | 0.035 | 0.052 | 1.16 | (0.65–1.35) | 0.9 | 1.11 | (0.62–1.38) | 0.6 |
4-2 | 0.093 | 0.052 | 0.98 | (0.64–1.36) | 0.0 | 0.93 | (0.61–1.39) | −0.3 |
4-3 | 0.307 | 0.050 | 1.08 | (0.65–1.35) | 0.5 | 1.07 | (0.63–1.37) | 0.4 |
4-4 | −0.386 | 0.054 | 1.08 | (0.65–1.35) | 0.5 | 1.05 | (0.61–1.39) | 0.3 |
4-5 | 0.316 | 0.050 | 1.00 | (0.69–1.31) | 0.1 | 0.94 | (0.66–1.34) | −0.3 |
4-6 | −0.127 | 0.052 | 1.05 | (0.67–1.33) | 0.4 | 1.03 | (0.64–1.36) | 0.2 |
4-7 | −0.121 | 0.053 | 1.16 | (0.66–1.34) | 1.0 | 1.08 | (0.62–1.38) | 0.4 |
4-8 | 0.658 | 0.050 | 1.16 | (0.65–1.35) | 0.9 | 1.12 | (0.61–1.39) | 0.5 |
4-9 | 0.074 | 0.054 | 1.04 | (0.63–1.37) | 0.3 | 0.96 | (0.61–1.39) | −0.2 |
4-10 | −0.156 | 0.057 | 0.99 | (0.58–1.42) | 0.0 | 0.97 | (0.54–1.46) | −0.1 |
4-11 | 0.698 | 0.048 | 1.05 | (0.66–1.34) | 0.3 | 1.02 | (0.65–1.35) | 0.1 |
4-12 | −0.197 | 0.053 | 1.01 | (0.64–1.36) | 0.1 | 0.97 | (0.61–1.39) | −0.2 |
5-1 | 0.085 | 0.049 | 1.06 | (0.69–1.31) | 0.4 | 1.00 | (0.65–1.35) | 0.0 |
5-2 | −0.427 | 0.053 | 1.01 | (0.62–1.38) | 0.1 | 1.00 | (0.60–1.40) | 0.1 |
5-3 | 0.725 | 0.055 | 1.05 | (0.58–1.42) | 0.3 | 0.98 | (0.56–1.44) | −0.1 |
5-4 | −1.551 | 0.053 | 1.11 | (0.63–1.37) | 0.6 | 1.04 | (0.61–1.39) | 0.2 |
5-5 | 0.161 | 0.053 | 0.96 | (0.60–1.40) | −0.1 | 0.91 | (0.58–1.42) | −0.4 |
5-6 | −0.222 | 0.389 | 1.05 | (0.63–1.37) | 0.3 | 1.01 | (0.60–1.40) | 0.0 |
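The severity estimates above are on the logit scale of a many-facet Rasch model, the family of models this reference list draws on (Linacre, 1993; Andrich, 1978) and which software such as ACER ConQuest (Wu et al., 2007) estimates. A sketch of the standard rating-scale parameterization, with generic facet symbols rather than the study’s own notation:

```latex
% Many-facet rating-scale model (Linacre, 1993; Andrich, 1978):
% log-odds that student n, rated by board j on subject i,
% receives category k rather than k-1.
\log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \lambda_j - \tau_k
% \theta_n  : student proficiency (logits)
% \delta_i  : subject difficulty
% \lambda_j : board severity (the "Severity" column above)
% \tau_k    : threshold between adjacent rating categories
```

Under this parameterization, a board with severity 0.5 logits depresses a student’s expected score relative to a board at −0.5 by a full logit, whatever the student and subject.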
Subject | Severity | Standard Error | Unweighted MNSQ | Unweighted CI | Unweighted T | Weighted MNSQ | Weighted CI | Weighted T
---|---|---|---|---|---|---|---|---
CAS | −0.099 | 0.021 | 0.94 | (0.95–1.05) | −2.3 | 0.95 | (0.95–1.05) | −1.8 |
HES | −1.070 | 0.022 | 0.98 | (0.95–1.05) | −0.6 | 1.01 | (0.95–1.05) | 0.4 |
ING | −0.675 | 0.021 | 1.07 | (0.95–1.05) | 2.4 | 1.07 | (0.95–1.05) | 2.6 |
MAT | −0.015 | 0.024 | 0.96 | (0.93–1.07) | −1.1 | 0.99 | (0.91–1.09) | −0.2 |
MCS | 0.611 | 0.024 | 1.20 | (0.91–1.09) | 4.2 | 1.19 | (0.92–1.08) | 4.3 |
VAL | −0.736 | 0.097 | 0.95 | (0.94–1.06) | −1.7 | 0.96 | (0.94–1.06) | −1.3 |
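The MNSQ columns follow the usual Rasch residual-based definitions: the unweighted (outfit) statistic is the mean squared standardized residual, and the weighted (infit) statistic is the information-weighted mean square, with values near 1.0 indicating ratings consistent with the model and Wright and Linacre (1994) giving reasonable ranges by assessment type. A minimal sketch under those standard definitions; the arrays below are illustrative placeholders, not the study’s data.

```python
import numpy as np

def fit_mnsq(observed, expected, variance):
    """Unweighted (outfit) and weighted (infit) mean-square fit
    statistics for one facet element, from Rasch model residuals."""
    sq_resid = (observed - expected) ** 2
    outfit = np.mean(sq_resid / variance)    # mean squared standardized residual
    infit = sq_resid.sum() / variance.sum()  # information-weighted mean square
    return outfit, infit

# Illustrative placeholder values only:
obs = np.array([6.0, 7.5, 5.0, 8.0])  # awarded marks
exp = np.array([6.2, 7.1, 5.4, 7.6])  # model-expected marks
var = np.array([1.1, 0.9, 1.2, 1.0])  # model variance of each mark
print(fit_mnsq(obs, exp, var))  # values near 1.0 suggest adequate fit
```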
Subject | (1) Mean (SD) | (2) Mean (SD) | (3) Mean (SD) | (4) Mean (SD) | (5) Mean (SD)
---|---|---|---|---|---
CAS | −0.154 (0.387) | −0.075 (0.436) | −0.052 (0.462) | 0.224 (0.395) | 0.078 (0.704) |
HES | −0.061 (0.402) | −0.063 (0.578) | 0.072 (0.848) | −0.037 (0.595) | 0.145 (0.569) |
ING | 0.046 (0.602) | 0.008 (0.311) | −0.104 (0.412) | −0.041 (0.332) | 0.135 (0.824) |
MAT | 0.064 (0.710) | 0.055 (0.259) | 0.032 (0.428) | −0.153 (0.409) | 0.017 (0.800) |
MCS | 0.037 (0.700) | 0.063 (0.609) | 0.082 (0.377) | −0.201 (0.501) | 0.023 (0.587) |
VAL | 0.072 (0.937) | 0.734 (2.180) | 0.022 (0.435) | 0.266 (0.586) | 0.504 (0.551) |
© 2025 by the authors. Published by MDPI on behalf of the University Association of Education and Psychology. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).